Options for basing Dietary Reference Intakes (DRIs) on chronic disease endpoints: report from a joint US-/Canadian-sponsored working group1–3

Elizabeth A Yetley,4 Amanda J MacFarlane,5* Linda S Greene-Finestone,5 Cutberto Garza,6–8 Jamy D Ard,9 Stephanie A Atkinson,10 Dennis M Bier,11 Alicia L Carriquiry,12 William R Harlan,13 Dale Hattis,14 Janet C King,15–17 Daniel Krewski,18 Deborah L O’Connor,19,20 Ross L Prentice,21,22 Joseph V Rodricks,23 and George A Wells24

4Office of Dietary Supplements, NIH, Bethesda, MD; 5Bureau of Nutritional Sciences, Health Canada, Ottawa, Ontario, Canada; 6Boston College, Chestnut Hill, MA; 7Department of Global Health, George Washington University Milken Institute School of Public Health, Washington, DC; 8Department of International Health, Johns Hopkins University Bloomberg School of Public Health, Baltimore, MD; 9Wake Forest School of Medicine, Wake Forest University, Winston-Salem, NC; 10Department of Pediatrics, McMaster University, Hamilton, Ontario, Canada; 11Children’s Nutrition Research Center, Baylor College of Medicine, Houston, TX; 12Department of Statistics, Iowa State University, Ames, IA; 13Retired, Office of the Director, NIH, Bethesda, MD; 14The George Perkins Marsh Institute, Clark University, Worcester, MA; 15Children’s Hospital Oakland Research Institute, Oakland, CA; 16Department of Nutritional Sciences, University of California, Berkeley, Berkeley, CA; 17Department of Nutrition, University of California, Davis, Davis, CA; 18McLaughlin Centre for Population Health Risk Assessment, University of Ottawa, Ottawa, Ontario, Canada; 19Department of Nutritional Sciences, University of Toronto; 20The Hospital for Sick Children, Toronto, Ontario, Canada; 21Fred Hutchinson Cancer Research Center; 22School of Public Health, University of Washington, Seattle, WA; 23Ramboll-Environ International Corporation, Arlington, VA; and 24Department of Epidemiology and Community Medicine, University of Ottawa Heart Institute, Ottawa, Ontario, Canada
ABSTRACT
Dietary Reference Intakes (DRIs) are used in Canada and the United States in planning and assessing diets of apparently healthy individuals and population groups. The approaches used to establish DRIs on the basis of classical nutrient deficiencies and/or toxicities have worked well. However, it has proved to be more challenging to base DRI values on chronic disease endpoints; deviations from the traditional framework were often required, and in some cases, DRI values were not established for intakes that affected chronic disease outcomes despite evidence that supported a relation. The increasing proportions of elderly citizens, the growing prevalence of chronic diseases, and the persistently high prevalence of overweight and obesity, which predispose to chronic disease, highlight the importance of understanding the impact of nutrition on chronic disease prevention and control. A multidisciplinary working group sponsored by the Canadian and US government DRI steering committees met from November 2014 to April 2016 to identify options for addressing key scientific challenges encountered in the use of chronic disease endpoints to establish reference values. The working group focused on 3 key questions: 1) What are the important evidentiary challenges for selecting and using chronic disease endpoints in future DRI reviews, 2) what intake-response models can future DRI committees consider when using chronic disease endpoints, and 3) what are the arguments for and against continuing to include chronic disease endpoints in future DRI reviews? This report outlines the range of options identified by the working group for answering these key questions, as well as the strengths and weaknesses of each option. Am J Clin Nutr doi: 10.3945/ajcn.116.139097.
Keywords: Dietary Reference Intakes, chronic disease, intake
response, evidentiary challenges, evidence assessments
I. EXECUTIVE SUMMARY
Background
Dietary Reference Intakes (DRIs)21 represent a common set of reference intake values used in Canada and the United States in planning and assessing diets of apparently healthy individuals and population groups. Past expert committees that developed these reference values took into consideration the deficiencies, inadequacies, and toxicities of nutrients and related food substances as well as relevant chronic disease outcomes. The increasing proportions of elderly citizens, the growing prevalence of chronic diseases, and the persistently high prevalence of overweight and obesity, which predispose to chronic disease, in Canada and the United States highlight the importance of understanding the impact of nutrition on chronic disease prevention and control, and on health promotion.

1 This is a report based on working group meetings held between November 2014 and April 2016 and a public workshop titled “Options for Consideration of Chronic Disease Endpoints for Dietary Reference Intakes (DRIs)” held at the NIH in Bethesda, MD, 10–11 March 2015.
2 Supported by the Bureau of Nutritional Sciences, Health Canada; Office of Nutrition Policy and Promotion, Health Canada; the Social Determinants and Science Integration Directorate, Public Health Agency of Canada; the Office of Dietary Supplements, NIH; the Agricultural Research Service, USDA; the National Heart, Lung, and Blood Institute, NIH; the Center for Food Safety and Applied Nutrition, US Food and Drug Administration; and the National Center for Chronic Disease Prevention and Health Promotion, US CDC. This is a free access article, distributed under terms (http://www.nutrition.org/publications/guidelines-and-policies/license/) that permit unrestricted noncommercial use, distribution, and reproduction in any medium, provided the original work is properly cited.
3 The findings and conclusions in this article are those of the authors and do not necessarily represent the official views or positions of Health Canada, the US NIH, the USDA, the US Food and Drug Administration, or the US CDC.
* To whom correspondence should be addressed. E-mail: amanda.macfarlane@hc-sc.gc.ca.
21 Abbreviations used: AHRQ, Agency for Healthcare Research and Quality; AI, Adequate Intake; AMDR, Acceptable Macronutrient Distribution Range; AMSTAR, A Measurement Tool to Assess Systematic Reviews; CD, chronic disease; CDcancer, chronic disease risk reduction intake value for cancer; CDCVD, chronic disease risk reduction intake value for cardiovascular disease; CVD, cardiovascular disease; DRI, Dietary Reference Intake; EAR, Estimated Average Requirement; FNB, Food and Nutrition Board; GRADE, Grading of Recommendations Assessment, Development, and Evaluation; RCT, randomized controlled trial; RDA, Recommended Dietary Allowance; ROBINS, Risk of Bias in Nonrandomized Studies; SIGN 50, Scottish Intercollegiate Guidelines Network 50; UL, Tolerable Upper Intake Level; ULCD, chronic disease Tolerable Upper Intake Level.
doi: 10.3945/ajcn.116.139097.
The approaches that expert committees have used to establish
the DRIs usually worked well when these groups considered
classical nutrient deficiencies and/or toxicities. However, when
committees concluded that there was sufficient evidence to base
a reference value on a chronic disease endpoint, deviations from
the frameworks that were initially developed for DRI use were
often required. In some cases, committees were unable to es-
tablish reference values for intakes that affected chronic disease
outcomes despite evidence that supported relations between
intakes and chronic disease outcomes.
Current project
A multidisciplinary working group sponsored by Canadian and
US government DRI steering committees met from November
2014 to April 2016 to identify key scientific challenges that past
DRI committees encountered in the use of chronic disease
endpoints to establish reference values. The working group fo-
cused its discussions on 3 key questions:
1) What are the important evidentiary challenges for select-
ing and using chronic disease endpoints in future DRI re-
views?
2) What intake-response models can future DRI committees
consider when using chronic disease endpoints?
3) What are the arguments for and against continuing to in-
clude chronic disease endpoints in future DRI reviews?
Currently, DRIs apply to apparently healthy populations, but
changing demographics (e.g., an aging population) and health
status (e.g., increasing rates of obesity) suggest a possible need
for broader population coverage. Past DRIs generally focused on
intakes achievable by dietary strategies, but the growing ability to
modify intakes through fortification and supplementation is in-
creasingly relevant to future DRI development. In addition to
these evolving concerns, future DRI committees need to continue
to take into account the broad and diverse uses of DRIs when
considering options for DRIs, including those based on chronic
disease endpoints.
The sponsors asked the working group to identify a (not
necessarily exhaustive) range of options for answering each of the
key questions and the strengths and weaknesses of each option,
while keeping in mind current and future DRI contexts and uses.
The sponsors did not ask the group to reach a consensus on which
options have the highest priority. Final decisions about the
feasibility and options for specific approaches for deriving DRIs
on the basis of chronic disease outcomes will be made by a future
DRI committee.
Judging the evidence
The DRI process includes 2 key scientific decisions: 1)
whether the available evidence supports a causal relation be-
tween the food substance of interest and a selected outcome and,
2) if so, which DRIs are appropriate based on the available data.
DRI committees make these decisions for both beneficial and
adverse effects. In the current project, the outcome of interest is
a chronic disease.
Challenges in evaluating the evidence
When a DRI committee assesses whether the intake of a given
food substance is causally related to a chronic disease or attempts
to determine the nature of an intake-response relation between
a food substance and a chronic disease, it considers the char-
acteristics of individual study designs and overarching issues that
apply across different types of study designs. One of these
overarching issues is the risk of bias, which depends on the
design, conduct, and analysis of a study and is useful for assessing
whether evidence is likely to support a conclusion about a causal
relation. Randomized controlled trials (RCTs), when they are well
conducted and have adequate statistical power, can minimize or
eliminate many sources of bias, whereas observational studies are
more vulnerable to confounding and sample-selection bias.
Causality can be directly assessed with RCTs but must be inferred
or its likelihood assessed from observational studies.
In RCTs, the food-substance intervention is known. Ran-
domization increases the likelihood that measurement error or
bias associated with dietary intake assessment will be evenly
distributed among the groups. In contrast, assessing relations
between food substances and chronic diseases in observational
studies is particularly challenging because the assessment
of intake is most often based on self-reported dietary intakes,
which are subject to systematic bias, particularly intakes of
energy. Unlike RCTs, in which valid comparisons among ran-
domly assigned groups are possible without the use of dietary-
assessment data, the validity and usefulness of observational
studies depend on the accuracy and precision of the dietary
assessments these studies use. Systematic reviews and meta-
analyses, when they are well designed, can provide useful and
well-documented summaries of the evidence on a relation be-
tween food substances and chronic diseases. However, the use of
data from such analyses also requires caution because these
analyses have the same biases and confounding problems as the
original studies.
Which outcome measures a DRI committee selects for
assessing the causality of a relation between food substances and
chronic diseases is also important. It is possible to measure the
occurrence of a chronic disease of interest directly or indirectly.
Confidence that an observed relation between a food substance
and a chronic disease outcome is causal is greatest when a study
directly measures the chronic disease event or incidence. An
indirect measurement involves a substitute measure (e.g.,
a qualified surrogate disease marker such as LDL cholesterol or
a nonqualified disease marker such as carotid intima-media
thickness for coronary heart disease). Some uncertainty is as-
sociated with the use of qualified surrogate disease markers, and
considerable uncertainty is associated with the use of non-
qualified disease markers as outcome measures.
Tools for assessing the evidence
Tools are available to assess 1) individual study quality and 2)
the overall strength of the totality of the evidence. Tools to as-
sess individual study quality include the Bradford Hill criteria,
quality-assessment instruments, and risk-of-bias tools. Quality-
assessment instruments, such as the Scottish Intercollegiate
Guidelines Network 50 (SIGN 50) methodology, assess the
quality of a study from conception to interpretation. Risk-of-bias
tools assess the accuracy of estimates of benefit and risk in
RCTs and nonrandomized studies. Other tools evaluate the
quality of systematic reviews and meta-analyses [e.g., A Mea-
surement Tool to Assess Systematic Reviews (AMSTAR)] or
provide criteria for grading the evidence [e.g., Grading of
Recommendations Assessment, Development, and Evaluation
(GRADE)]. For DRI applications, reviewers might need to add
nutrition-specific measures to generic assessment tools when
they evaluate relations between food substances and chronic
diseases (e.g., information on baseline or background nutritional
status, assay methods used to measure biomarkers).
Options for addressing evidence-related challenges
An early challenge in the DRI decision-making process is the
identification of potentially useful measures (indicators) that
reflect a health outcome associated with the food substance of
interest. One option is to select an endpoint that is assessed as the
chronic disease event (i.e., chronic disease defined by accepted
diagnostic criteria) or by a qualified surrogate disease marker
(e.g., LDL cholesterol for coronary heart disease). An alternative
option would expand the types of outcome measures of chronic
disease to include nonqualified disease markers. This would
increase the number of relations between food substances and
chronic disease outcomes for which committees could establish
DRIs but is associated with considerable uncertainty as to
whether the relation of the food substance and the chronic disease
is causal.
Another challenge is to specify the acceptable level of con-
fidence in the data that a DRI committee uses to establish cau-
sality. The level of confidence is based on the type of endpoint
measured and the overall strength of the evidence. One option
is to specify an acceptable level of confidence (e.g., high or
moderate) in the validity of the results that must be met
before a reference value can be established. Another option is to
use the actual level of certainty (e.g., high, moderate, or low)
to describe the evidence associated with a given reference value.
A final option is to let committees make this decision on a case-
by-case basis.
Intake-response relations
Intake-response relations for classical nutrient requirements
and adverse events associated with excessive intakes differ from
those associated with chronic diseases. Traditional deficiency
relations are based on absolute risk, in which an inadequate intake
of the nutrient is both necessary and sufficient to cause a de-
ficiency and an adequate intake is both necessary and sufficient to
treat a deficiency. The intake-response relation between a nutrient
and a deficiency disease is linear or monotonic within the range of
inadequacy. In contrast, food substance–chronic disease relations
are often expressed as relative risks, in which the baseline risk of
a chronic disease is never zero and changes in intake may alter
risk by relatively small amounts. In addition, reductions in rel-
ative risk are achievable through >1 intervention, which means
that the food substance of interest may not be necessary or
sufficient to increase or decrease the relative risk of the disease.
The relation between a food substance and a chronic disease
indicator can be diverse (e.g., linear, monotonic, or nonmono-
tonic). A single food substance can have a causal relation with
>1 chronic disease, and intake-response curves for these dif-
ferent relations can differ.
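To make the contrast concrete, the following minimal sketch (in Python, with invented parameter values that are not drawn from any DRI report) illustrates how a relative-risk expression behaves: baseline risk is never zero, and changes in intake shift risk multiplicatively rather than eliminating it.

```python
import numpy as np

# Hypothetical log-linear model: each 10-unit increase in intake multiplies
# chronic disease risk by a constant factor. All values are illustrative.
baseline_risk = 0.08      # absolute 10-y risk at the reference intake (never zero)
rr_per_10_units = 0.90    # assumed relative risk per 10-unit increase in intake

intakes = np.arange(0, 60, 10)                     # increments above the reference intake
relative_risk = rr_per_10_units ** (intakes / 10)
absolute_risk = baseline_risk * relative_risk

for x, rr, ar in zip(intakes, relative_risk, absolute_risk):
    print(f"+{x:2d} units: RR = {rr:.2f}, absolute risk = {ar:.3f}")
# Unlike a deficiency endpoint, risk declines gradually and never reaches zero.
```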
Options for determining an acceptable level of confidence
Several options are available for determining the acceptable
level of confidence in the data that a DRI committee uses to
determine intake-response relations once it has data that establish
a causal relation. One option is to require a high level of con-
fidence by, for example, using RCTs with a chronic disease or
qualified surrogate disease marker as the outcome measure.
Another option is to accept a moderate level of confidence in the
data, which would allow for inclusion of data on chronic disease
outcomes or qualified surrogate markers of disease from ob-
servational studies. A third option is to “piece together” different
relations in which the outcome marker of interest is a common
factor when direct evidence of the outcome marker’s presence
on the causal pathway between the food substance and a chronic
disease is lacking. Therefore, if data show a quantitative relation
between a food-substance intake and the outcome marker of
interest and other data show a quantitative relation between the
outcome marker of interest and the chronic disease, this evi-
dence could be combined to establish a quantitative reference
intake value for the chronic disease risk, if the confidence in the
data is at an acceptable level.
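As an illustration of this “piecing together” option, the sketch below chains a hypothetical trial-derived intake–LDL-cholesterol slope to a hypothetical LDL–coronary heart disease relative risk. All numbers are placeholders for exposition, not estimates endorsed by the working group.

```python
# Step 1 (hypothetical): trials relate intake of the food substance to the
# outcome marker (LDL cholesterol). Step 2 (hypothetical): epidemiology
# relates the marker to chronic disease risk. Combining the two yields a
# quantitative intake-disease relation.
ldl_change_per_gram = -2.0    # mg/dL change in LDL per 1 g/d of the substance (assumed)
rr_per_mg_dl_ldl = 1.01       # relative risk of CHD per 1 mg/dL higher LDL (assumed)

def combined_rr(intake_g_per_day: float) -> float:
    """Relative risk of CHD at a given intake, versus zero intake, inferred
    through the surrogate-marker pathway."""
    ldl_change = ldl_change_per_gram * intake_g_per_day
    return rr_per_mg_dl_ldl ** ldl_change

for dose in (0, 5, 10, 20):
    print(f"{dose:2d} g/d -> combined RR = {combined_rr(dose):.3f}")
```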
Options for types of reference values
If data for an acceptable level of confidence are available,
a reference value based on chronic disease risk reduction can be
determined. The challenges presented by the use of chronic
disease endpoints to set reference values by using the traditional
framework suggest the need for different types of reference
values than are used for classical nutrient deficiencies and tox-
icities. For cases in which increasing intakes will reduce the risk
of a chronic disease, one option is to estimate a chronic disease
risk-reduction intake value [e.g., a chronic disease (CD) value for
reduced risk of cardiovascular disease (CVD) could be denoted as
CDCVD] that is specific to a chronic disease outcome and is
based on data reported as relative rather than absolute risk.
Within this type of approach, 3 possible adaptations are identi-
fied: 1) set a single chronic disease value at a level above which
higher intakes are unlikely to achieve additional risk reduction
for a specified disease (i.e., point estimate), 2) set multiple
reference values in relation to the expected degree of disease
risk reduction across a spectrum of intakes to give a “family of
targeted reductions,” or 3) set multiple chronic disease–related
values (e.g., CDCVD, CDcancer) if the food substance is related to
multiple diseases at different intakes. Another option is to ex-
multiple diseases at different intakes. Another option is to ex-
press reference intakes as ranges of beneficial intakes.
Options for the derivation of Tolerable Upper Intake Levels
(ULs) include the use of either one or both traditional adverse
events (i.e., toxicities) and chronic disease endpoints, depending
on the nature and strength of available evidence. One option is to
derive ULs on the basis of a threshold approach by using tra-
ditional adverse events, if the UL based on chronic disease risk
would be higher than a UL associated with a traditional adverse
effect. A second option is to use chronic disease endpoints to set
a UL in cases in which intakes associated with increased chronic
disease risk are at a level below those associated with traditional
adverse events. These values could be denoted as a chronic
disease UL (ULCD) to distinguish them from a traditional UL.
For this second option, approaches analogous to the derivation
of CD values (e.g., the development of 1 or multiple values for
specified levels of relative risk) or a threshold approach (e.g.,
identifying the inflection point at which absolute or relative risk
increases) could be used. When increased chronic disease risks
are observed over a range of intakes and the intake-response
curve shows an inflection point that supports a threshold effect,
the inflection point could be set as a ULCD. If there is no clear
inflection point, then a single ULCD value or a set of ULCD
values could be based on intakes that reduce risk at specified
levels with the acknowledgment that it may not be possible to
eliminate the targeted risk. Basing ULCD values on risk re-
duction or minimization rather than risk elimination would
further differentiate ULCD values from traditional UL values.
Such an option would entail the provision of adequate guidance
to users with regard to their uses and application. A third option
is to develop multiple values on the basis of both traditional
adverse events and chronic disease endpoints with guidance
provided to users with regard to the strengths and weaknesses of
derived values, and examples of their appropriate uses. For all
options, the feasibility of avoiding or minimizing the food
substance in the diet must be considered when there is no thresh-
old for risk.
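For the threshold-based options above, identifying an inflection point in an intake-response curve is essentially a changepoint-estimation problem. The sketch below, run on simulated data, locates a threshold by scanning candidate breakpoints for a two-segment (“hockey stick”) least-squares fit; it is one illustrative method, not one prescribed by the working group.

```python
import numpy as np

rng = np.random.default_rng(0)
intake = np.linspace(0, 100, 200)
true_threshold = 60.0
# Simulated relative risk: flat below the threshold, rising above it, plus noise.
risk = 1.0 + 0.03 * np.maximum(intake - true_threshold, 0)
risk = risk + rng.normal(0, 0.05, intake.size)

def sse_at(breakpoint: float) -> float:
    # Design matrix: intercept plus hinge term max(intake - breakpoint, 0).
    X = np.column_stack([np.ones_like(intake), np.maximum(intake - breakpoint, 0)])
    beta, *_ = np.linalg.lstsq(X, risk, rcond=None)
    resid = risk - X @ beta
    return float(resid @ resid)

candidates = np.linspace(10, 90, 81)
best = min(candidates, key=sse_at)   # breakpoint minimizing the residual error
print(f"Estimated threshold intake: {best:.1f} (simulated truth: {true_threshold})")
```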
Options for resolving overlaps between benefit and harm
Intake distributions for some food substances associated with
disease risk reduction might overlap with intake distributions
associated with adverse events, including higher chronic disease
risk. Several descriptive options are proposed for dealing with
this issue. One option is to ensure that no point estimate or range
of beneficial intakes for chronic disease risk reduction extends
beyond the intake at which the risk of adverse events, including
chronic diseases, increases. A second option is to predetermine
criteria related to the severity and prevalence of targeted chronic
diseases and the degree of change in the risk of specified intakes
required to set a reference value. A third option is to simply
describe the nature of the evidence and the public health im-
plications of benefits and risks across the full range of intakes in
which inferences are reasonably possible together with remaining
uncertainties. Users would choose an appropriate balance be-
tween benefit and harm for the population of concern.
Options for selecting an indicator or indicators and specifying
intake-response relations
Several possible options are identified to address examples of
challenges likely to be encountered when intake-response curves
are based on chronic disease endpoints. One possible approach is
to identify alternatives for addressing different types of outcome
markers [e.g., chronic diseases defined by accepted diagnostic
criteria (clinical diseases per se) compared with qualified sur-
rogate disease markers and nonqualified disease markers] to
derive intake-response relations. In this approach, several pos-
sible options are identified. One option is to select a single out-
come indicator on the causal pathway, provided that it is sufficiently
sensitive to quantify the relation between the food substance and
the chronic disease. Another option is to integrate information
from multiple indicators for a given chronic disease if they add
substantially to the accuracy of the intake-response relation and
reference value variation. A third option may be required when
a single food substance is related to multiple chronic disease
outcomes, each with a distinct intake-response relation. In this
case, criteria for selecting appropriate endpoints or surrogate
endpoints to establish intake-response relations, methods to in-
tegrate multiple endpoints, and methods to account for interin-
dividual variability in the relations of interest need to be
developed. Another option is to use a biological mode-of-action
framework instead of a statistical approach in establishing quan-
titative reference intakes.
In applying these possible approaches, several factors that
influence or confound quantitative intake-response relations need
to be considered. The accuracy of intake-response relations is
dependent on the accuracy of the measurements of intakes and
outcomes. Systematic bias due to substantial underreporting (e.g.,
of intakes, particularly energy intakes) is of particular concern. When
available, the use of qualified and accurately measured biomarkers
of nutrient and food-substance intakes may overcome biases
in self-reported intakes. Another factor relates to the common
problem of data being available on some, but not all, life-stage
groups for which DRIs are established. Two options for dealing
with this issue are identified. The first is to limit the establishment
of DRI values based on chronic disease endpoints to populations
that are identical or similar to the studied groups. Alternatively,
extrapolation could be considered when sufficient evidence is
available that specific intakes of a food substance can increase or
decrease the risk of a chronic disease.
DRI process
Arguments for or against including chronic disease endpoints
in future DRIs
Evidence-based reference intake values and/or recommenda-
tions with regard to food substances causally related to the
chronic diseases are desirable from public health and clinical
perspectives. Yet, despite the growing chronic disease burden and
continued use of DRIs, substantial challenges persist related
to both the paucity of sufficiently relevant and robust evidence for
evaluating putative causal relations between intakes and a chronic
disease and the often-poor fit of the current Estimated Average
Requirement (EAR)/Recommended Dietary Allowance (RDA)
and UL frameworks for deriving DRIs on the basis of chronic
disease endpoints. There is a clear desire to include chronic
disease endpoints in the DRIs; however, the challenges reviewed
in this report underscore the fact that the broader incorporation
of chronic disease endpoints requires more sophisticated ap-
proaches than those previously used. These must also include
approaches to issues concerning processes and starting points.
Options for process components
The current DRI values were set by a process that reviews
a group of essential nutrients and related food substances and
clearly focuses on intakes required for health maintenance and
chronic disease risk reduction. Two possible options for orga-
nizing future reviews and derivations of DRIs based on chronic
disease endpoints are identified. The first option is to continue
incorporating chronic disease endpoint considerations in future
DRI reviews but to expand the types of reference values that
could be set, while clearly differentiating between values based
on classical nutrient adequacy and chronic disease endpoints. A
second option is to create 2 separate but complementary, and
possibly iterative and/or integrated, processes for the development
of reference values on the basis of chronic disease endpoints and/or
deficiency diseases. For example, a review could be initiated
specifically to set DRIs on the basis of chronic disease endpoints,
or an existing independent process could be used.
Options for starting point
The starting point of current DRI processes is individual food
substances, and all pertinent outcomes related to varying intakes
of given food substances are considered. If chronic disease
endpoints are to be considered, one option is to focus on in-
dividual food substances or small groups of interrelated nutrients,
an approach that is similar to the current DRI process. Con-
versely, another option is to focus on a specific chronic disease
and its relation with multiple food substances.
Forthcoming tools
Examples are discussed of forthcoming tools and novel study
designs with potential utility in overcoming anticipated hurdles,
such as complexities related to multiple, interactive etiologies
and longitudinal characteristics of chronic diseases. These in-
clude the identification and use of new dietary intake biomarkers,
the potential for the use of Mendelian randomization studies to
inform causality, the use of U-shaped dose-risk relation modeling
based on severity scoring and categorical regression analysis,
consideration of enhanced function endpoints, the use of systems
science, and the application of principles subsumed under the
umbrella of precision medicine.
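As one concrete example of these tools, a Mendelian randomization analysis in its simplest form divides a genetic variant’s effect on the disease by its effect on the exposure (the Wald ratio) to estimate a causal effect. The sketch below uses invented summary statistics purely for illustration.

```python
# Minimal Wald-ratio sketch with invented per-allele effect estimates.
beta_snp_exposure = 0.15    # per-allele effect of a variant on the intake biomarker
beta_snp_disease = -0.03    # per-allele effect of the same variant on log relative risk

wald_ratio = beta_snp_disease / beta_snp_exposure
print(f"Estimated change in log relative risk per unit of biomarker: {wald_ratio:.2f}")
# A negative estimate would be consistent with the biomarker (and, by extension,
# intake) causally reducing disease risk, subject to the usual MR assumptions.
```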
Conclusions
The development of the DRIs has proven to be critical for the
successful elimination of diseases of deficiency in Canada and the
United States. If the DRI framework could be improved to more
effectively incorporate chronic disease outcomes, the potential
impact on public health would be even greater. The next steps are
to assess the feasibility of including chronic disease endpoints in
future DRI reviews, to evaluate the relevance and appropriateness
of expanding DRIs to populations beyond those currently tar-
geted, and to determine which of the options and/or their ad-
aptations identified in this report may warrant inclusion in
a future chronic disease DRI framework.
II. BACKGROUND
DRIs are a common set of reference intake values that the
Canadian and US governments, individuals, and organizations
use for planning and assessing the diets of apparently healthy
individuals and populations (1–3). The Food and Nutrition Board
(FNB) periodically convenes ad hoc expert committees to de-
velop DRIs for specified food substances. DRIs are guides for
achieving safe and adequate intakes of nutrients and other
food substances from foods and dietary supplements. The DRI
committees establish DRIs within a public health context for the
prevention of nutrient deficiencies, for reduction in risk of other
diseases, and for the avoidance of potential adverse effects of
excessive intakes. DRIs are available for 22 groups based on
age, sex, pregnancy, and lactation in apparently healthy pop-
ulations. Future DRI committees might need to review whether
the population coverage should be expanded to include mor-
bidities of high prevalence.
The definition of “food substances” for this report is provided
in Text Box 1. Future DRI committees might find it useful to
review and revise this definition.
Previous DRI committees have used the term “apparently
healthy populations” as defined in Text Box 2.
There is no single uniform definition of “chronic disease” (4)
and defining this concept for DRI evaluations, although highly
relevant, is outside this project’s scope. Future DRI committees
will probably need to define this term. Existing definitions of
this term differ with respect to whether a chronic disease requires
medical attention, affects function, has multiple risk factors, or can
be cured. There are many definitions of chronic disease, several
examples of which are shown in Text Box 3.
History of nutrient intake reference values
The establishment of quantitative nutrient intake reference
values in the United States and Canada began around 1940 with
a single type of reference value in each country: 1) the Rec-
ommended Nutrient Intakes, or RNIs, for Canadians and 2) the
RDAs for the United States (1). These values were the intakes of
essential nutrients that the experts who developed them expected
would meet the known nutrient needs of practically all healthy
persons.
In 1994, an FNB committee recommended that future intake
reference values reflect more explicit statistical constructs of
distributions of requirements across individuals (7). As a result,
DRI committees began deriving reference values from population-
specific estimates of average requirements (EARs) and associ-
ated population variability (RDAs) (1, 3). This approach allowed
DRI users to calculate the prevalence of inadequacy in pop-
ulations and the probability of inadequacy in individuals (1, 8–10).
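A minimal sketch of this statistical construct, assuming normally distributed requirements and illustrative values: under normality the RDA is conventionally set 2 SDs above the EAR, and the probability that a given usual intake is inadequate for an individual follows directly from the requirement distribution.

```python
from statistics import NormalDist

ear = 100.0                     # Estimated Average Requirement (units/d; illustrative)
sd_requirement = 10.0           # between-person SD of the requirement (illustrative)
rda = ear + 2 * sd_requirement  # conventional RDA under normality: EAR + 2 SD

requirement = NormalDist(mu=ear, sigma=sd_requirement)
for intake in (90.0, 100.0, 120.0):
    p_inadequate = 1 - requirement.cdf(intake)   # P(requirement exceeds intake)
    print(f"usual intake {intake:5.1f}: P(inadequate) = {p_inadequate:.2f}")
# At the group level, the EAR cut-point method estimates the prevalence of
# inadequacy as the proportion of usual intakes falling below the EAR.
```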
Text Box 1
Food substances consist of nutrients that are essential or
conditionally essential, energy nutrients, or other naturally
occurring bioactive food components.
Text Box 2
DRIs are reference intakes for apparently healthy pop-
ulations. DRI intake levels are not necessarily sufficient for
individuals who are malnourished, have diseases that result
in malabsorption or dialysis treatments, or have increased
or decreased energy needs because of disability or de-
creased mobility (1).
The FNB committee also recommended adding a reference
value that reflects an upper safe level of intake (UL) (7, 11). All
DRI reports published after 1996 implemented these recom-
mendations (Table 1). However, with the progressive im-
plementation of the revised DRI process, the committees that
produced these reports recognized that the EAR and RDA
model and the UL model were inappropriate for some outcomes
of interest. Therefore, DRI committees added new reference
values, as follows: 1) Adequate Intake (AI), 2) Acceptable Macro-
nutrient Distribution Range (AMDR), and 3) Estimated Energy
Requirement, or EER (Table 1).
In response to evolving science that suggests beneficial effects
of diets and dietary components in reducing the risk of chronic
disease (12), the 1994 FNB committee also recommended that
DRI committees include reduction in the risk of chronic disease
in the formulation of future reference values when sufficient data
on efficacy and safety are available (7). All 7 subsequently
published DRI reports placed a high priority on an evaluation of
potential chronic disease endpoints for all of the nutrients they
reviewed (13, 14). However, these panels based only a limited
number of DRIs on chronic disease endpoints: dietary fiber and
coronary heart disease, fluoride and dental caries, potassium and
both hypertension and kidney stones, and sodium and CVD (15).
Uses of DRIs
The uses of reference intake values have expanded consid-
erably beyond the original intent of helping governments plan
and evaluate nutrition programs and policies. Uses now include
general nutrition education and guidance for the public, dietary
management of clinical patients, identification of research gaps
and priorities, research design and interpretation, food product
development, regulatory applications, and guidance for inter-
national and other organizational reference values.
The evolving range of diverse uses and users of reference
intake values underscores the need for the transparent docu-
mentation of scientific decisions made by DRI committees and
for reference intake values that lend themselves to a wide range of
applications. DRI reports focus on the scientific and public health
aspects of the intakes of nutrients and food substances, but they
do not make policy recommendations, with one notable excep-
tion. The 1997 amendments to the US Food, Drug, and Cosmetic
Act mandated that food manufacturers could use “authoritative
statements” from certain scientific bodies, including the National
Academies of Sciences, Engineering, and Medicine, as health
claims on food labels in the US marketplace without undergoing
usual US Food and Drug Administration review and authorization
procedures (16). This latter policy is not operative in Canada.
TABLE 1
DRIs and their definitions1

Based on 1994 Food and Nutrition Committee recommendations
  EAR: The average daily nutrient intake level that is estimated to meet the requirements of half of the healthy individuals in a particular life stage and sex group.
  RDA: The average daily dietary nutrient intake level that is sufficient to meet the nutrient requirements of nearly all (97–98%) healthy individuals in a particular life stage and sex group.
  UL: The highest average daily nutrient intake level that is likely to pose no risk of adverse health effects for almost all individuals in the general population. As intake increases above the UL, the potential risk of adverse effects may increase.
Added by DRI committees in 1994–2011
  AI: The recommended average daily intake level based on observed or experimentally determined approximations or estimates of nutrient intake by a group (or groups) of apparently healthy people that are assumed to be adequate; used when an RDA cannot be determined.
  AMDR: The range of intakes of an energy source that is associated with a reduced risk of chronic disease, yet can provide adequate amounts of essential nutrients; expressed as a percentage of total energy intake.
  EER: The average dietary energy intake that is predicted to maintain energy balance in a healthy adult of a defined age, sex, weight, height, and level of physical activity consistent with good health. In children and pregnant and lactating women, the EER includes the needs associated with the deposition of tissues or the secretion of milk at rates consistent with good health.

1 From reference 1. AI, Adequate Intake; AMDR, Acceptable Macronutrient Distribution Range; DRI, Dietary Reference Intake; EAR, Estimated Average Requirement; EER, Estimated Energy Requirement; RDA, Recommended Dietary Allowance; UL, Tolerable Upper Intake Level.
Text Box 3
Examples of definitions of chronic diseases
WHO: Noncommunicable diseases, also known as chronic
diseases, are not passed from person to person. They are
of long duration and generally slow progression. The 4
main types of noncommunicable diseases are CVDs, can-
cers, chronic respiratory diseases, and diabetes (5).
US Department of Health and Human Services: Chronic
illnesses are conditions that last ≥1 y and require on-
going medical attention and/or limit activities of daily
living (4).
Institute of Medicine Biomarkers Committee: A chronic dis-
ease is a culmination of a series of pathogenic processes
in response to internal or external stimuli over time that
results in a clinical diagnosis or ailment and health out-
comes (e.g., diabetes) (6).
Report overview
This report, in section III, provides an overview of the current
project, whose purpose is to critically evaluate key scientific
challenges in the use of chronic disease endpoints to establish
reference intake values. Section IV describes the framework that the
working group used as background information for this project.
Sections V-A, V-B, and V-C describe options that the working
group identified to assess evidentiary challenges related to de-
termining whether relations between food substances and targeted
chronic diseases are causal. Options for establishing intake-response
relations between food substances and chronic disease endpoints are
the focus of section VI. Section VII addresses considerations for future
DRI committee processes, and section VIII discusses some forth-
coming tools that could be applied to the establishment or application
of DRI values based on chronic disease endpoints. Section IX offers
a few conclusions and next steps.
III. CURRENT PROJECT
This section describes the rationale for this project as well as its
objectives and key questions. Motivations for the project were
well-established links between diet and health throughout the life
course and the expectation that evidence-based changes in the
intakes of food substances would enhance well-being and reduce
disease risk. The broad application of reference intake values,
increasing rates of chronic diseases among US and Canadian
populations, growing financial and quality-of-life burdens repre-
sented by that dynamic, and shortcomings of the EAR/RDA and
UL models provided additional reasons to undertake this effort.
Several US and Canadian government agencies are continuing DRI-
related harmonization efforts initiated in the mid-1990s by jointly
sponsoring the current project. These agencies convened a working
group with a broad and diverse range of scientific and DRI experience
(Table 2). The group had numerous discussions via conference
calls and at a public workshop (17). The sponsors also solicited
public comment on the working group deliberations.
The focus of the current project was on the relation between
food-substance intakes and chronic disease endpoints. The
working group applied elements of the traditional DRI-related
context to its work: a prevention (public health) orientation,
intakes that are achievable within a dietary context (and, in a few
highly selected cases, through dietary supplements, such as folate
supplements during pregnancy), and primary applicability to the
apparently healthy population.
Objectives
One objective of this project was to critically evaluate key
scientific issues involved in the use of chronic disease endpoints
to establish reference intake values. A second objective was to
provide options for future decisions about whether and/or how
to incorporate chronic disease endpoints into the process for
establishing DRI values. The sponsors asked the working group
not to try to reach consensus on which options were best, but
rather, to identify a range of options and their strengths and
weaknesses. None of the options in this report excludes other
possibilities, and the order of presentation or amount of space
devoted to each option is not intended to convey relative pri-
orities. Subsequent expert groups will make final decisions
about future DRI approaches to chronic disease endpoints. The
key scientific decisions that are the backbone of DRI de-
velopment (Table 3) provided context for the working group’s
discussions.
The working group identified a (not necessarily exhaustive)
range of options for answering each of 3 key questions and
identifying strengths and weaknesses of each option, while keeping
in mind current and future DRI uses. The key questions are listed in
the following sections.
TABLE 2
Working group members and their institutions

Jamy D Ard, MD: Associate Professor, Wake Forest School of Medicine, Wake Forest University
Stephanie Atkinson, PhD, FCAHS: Professor, Department of Pediatrics, McMaster University
Dennis M Bier, MD: Professor of Pediatrics, and Director, Children’s Nutrition Research Center, Baylor College of Medicine
Alicia L Carriquiry, PhD: Distinguished Professor, Department of Statistics, Iowa State University
Cutberto Garza, MD, PhD (Chair): Professor, Boston College, and Visiting Professor, George Washington University Milken Institute School of Public Health and Johns Hopkins University
William R Harlan, MD, FACP, FACPM, FAAFP, FAHA: Research Consultant (retired), NIH
Dale B Hattis, PhD: Research Professor, The George Perkins Marsh Institute, Clark University
Janet C King, PhD: Executive Director, Children’s Hospital Oakland Research Institute, and Professor Emeritus, University of California, Berkeley and Davis
Daniel Krewski, PhD: Professor and Director, McLaughlin Centre for Population Health Risk Assessment, University of Ottawa
Deborah L O’Connor, PhD, RD: Professor, Department of Nutritional Sciences, University of Toronto, and Senior Associate Scientist, The Hospital for Sick Children
Ross L Prentice, PhD: Member, Public Health Sciences Division, Fred Hutchinson Cancer Research Center, and Professor of Biostatistics, University of Washington
Joseph V Rodricks, PhD, DABT: Principal, Ramboll-Environ International Corporation
George A Wells, PhD, MSc: Professor, Department of Epidemiology and Community Medicine, University of Ottawa Heart Institute
Key question 1: What are the important evidentiary challenges
for selecting and using chronic disease endpoints in future DRI
reviews?
The types of scientific evidence in the DRI-development
process that are necessary to establish the essentiality of nutrients
differ from the type of evidence needed to evaluate relations
between food substances and chronic diseases (7). A key chal-
lenge is the limited availability of RCTs that are designed to
establish that a food substance of interest is causally related to
a given chronic disease outcome. A much larger body of evidence
based on prospective cohort and other observational studies is
available that shows associations between food substances and
chronic diseases, but common study design limitations in such
instances make it challenging to determine causality (18). The
availability of studies that measured functional and other in-
termediate biomarkers (including qualified surrogate disease
markers and nonqualified disease markers) of chronic disease risk
has strengthened the ability to determine the utility of different
study designs and endpoints for accurately predicting the impact
of reference intakes on chronic disease outcomes (6).
The availability of recently developed evaluation tools and
techniques (e.g., SIGN 50 methodology) (19) and grading tools
(e.g., GRADE) (20) have enhanced the ability to assess the
quality of individual studies and the overall strength of the totality
of the available evidence. Although developers did not design and
validate these types of tools for DRI applications (21), DRI
committees can adapt them for DRI applications to help address
the evidentiary challenges that are discussed more fully in sec-
tions V-A, V-B, and V-C.
A re-evaluation of the appropriateness of chronic disease end-
points and development of criteria for their use is timely because of
the substantive knowledge base that has emerged in recent decades
on relations between food substances and chronic diseases. The
working group focused on options for addressing evidentiary
challenges that future DRI committees must consider when they
evaluate and select chronic disease endpoints.
Key question 2: What intake-response models can future DRI
committees consider when using chronic disease endpoints?
The DRI intake-response relation models best equipped to deal
with deficiency endpoints often are not appropriate for chronic
disease endpoints (13, 22). For the purpose of this report, “intake”
refers to intake exposure to a food substance. “Intake-response
relation” refers to the impact on physiologic processes of a range
of dietary intakes. Related challenges include difficulties in the
use of nutrient-status indicators (e.g., serum nutrient concen-
trations) to estimate optimal intakes on the basis of chronic
disease endpoints. In addition, it is often difficult to use the
relative risk data commonly available on relations between
food substances and chronic diseases to calculate a population
average and variance, as is necessary for deriving EARs and
RDAs. DRI committees have generally found AIs to be useful
when basing reference values on chronic disease endpoints, but DRI users have
found AIs difficult to apply when assessing and planning diets
for groups (13).
DRI committees have also encountered challenges in basing
ULs on chronic disease endpoints. These committees did find
convincing evidence that higher intakes of several food sub-
stances were associated with increased risks of certain chronic
diseases. However, the absence of an apparent threshold effect for
the associated intake-response relations resulted in either failure
to establish a UL or the establishment of an arbitrary UL on the
basis of considerations other than the traditional model for
establishing DRIs (23, 24). It is therefore important to identify
other approaches and models for deriving quantitative reference
values that are related to both benefits and risks of food-substance
intakes for chronic disease outcomes.
Key question 3: What are the arguments for and against
continuing to include chronic disease endpoints in future DRI
reviews?
The 1994 FNB committee was concerned about the need to
consider differences among relations between nutrients and
diseases of deficiency compared with those between food sub-
stances and chronic diseases in decisions about whether to
combine these 2 types of relations or to address them separately
(7). Subsequent evaluations of the DRI process have continued to
TABLE 3
DRI decisions and considerations1
1. Causality: Is the relation between the food substance and the chronic
disease or diseases causal?
a. Objective assessment of the relevance and robustness of available
studies
b. Clear identification of the putative benefit or increased risk ascribed to
targeted food substance or substances (e.g., amelioration or
exacerbation of absolute or relative risks, level of severity)
c. Selection of candidate chronic disease outcomes (e.g., chronic disease
event, surrogate disease marker, nonqualified outcome) that reflects
targeted causal relations
d. Delineation of uncertainties related to determination of causality
e. Evaluation of challenges likely to be encountered because of the
extrapolation of causality from studied to unstudied groups
2. Intake-response relation: What is an appropriate DRI value (provided
that causality has already been determined)?
a. Objective assessment of the relevance and robustness of available
evidence
b. Determination of the type of reference value that is most appropriate
given the available data (e.g., mean ± variances, ranges) and user
needs (e.g., planning or assessment for individuals or groups)
c. Selection of candidate indicators for establishing an intake-response
relation (i.e., endpoints for quantification)
i. What are the complexities of the intake-response relation (e.g., linear,
curvilinear, overlapping of benefit, or increased risk curves)?
ii. What are the characteristics of possible indicators (e.g., chronic
disease event or biomarker relative to the causal pathway between
intake and the chronic disease)?
d. Identification of statistical models or other approaches (e.g., statistical,
population-derived) to quantify the relation
e. Delineation of uncertainties in the available data
f. Identification of adjustments that may be necessary (e.g., about
bioavailability, bias in exposure, outcome measures)
g. Evaluation of challenges likely to be encountered in the extrapolation of
a reference intake value from studied to unstudied groups
1 Evaluations of the effect of increasing intakes on both benefit (i.e.,
decreased risk of chronic disease) and safety (i.e., increased risk of chronic
disease) as intakes increase are a core part of the DRI review process.
Although DRI committees often review benefit and safety separately, the
generic nature of the issues they must address in their review are likely to be
the same for both types of review. This report focuses on the key questions
related to causality and the intake-response relation. DRI, Dietary Reference
Intake.
question whether a single process or separate processes are most
appropriate for this purpose (13, 22).
IV. CURRENT PROJECT FRAMEWORK
This section describes the framework that the working group
used in its reviews and deliberations. Chronic diseases are the
leading cause of death and disability in the United States and
Canada, and they account for a major proportion of health care
costs (25, 26). Globally, 38 million people die annually from
chronic diseases, and almost three-quarters of these deaths occur
in low- and middle-income countries (5). With changing demo-
graphics (e.g., aging populations) and increasing rates of overweight
and obesity, public health concerns and costs related to chronic
diseases are expected to increase further in the coming decades.
Published evidence shows that “healthy” dietary choices and
lifestyles can help prevent or control several chronic diseases
(27). The technological capabilities of assessing individual and
population risks of chronic diseases and options for modifying
foods and behaviors that affect diets are likely to expand. At the
same time, the understanding of the development of chronic
diseases through the life course is increasing.
The evaluation of relations between food substances and
chronic diseases is complex, and a single conceptual model is
unlikely to fit all cases. Chronic diseases are generally considered
to be pathologic processes that are noncommunicable, of long
duration, of slow progression, and of multifactorial etiologies,
which, in turn, may be influenced by genetic backgrounds, age
and sex, comorbidities, environments, lifestyles, and an increasing
prevalence of obesity (5, 25). They represent a wide range of
conditions, including heart disease, cancer, arthritis, diabetes, and
macular degeneration. Chronic diseases have varying public health
importance, severity, prevalence, and availability of effective
treatments and prevention strategies. These diseases begin years
before signs and symptoms become evident with the use of current
diagnostic technologies. Complex factors interact to influence
chronic disease progression, including interactions between food
substances. In some cases, one factor (e.g., a particular food sub-
stance) may only exert an effect if other factors are also present or
absent. Food-substance effects are often small in individuals but can
have significant beneficial or detrimental effects on populations.
Defining populations at risk of a chronic disease is also challenging
because many diseases are associated with, or modified by, other
morbidities (e.g., obesity is associated with several comorbidities in
the elderly) and demographic characteristics (e.g., proportions of
individuals aged ≥65 y and changing pharmaceutical uses).
Because the human diet is a complex mixture of interacting
components that cumulatively affect health (28), isolating the
effects on chronic disease risk of a single food substance or a small
number of them can be challenging. In addition, the risks of
chronic disease can be associated with either decreasing or in-
creasing intakes of food substances (e.g., of fiber or saturated fat,
respectively). The observed intake-response characteristics gen-
erally do not fit the threshold-based EAR/RDA and UL approaches
that are based on absolute risk and that DRI committees use to set
reference values for nutrient deficiencies and related toxicities (22).
Intake-response curves have varied shapes. Both high and low
intakes of some substances may increase the risk of a chronic disease,
and high and low intakes of the same food substance sometimes have
overlapping effects [e.g., the intake-response curve for the decreasing
effect of increasing fluoride intakes on dental caries overlaps with the
intake-response curve for the effect of increasing fluoride intakes on
fluorosis (29)]. Observational data suggest that a given food sub-
stance can be related to multiple chronic disease outcomes, and each
relation can have its own distinctive intake-response curve (22, 30).
These complexities indicate the need for a multidisciplinary approach
to developing nutrient-specific and context-specific frameworks that
involves scientists with a wide range of expertise.
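To make the shape issue concrete, the following minimal Python sketch encodes a U-shaped intake-response curve in which relative risk rises at both low and high intakes. The functional form, the nadir intake, and the curvature coefficient are purely hypothetical and are not drawn from any DRI report.

```python
import numpy as np

# Hypothetical quadratic model for the log relative risk as a function of
# intake, illustrating a U-shaped intake-response curve: risk rises at both
# low and high intakes relative to a nadir. Coefficients are illustrative only.
def log_relative_risk(intake, nadir=400.0, curvature=1e-5):
    """Log RR relative to the (hypothetical) nadir intake, e.g., in mg/d."""
    return curvature * (intake - nadir) ** 2

intakes = np.array([100.0, 250.0, 400.0, 550.0, 700.0])
rr = np.exp(log_relative_risk(intakes))
for x, r in zip(intakes, rr):
    print(f"intake {x:6.0f} mg/d -> RR {r:.2f}")  # RR = 1.0 at the nadir
```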
It is useful to compare the reference value concepts tradi-
tionally used for nutrient requirements and toxicities with the
concepts that pertain to chronic disease risk reduction (Table 4).
TABLE 4
Traditional and chronic disease endpoints for DRIs1

Traditional endpoints
Eligibility for consideration: Food substances that are essential or conditionally essential or that are components of energy nutrients (e.g., fats, proteins, and carbohydrates).
Focus: Nutrient requirements. Characteristics: Adequate intakes are essential for preventing and treating deficiency diseases. Expression of risk: Average inflection point between adequate and inadequate intakes (EAR) of a group and its associated population variance (RDA).
Focus: Nutrient toxicities. Characteristics: Intakes at some level above adequate intakes may pose the risk of adverse health effects. Expression of risk: Highest intake of a group that is unlikely to pose a risk of adverse effects and above which the risk of adverse effects increases (UL).

Chronic-disease endpoints
Eligibility for consideration: Naturally occurring food substances, including nutrients, for which changes in intake have been demonstrated to have a causal relationship to the risk of one or more chronic diseases.
Focus: ↑ Intakes of “beneficial” substances. Characteristics: With ↑ intakes, the relative risk ↓ compared with baseline intakes. Expression of risk: Relative risk (ratio of the probability of an event occurring in a group with higher intakes to the probability of an event in a comparison group with lower intakes).
Focus: ↓ Intakes of “harmful” substances. Characteristics: With ↓ intakes, the relative risk ↓ compared with baseline intakes. Expression of risk: Relative risk (ratio of the probability of an event occurring in a group with lower intakes to the probability of an event in a comparison group with higher intakes).

1 DRI, Dietary Reference Intake; EAR, Estimated Average Requirement; RDA, Recommended Dietary Allowance; UL, Tolerable Upper Intake Level; ↑, increased or increases; ↓, decreased or decreases.
Historically, the food substances for which expert panels es-
tablished reference values tended to be essential or conditionally
essential nutrients or those that supplied energy (31). With its
inception, the DRI-development process broadened this concept
to include food substances with documented effects on chronic
disease risk (e.g., fiber, saturated fats, and trans fats). Today,
there is considerable interest in expanding future DRIs to in-
clude other bioactive components with documented health ef-
fects (32–34). Although essential nutrients have a direct and
specific effect on nutrient deficiencies, other food substances
alone might be neither necessary nor sufficient to reduce disease
risk. Even if research has established a causal relation between
a food substance and a chronic disease outcome, the mechanisms
of action are often unknown or poorly understood. Research re-
sults on chronic disease risks are often expressed as relative risks
as opposed to the reporting of absolute risks that experts typically
use to define nutrient requirements for essential nutrients. Al-
though the evidence may be reported as relative risks, DRI de-
cisions may also need to consider the relation of a food substance
and chronic disease within an absolute risk context (35).
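The distinction between the two risk scales can be illustrated with a small, hypothetical calculation; the event counts below are invented and show how a large relative effect can correspond to a small absolute effect when a disease is rare.

```python
# Illustrative contrast between relative and absolute risk, using hypothetical
# event counts for a higher-intake and a lower-intake group.
def risks(events_hi, n_hi, events_lo, n_lo):
    p_hi = events_hi / n_hi            # absolute risk, higher-intake group
    p_lo = events_lo / n_lo            # absolute risk, lower-intake group
    return {
        "relative_risk": p_hi / p_lo,          # ratio of probabilities
        "risk_difference": p_hi - p_lo,        # absolute risk difference
    }

# A halving of relative risk can correspond to a tiny absolute difference when
# the disease is rare -- one reason DRI decisions may need both scales.
print(risks(events_hi=5, n_hi=10_000, events_lo=10, n_lo=10_000))
# {'relative_risk': 0.5, 'risk_difference': -0.0005}
```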
V-A. JUDGING THE EVIDENCE: EVIDENTIARY
CHALLENGES
This section and the next 2 sections discuss ways to assess the
strength of the evidence on causal relations between food substances
of interest and targeted chronic diseases. This section focuses on
study designs and related issues that affect the use of evidence to
assess the causality of these relations in DRI evaluations.
The DRI process involves 2 key decisions: 1) whether available
evidence supports a causal relation between the food substance of
interest and the chronic disease and, 2) if so, what DRIs may be
appropriately derived from the available data. DRI committees
make these 2 key decisions for both beneficial and adverse effects
as guided by 2 key questions and their component characteristics
(Table 3). When DRI committees find causal relations between
food substances and chronic diseases, they can then derive DRI
values that are appropriate given the evidentiary base that supports
the intake-response relations. Tolerance of uncertainty is likely to
vary for decisions about beneficial compared with adverse effects
and for decisions involving causal compared with intake-response
relations.
Judging evidence to develop DRIs on the basis of chronic
disease endpoints has been an evolutionary process that continues
to present major challenges. The 1994 FNB committee noted that
consideration of chronic disease endpoints often requires a dif-
ferent type of evidence than the evidence that committees have
used for determinations of nutrient requirements on the basis of
classical deficiency diseases (7). In the 6 DRI reports published
between 1997 and 2005, the totality of the evidence from both
observational and intervention studies, appropriately weighted,
formed the basis for conclusions with regard to causal relations
between food-substance intakes and chronic disease outcomes
(23, 24, 29, 36–38). The 2011 DRI Committee on Calcium and
Vitamin D stated that RCTs provided stronger evidential support
than observational and ecologic studies and were therefore nec-
essary for the committee to further consider a health-outcome
indicator (14). This committee also considered whether evidence
from published RCTs and high-quality observational studies was
concordant and whether strong biological plausibility existed. The
paucity of studies specifically designed to support the development
of DRIs continues to be a challenge.
Overarching challenges
When a DRI committee considers the strength of the evidence
for its decisions, it considers overarching challenges that apply
across different types of study designs and specific study design
characteristics. This section discusses 3 overarching challenges:
sources of bias, selection of chronic disease outcome measures,
and statistical issues.
Sources of bias
A bias consists of systematic (not random) errors in estimates
of benefits or risks due to a study’s design or in the collection,
analysis, interpretation, reporting, publication, and/or review of
data (39). Bias results in erroneous (as opposed to less precise)
estimates of the effects of exposures (e.g., food substances) on
outcomes (e.g., risk of chronic disease).
Evaluations of whether evidence likely supports a conclusion
about causation often use risk-of-bias concepts. Risk of bias
varies by study design (Figure 1) (40–43). At each ascending
level in the pyramid in Figure 1, the quality of evidence is likely
to improve (i.e., the risk of bias decreases) and the quantity of
available studies usually declines. Within each level, however,
quality varies by study design and implementation, which can
blur the quality differences among hierarchies in the pyramid.
Confidence in whether relations of interest are causally related
generally increases toward the top of the pyramid.
Table 5 lists sources and types of bias that can affect nutrition
studies. Table 6 describes examples of criteria for assessing the
risk of bias associated with different study types. It is possible to
avoid or minimize some types of biases in the study design,
conduct, and analysis stages by using, for example, double-blinding,
management of confounding by matching and/or multivariable
analyses, or assessment of objective exposure. A major source of
bias in studies of relations between food substances and chronic
diseases is the use of self-reported intake assessments (e.g.,
food-frequency questionnaires, 24-h recalls, or food diaries)
(44). Zheng et al. (45) provided an example of the dominant
influence that uncorrected nonrandom measurement error in
energy intake estimates from self-reported diets may have on
associations with risks of CVD, cancer, and diabetes in a cohort-
study context.
Selection of chronic disease outcome measures
A second overarching challenge in evaluating the strengths and
weaknesses of evidence relates to the selection of an outcome
measure for assessing whether a relation between food substances
and chronic diseases is causal and identifying an indicator for
intake-response analysis. It is possible to measure a chronic
disease outcome directly (e.g., as an incident event) or indirectly
by using a substitute measure (e.g., a qualified surrogate disease
marker or a nonqualified disease marker). The type of outcome
measured affects the level of confidence in whether the relation
between a food substance and chronic disease is causal. The
selection of an indicator for deriving intake-response relations
also depends on whether the indicator is on the causal pathway
between the intake and the disease outcome.
For this report, the outcome of interest is a chronic disease.
Ideally, the measured outcome in available studies consists of
the incidence (event) of the chronic disease as determined by
appropriate diagnostic criteria. Data on this type of outcome
from an RCT provide the most direct assessment of a rela-
tion between a food substance and a chronic disease outcome
and a high degree of confidence that the relation is causal
(Figure 2).
The limiting factor is that studies that use a chronic disease
outcome may not be available or even feasible, and DRI com-
mittees might then consider the use of a qualified surrogate
disease marker or a nonqualified disease marker as the outcome
measure. Most of these outcomes are biomarkers or are based on
biomarkers, as defined in Text Box 4.
The types of outcomes that can substitute for direct measures
of a chronic disease outcome can range from biomarkers close to
the disease (e.g., blood pressure for CVD or LDL cholesterol for
coronary heart disease) to those that are more distant from the
disease (e.g., indicators of inflammation or immune function for
CVD and cancer). One type of substitute disease outcome is the
qualified surrogate disease marker, defined in Text Box 5,
a short-term outcome measure that has the same association with
the intake of a food substance as a long-term primary endpoint.
The use of a surrogate marker enables a more rapid de-
termination of the effectiveness of changes in intake on the risk of
the chronic disease. Achieving “surrogate” status requires strong
evidence and a compelling context (6, 46). That is, the outcome
measure must be qualified for its intended purpose (e.g., to show
that changing the intake of a food substance can prevent or alter
the risk of the chronic disease). A qualified surrogate marker has
prognostic value (i.e., correlates with the chronic disease out-
come), is on the causal pathway between the intake and the
chronic disease, and substantially captures the effect of the food
substance on the chronic disease. DRI committees have used
LDL-cholesterol concentrations as a surrogate disease marker
for coronary heart disease and blood pressure as a surrogate
marker for CVD (15, 23, 24). The use of a surrogate marker
instead of the incidence of a chronic disease can provide a
reasonable basis, but not absolute certainty, for evaluating
whether a relation between a food substance and a chronic
disease is causal (Figure 2). The second type of substitute
disease outcome is an outcome that has not been qualified as
a surrogate disease marker, referred to in this report as
a nonqualified disease marker as defined in Text Box 6.
An example of a nonqualified outcome for CVD is carotid
intima-media thickness (47). A nonqualified outcome marker is
associated with considerable uncertainty about whether the re-
lation between a food substance and a chronic disease is causal
(Figure 2).
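One quantitative heuristic that is sometimes used when examining whether a marker "substantially captures" an intervention effect is the proportion of the treatment effect explained by the marker (a Freedman-style calculation). The sketch below, with hypothetical regression coefficients, illustrates that arithmetic; it is offered only as an illustration and is not one of the formal qualification criteria described above.

```python
# A rough heuristic sometimes used when evaluating candidate surrogates: the
# proportion of the treatment effect "explained" by the marker, estimated from
# the treatment coefficient before and after adjusting for the marker.
# All values here are hypothetical.
def proportion_explained(beta_unadjusted, beta_adjusted):
    return 1.0 - beta_adjusted / beta_unadjusted

# e.g., the log-hazard coefficient for the intervention drops from -0.30 to
# -0.06 once the candidate marker is added to the model:
print(proportion_explained(-0.30, -0.06))  # 0.8 -> marker captures ~80%
```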
Statistical issues
For any study design, careful interpretation of findings by
experts is necessary to reach appropriate conclusions about the
strength of the evidence. The use of inappropriate statistical
methods (e.g., multiple statistical comparisons involving several
outcomes and/or subpopulations without adjustment) can un-
dermine the validity of conclusions. The primary outcome of an
RCT and other study types is the endpoint for which the study is
designed and powered and that investigators use to define in-
clusion and exclusion criteria. Secondary endpoints and post hoc
endpoints might not have adequate statistical power, participants
may not be appropriately randomized (in the case of RCTs), and
participant inclusion and exclusion criteria might not be adequate
for the analysis of secondary and post hoc outcomes. Importantly,
reports on secondary and post hoc outcomes of RCTs and
analyses of subsets of the trial cohort need to account for multiple
tests of different trial hypotheses. Caution is therefore necessary
in the use of secondary outcomes and post hoc analyses of RCTs
or other study types when those outcomes were not part of the
original study protocols.
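As an illustration of the multiplicity adjustment alluded to above, the following sketch applies the standard Bonferroni and Holm procedures to a set of hypothetical p values. These are generic statistical procedures, not methods prescribed by DRI committees.

```python
# A minimal sketch of Bonferroni and Holm adjustments for multiple secondary
# or post hoc endpoints; the p values are hypothetical.
def holm(p_values, alpha=0.05):
    """Return Holm step-down decisions (True = reject) for a list of p values."""
    order = sorted(range(len(p_values)), key=lambda i: p_values[i])
    m = len(p_values)
    reject = [False] * m
    for rank, i in enumerate(order):
        if p_values[i] <= alpha / (m - rank):
            reject[i] = True
        else:
            break  # once one test fails, all larger p values also fail
    return reject

p = [0.003, 0.04, 0.012, 0.20]  # e.g., one primary and three post hoc endpoints
print("Bonferroni:", [pi <= 0.05 / len(p) for pi in p])
print("Holm:      ", holm(p))
```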
Study designs
Past DRI committees have described how the known strengths
and weaknesses of different study designs influenced their DRI
FIGURE 1 Hierarchy of evidence pyramid. The pyramidal shape qualitatively integrates the amount of evidence generally available from each type of
study design and the strength of evidence expected from indicated designs. In each ascending level, the amount of available evidence generally declines. Study
designs in ascending levels of the pyramid generally exhibit increased quality of evidence and reduced risk of bias. Confidence in causal relations increases at
the upper levels. *Meta-analyses and systematic reviews of observational studies and mechanistic studies are also possible. RCT, randomized controlled trial.
evaluations and decisions (14, 23, 24, 29, 36–38). Concurrently,
evolving science provided new insights into how study designs
can affect evaluations of relations between food substances and
chronic diseases. Below, we integrate the perspectives of past
DRI committees and newer science as to the potential usefulness
of types of study designs for DRI contexts.
TABLE 5
Types of bias that can affect nutrition studies
1
Bias due to confounding
·Confounding: error in the estimated effect of an exposure on an outcome due to the presence of a common cause of the outcome or to baseline differences
between exposure groups in the risk factors for the outcome or because factors predicting the outcome (prognostic factors) are related to the exposure
that the person experiences
Related terms
·Allocation bias: error in the estimate of an effect caused by the lack of valid random allocation of participants to the intervention and control groups in
a clinical trial
·Others: selection bias, case-mix bias
Bias in selection of participants for the study
·Selection bias: systematic error resulting from participant-selection procedures and factors that influence participation, systematic differences between
baseline characteristics of the groups compared, or exclusion of some participants from the analysis (i.e., some participants are excluded initially or
during follow-up), thereby changing the association between the exposure and the outcome
Related terms:
·Sampling bias: systematic error due to the methods or procedures for selecting the sample (e.g., participants, scientific papers), includes errors due to
sampling of a nonrandom population
·Others: inception bias, lead-time bias, immortal time bias
Bias in measurement of exposures: misclassification of exposure status or introduction of systematic bias by use of self-reported intake methodologies
Related terms:
·Dietary exposure assessment bias: error associated with the use of self-reporting tools for assessing dietary intakes
·Misclassification bias: systematic error due to inaccurate measurements or classifications of participants’ exposure status; may be differential
(related to the risk of the outcome) or nondifferential (unrelated to the risk of the outcome with an estimated effect that is usually biased toward
the null)
·Recall bias: systematic error due to differences in accuracy of recall, particularly relevant to case-control studies because cases are more likely to recall
potentially important events
·Others: observer bias, detection bias
Bias in measurement of outcomes: erroneous measurement or classification of outcomes
Related terms:
·Misclassification bias: systematic error due to inaccurate measurements or classifications of participants’ outcome status
·Nondifferential measurement error: can be systematic (e.g., measurements that are all too high), which does not cause bias or affect precision, or can be
random, which affects precision but does not cause bias
·Detection bias (also known as differential measurement error): systematic differences between groups in how outcomes are determined. This bias can
occur when outcome assessors are aware of participants’ exposure status and the outcome is subjective; the researchers use different methods to assess
outcomes in different groups (e.g., questionnaires for the study group and medical records for the control group); or measurement errors are related to
exposure status or a confounder of the exposure-outcome relation. Blinding of outcome assessors can help address this bias but is often not possible.
Studies with self-reported outcomes have a higher risk of bias than those with clinically observed outcomes.
·Recall bias: see above
Bias in selection of reported findings
·Reporting bias: systematic differences between reported and unreported results
Related terms:
·Outcome-reporting bias: reporting on some, but not all, of the available outcome measures (e.g., reporting the most favorable results of multiple
measurements or the results of the most favorable subscale of the many that are available)
·Analysis-reporting bias: investigators select results from exposure effects that they measured in multiple ways (e.g., multiple analyses with and without
adjustment for different sets of potential confounders or use of a continuously scaled measure analyzed at different cutoffs)
Bias due to departures from intended exposures
·Performance bias: systematic differences between groups in care provided or in exposure to factors beyond the intended exposures
·Time-varying bias: change in the exposure over the follow-up period and postexposure prognostic factors that affect the exposure after baseline
Bias due to data missing not at random: can be due to attrition (loss to follow-up), missed appointments, incomplete data collection, or exclusion of
participants from the analysis
Related terms:
·Attrition bias: systematic differences between groups in withdrawals from a study
·Selection bias: see above
Publication bias: result of the tendency for journals to publish articles with positive results, particularly if the articles report new findings, or of the tendency
of authors to cite studies that conform to their or their sponsor’s preconceived ideas or preferred outcomes
Conflict of interest from sponsor bias: may be incurred when there is financial conflict; sponsor participation in data collection, analysis, and interpretation
of findings can compromise the validity of the findings. This may result from the choice of design and hypothesis, selective outcome reporting,
inadequacy of reporting, bias in presentation of results, or publication biases.
1 Data are from references 39 and 41–43. “Exposure” refers to the variable with the causal effect to be estimated (e.g., a food substance). In the case of
a randomized controlled trial, the exposure is an intervention; “outcome” is a true state or endpoint of interest (e.g., a health condition). Lists of related terms
are not intended to be exhaustive but to offer pertinent examples.
RCTs
RCTs with a chronic disease event or qualified surrogate disease
marker as the primary outcome. RCTs can minimize or eliminate
the likelihood of some key types of bias when they use ran-
domization, concealment, and double-blinding protocols and
have adequate statistical power (14, 23, 24, 29, 36–38). It is
possible to compare disease incidence among randomly assigned
groups receiving different interventions (e.g., supplement com-
pared with placebo) by using the so-called intention-to-treat
analyses, without using any dietary-assessment data, thus avoiding
the systematic biases associated with reliance on self-reported
intakes to determine exposures in observational studies. Dietary
assessments need only provide assurance that a trial has adequate
precision (i.e., statistical power), and they can also provide useful
background information for evaluating adherence to interventions
or for accounting for background intake when supplements are
added. RCTs often allow testing of small
effects that observational studies cannot reliably detect. RCTs
usually are the only type of study that allows direct assess-
ment of causation, although other approaches, such as Men-
delian randomization, may offer an alternative in special
situations (48–52).
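The following minimal sketch, using invented event counts, illustrates the intention-to-treat contrast described above: incidence is compared by the arm to which participants were randomized, regardless of adherence, and no dietary-assessment data enter the comparison.

```python
# A minimal intention-to-treat contrast: compare disease incidence by
# randomized arm. All counts are hypothetical.
def intention_to_treat(events_by_arm, n_by_arm):
    incidence = {arm: events_by_arm[arm] / n_by_arm[arm] for arm in n_by_arm}
    rr = incidence["supplement"] / incidence["placebo"]
    return incidence, rr

incidence, rr = intention_to_treat(
    events_by_arm={"supplement": 120, "placebo": 150},
    n_by_arm={"supplement": 5_000, "placebo": 5_000},
)
print(incidence, "RR =", round(rr, 2))  # RR = 0.8
```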
TABLE 6
Examples of criteria to assess the risk of bias by study type1

Bias due to confounding
·Were relevant confounding factors prespecified and considered? (cohort, case-control, and cross-sectional studies; NA for RCTs)
·Were study groups balanced with respect to the distribution of confounding factors? (cohort, case-control, and cross-sectional studies; NA for RCTs)
·Were confounding factors taken into account in the design and/or analyses? (cohort, case-control, and cross-sectional studies; NA for RCTs)
·Was the assignment of participants to study groups randomized? (RCTs only)
·Was an adequate method of concealment of allocation to study groups used? (RCTs only)
Bias in selection of participants for the study (all 4 study types)
·Were the same inclusion and exclusion criteria used for all study groups?
·Was the likelihood that some participants might have the outcome before the exposure or intervention assessed and taken into account in the design and/or analysis?
·Was the percentage of eligible nonparticipants in each study group below an acceptable value?
Bias in measurement of exposures and interventions (all 4 study types)
·Was the exposure or intervention status measured in an accurate and sufficiently precise way?
Bias due to departures from intended exposures and interventions (all 4 study types)
·Were there systematic differences between study groups in the care provided and/or in exposures to factors beyond those intended by study design?
·Was the exposure or intervention status assessed more than once or in >1 way to help ensure fidelity to the study design?
Bias due to missing data (all 4 study types)
·Was the percentage of participants dropping out in each study group below an acceptable value?
·Were missing data appropriately handled (e.g., intention-to-treat analysis, imputation)?
Bias in measurement of outcomes (all 4 study types)
·Were all relevant outcomes measured in an appropriately accurate and sufficiently precise way (e.g., valid and reliable) and done consistently across all study participants?
·Was the length of follow-up among study groups in prospective studies the same, or in case-control studies were the times between exposures or interventions and targeted outcomes the same in cases and controls?
·Was the assessment of outcome made “blind” to exposure or intervention status or, when blinding was not possible, was there recognition that knowledge of exposure or intervention status could have influenced the assessment of the outcome or outcomes?
Bias in selection of the reported result (all 4 study types)
·Were the prespecified outcomes partially reported or not reported because of the statistical significance or magnitude of the effect of the exposure or intervention?
·Is there evidence that the results from all participants, not only a subset, were analyzed or that all multiple-adjusted analyses, not only selected ones, were fully reported?

1 NA, not applicable; RCT, randomized controlled trial. The 4 study types considered are RCTs and cohort, case-control, and cross-sectional studies; the applicability of each criterion is noted in parentheses.
RCTs have the following limitations:
·The costs are typically high for outcomes based on chronic disease events.
·Persons agreeing to undergo randomization might be a select subset of the population of interest, which limits the generalizability of trial results.
·For practical reasons, RCTs usually measure only a single or limited intake range of one food substance or a few food substances.
·The study follow-up period is typically short relative to the period of food-substance exposure preceding the initiation of the study.
·Maintaining and reporting on intervention adherence can be challenging, particularly for diet-modification studies.
·Informed-consent procedures that indicate the study purpose (e.g., to evaluate the effect of vitamin D on bone health) may lead participants to choose to consume different foods and/or supplements independently of the study intervention.
·Blinding of study participants is difficult when interventions are based on dietary changes but is more achievable when the intervention consists of dietary supplements (e.g., to deliver micronutrients).
Over the past several decades, investigators designed several
large RCTs in which the primary aim was to evaluate relations
between food-substance intakes and chronic disease outcomes.
Examples of completed studies include trials on the relations
between the following:
·β-carotene and lung cancer (53–55);
·B vitamins and CVD (56);
·vitamin E and both CVD and prostate cancer (57, 58);
·salt and blood pressure (59);
·energy and fat (combined with physical activity) and diabetes (60); and
·a low-fat diet and breast and colorectal cancer (61, 62).
The DASH (Dietary Approaches to Stop Hypertension)-
Sodium trial (59) confirmed the hypothesis that sodium-intake
reductions result in lower blood pressure, and the Diabetes
Prevention Trial showed that diet and physical activity changes
could reduce diabetes incidence (60). However, other trials found
that the food substances of interest had either no significant effect
[B vitamins and risk of CVD (56) and vitamin E and risk of CVD
(58)] or an unexpected adverse effect on the chronic disease
outcomes studied [risk of lung cancer for β-carotene (53, 54) and
risk of prostate cancer for vitamin E (63)].
RCTs with nonqualified disease markers as primary outcomes.
Similar to RCTs that use chronic disease events or qualified
surrogate markers as primary outcomes, well-designed and con-
ducted trials that rely on nonqualified outcomes can also reduce
the possibility of outcome bias. Moreover, because nonqualified
disease markers often change within relatively short times after
an intervention is introduced and can be readily measured, such
studies require less time to produce effects and often have ad-
equate statistical power with smaller samples than studies that
target clinical disease events (e.g., cardiovascular events). As
a result, well-designed RCTs that use nonqualified disease
markers can be less costly than those that measure clinical disease
events. The use of nonqualified disease markers to measure re-
lations between food substances and chronic diseases is relatively
common, and many more studies use such outcomes than RCTs
with a clinical event or a qualified surrogate disease marker as
the primary outcome. However, substantial uncertainty about
whether a relation between food substances and chronic diseases
is causal frequently limits the usefulness of nonqualified disease
markers because of the lack of evidence that shows that these
outcome measures are accurate and reliable indicators for the risk
of the chronic disease of interest (Figure 2) (6, 18, 46, 64, 65).
Several publications noted the need for caution in the use of these
types of trials to establish causal relations between food sub-
stances and chronic disease events (14, 18).
FIGURE 2 Conceptual framework for assessing causality on the basis of level of confidence that the intake–chronic disease relation is causal. Panel A: Direct assessment involving the measurement of both intake and chronic disease outcome (event or incidence); highest confidence that the relation is causal. Panel B: Indirect assessment involving the measurement of a qualified surrogate disease marker as a substitute for a direct measurement of the chronic disease per se; provides a reasonable basis, but not absolute certainty, that the relation between the intake and the chronic disease is causal. Panel C: Indirect assessment involving the measurement of a nonqualified disease marker as a substitute for a direct measurement of the chronic disease; because this type of outcome measure lacks sufficient evidence to qualify as a substitute for the chronic disease of interest, there is considerable uncertainty as to whether the relation between the intake and the chronic disease is causal. Shaded boxes indicate variables and outcomes that are measured directly. Nonshaded boxes indicate variables or outcomes that are not measured but whose presence on the causal pathway is inferred. Arrows indicate a unidirectional, causal relation. This type of relation can be directly assessed by randomized controlled trials. If observational studies (e.g., prospective cohort studies) are being assessed, the observed relations are associations, not causal links. Solid bold arrows indicate a relation with high confidence. Dashed arrows indicate relations with some uncertainty. Lighter arrows indicate less certainty than bolder arrows. If any part of the causal pathway between intake and chronic disease outcome has uncertainty, then the entire causal pathway has uncertainty. “Qualified” biomarkers of outcome require strong evidence that their use as substitutes for unmeasured outcomes can accurately and reliably predict the outcome of interest. “Qualification” has a contextual basis in that the evidence about its use as a substitute for an unmeasured outcome needs to be relevant to the proposed use of the biomarker (e.g., relation between food-substance intake and a chronic disease). Intakes can be assessed directly or by measurement of qualified biomarkers of intake.

Text Box 4
A biomarker is “a characteristic that is objectively measured and evaluated as an indicator of normal biological processes, pathogenic processes, or pharmacologic responses to [a]n…intervention” (6). (“Objectively” means reliably and accurately measured.)

Observational studies
Cohort studies. An extensive body of evidence from observational studies suggests that changes in intakes of some food substances can beneficially or adversely alter the risk of certain
chronic diseases. The increasing availability of large cohort
studies with long follow-up periods has increased the use of
cohort studies in recent evaluations of relations between food
substances and chronic disease events.
Ideally, investigators collect data from cohort studies pro-
spectively to more optimally control the type and quality of data
collected. The prospective acquisition of dietary data is partic-
ularly important because recall of past dietary intakes is subject to
considerable uncertainty.
Cohort studies have several advantages for supporting the
development of DRIs on the basis of chronic disease outcomes
(14, 23, 24, 29, 36–38):
·Study results are frequently directly relevant to noninstitutionalized humans.
·Study populations can be large and diverse.
·Follow-up can occur over many years or even many decades.
·A range of intakes can be associated with a range of relative risks.
·Temporal relations between intakes and outcomes are less uncertain than with cross-sectional or case-control studies.
The challenges in the use of cohort studies for DRI purposes
include the following:
·Prospective cohort studies are more vulnerable to confounding and selective reporting bias than are RCTs (13, 22, 40). Statistical adjustments may decrease but cannot totally eliminate the likelihood of confounding.
·Evidence on the causal nature of relations between exposures and outcomes cannot be directly assessed and therefore must be inferred, thus increasing uncertainty as to the validity of the results.
·Variations in food-substance intakes may be limited in homogeneous cohorts, making it difficult to identify intake differences between subgroups.
·Relative risk trends often have small effects, although small effects on diseases of sufficient prevalence or severity can be substantial at the population level.
·The reliability and interpretation of observed associations depend directly on the quality of the dietary exposure assessment; systematic bias in self-reported intakes is particularly problematic.
·Factors other than variations in intakes of the food substance of interest can affect comparisons of results across time (e.g., long-term follow-up in a given cohort or comparison of studies conducted at different time periods). For example, the increasing use of statins and aspirin can affect assessments of coronary heart disease over time. Increasing intakes of fortified foods and supplements can overwhelm the effect of the food substance of interest. Investigators must account for the confounding effects of these temporal changes when evaluating relations between food substances and chronic diseases that span long time periods.
·The use of a single diet assessment in some prospective cohort studies assumes that no variation in dietary intake occurred over time.
Other types of observational studies. Other types of observa-
tional studies include case reports, ecologic and cross-sectional
studies (including surveys), and case-control studies. These types
of studies played an important role in generating early hypotheses
about relations between nutrients and chronic diseases (12).
Case reports and case studies are descriptive studies of out-
comes in individuals or small groups who are exposed to a food
substance but are not compared with a control group or groups.
Cross-sectional studies and ecologic studies examine a food
substance and a health condition in a population at 1 point in time.
In a cross-sectional study, investigators examine the food sub-
stance and health condition in each individual in the sample. In an
ecologic study, investigators examine the variables of interest at
an aggregated or group level, sometimes resulting in errors in
association (known as “ecological fallacy”). Case-control studies
are retrospective in that they enroll patients with and without
a given condition and attempt to assess whether the 2 groups
of participants had different exposures to a food substance or
substances.
A major limitation of these types of studies is their inability to
establish the temporal relation between the intake of a food
substance and the appearance of a chronic disease. These types of
studies remain useful for hypothesis generation, but their utility
for setting DRI values is limited. Like prospective cohort studies,
they are vulnerable to confounding.
Special challenges for observational studies
Measurement error in intake assessments. Unlike RCTs, ob-
servational studies, including cohort studies, require accurate
dietary assessments for their validity and usefulness. A major
challenge for observational studies in evaluating relations be-
tween food substances and chronic diseases is the difficulty in
obtaining accurate estimates of food-substance intakes when
using self-reported data (13, 66). Self-reported intake estimates
result in substantial underestimation bias for energy and protein
intakes, especially among overweight and obese individuals (66, 67).
These systematic biases can severely distort intake-response
curves. Random errors in assessing intake may also attenuate the
relation between intakes of a food substance and chronic disease
risk, making it difficult to detect such a relation if it exists.
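This attenuation (often called regression dilution) is easy to demonstrate by simulation. In the sketch below, all data are simulated, the true slope is fixed at 0.5, and classical random error in the reported intake roughly halves the estimated slope.

```python
import numpy as np

# Simulation of regression dilution: random error in self-reported intake
# attenuates the estimated intake-outcome slope toward the null.
rng = np.random.default_rng(0)
n = 100_000
true_intake = rng.normal(0.0, 1.0, n)
outcome = 0.5 * true_intake + rng.normal(0.0, 1.0, n)   # true slope = 0.5
reported = true_intake + rng.normal(0.0, 1.0, n)        # noisy self-report

slope_true = np.polyfit(true_intake, outcome, 1)[0]
slope_reported = np.polyfit(reported, outcome, 1)[0]
print(round(slope_true, 2), round(slope_reported, 2))   # ~0.50 vs ~0.25
```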
Cohort studies can minimize both systematic and random
aspects of intake measurement error bias by estimating food-
substance intakes with the use of a biomarker of intake or
dietary recovery (see Text Box 7) in addition to, or in place of,
self-reported intakes.
Text Box 5
A surrogate disease marker (also known as a surrogate marker, surrogate endpoint, or surrogate disease outcome marker) predicts clinical benefit (or harm, or lack of benefit or harm) based on epidemiologic, therapeutic, pathophysiologic, or other scientific evidence (6). A surrogate disease marker is qualified for its intended purposes.
Text Box 6
A nonqualified disease marker (also known as an intermediate disease outcome marker or intermediate endpoint) is a possible predictor of a chronic disease outcome but lacks sufficient evidence to qualify as an accurate and reliable substitute for that outcome.
However, other important sources of bias
(e.g., confounding) may remain. Currently, only a small number
of established biomarkers of food-substance intake (e.g., doubly
labeled water for energy expenditure assessments and 24-h
urinary nitrogen for assessing protein intake) satisfy the classical
measurement error criteria for recovery biomarkers (67). How-
ever, these only assess intake over short periods. Biomarker-
calibrated intake assessments hold promise for minimizing
systematic and random errors in intake measurements, but the
field needs qualified biomarkers for additional dietary compo-
nents before their use can substantially affect nutritional epi-
demiology research (45).
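The following sketch illustrates the general logic of biomarker calibration under simple assumptions: simulated data, a linear calibration equation, and a recovery biomarker that is unbiased for true intake. It is a schematic of the approach, not the specific method used in reference 45.

```python
import numpy as np

# Sketch of biomarker calibration: in a subsample with a recovery biomarker,
# regress biomarker-measured intake on self-report, then replace self-reports
# with calibrated predictions in the main analysis. All values are simulated.
rng = np.random.default_rng(1)
n_sub = 500
true_intake = rng.normal(2000, 300, n_sub)                    # kcal/d, hypothetical
self_report = 0.7 * true_intake + rng.normal(0, 150, n_sub)   # underreporting
biomarker = true_intake + rng.normal(0, 100, n_sub)           # recovery biomarker

# Calibration equation fit in the biomarker subsample:
b, a = np.polyfit(self_report, biomarker, 1)   # slope, intercept
calibrated = a + b * self_report
print(f"calibration: intake ~ {a:.0f} + {b:.2f} * self-report")
```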
Attribution to a food substance. A second challenge is the
difficulty of attributing an observed effect to the food substance of
interest (13, 22). In observational studies, investigators usually
calculate the amounts of food substances that participants con-
sume from self-reports of food and supplement intakes. In-
teractions between food substances make it difficult to determine
whether an observed association between the calculated intake
of a specific food substance is a causal factor or simply a marker
of another food component or components within the dietary
pattern.
Statistical approaches
The Rubin potential-outcomes framework is an example of
a statistical approach that may enhance the usefulness of
observational studies by producing approximate inferences
about causal links in the absence of random allocation of subjects
to treatments, provided that candidate data sets include a large
number of covariates (including key covariates) and that the key
covariates have adequate overlap of their distributions between
experimental and control groups (68). The rationale is that the
covariates incorporated in the analyses might include potential
confounders. However, there is no way to guarantee that all
confounders were measured in an observational study, and it is
possible that ≥1 important confounders are missing. Researchers
need to validate these approaches for future applicability to diet
and health studies.
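As a schematic of how such covariate-based causal adjustment can proceed, the sketch below estimates propensity scores, checks covariate overlap, and forms inverse-probability weights. It assumes scikit-learn is available, uses simulated data, and, as noted above, cannot address unmeasured confounding.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Sketch of a propensity-score analysis in the potential-outcomes spirit:
# model exposure from measured covariates, check overlap, and weight.
rng = np.random.default_rng(2)
n = 5_000
covariates = rng.normal(size=(n, 3))            # e.g., age, BMI, smoking score
logit = covariates @ np.array([0.8, -0.5, 0.3])
exposed = rng.uniform(size=n) < 1 / (1 + np.exp(-logit))

ps = LogisticRegression().fit(covariates, exposed).predict_proba(covariates)[:, 1]

# Overlap diagnostic: propensity distributions should share common support.
print("PS range, exposed:  ", ps[exposed].min().round(2), ps[exposed].max().round(2))
print("PS range, unexposed:", ps[~exposed].min().round(2), ps[~exposed].max().round(2))

# Inverse-probability weights for an (approximate) causal contrast:
weights = np.where(exposed, 1 / ps, 1 / (1 - ps))
```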
Systematic reviews and meta-analyses
Systematic reviews. A systematic review is the application of
scientific strategies to produce comprehensive and reproducible
summaries of the relevant scientific literature through the sys-
tematic assembly, critical appraisal, and synthesis of all relevant
studies on a specific topic. Ideally, scientists with expertise in
systematic reviews (e.g., epidemiologists) collaborate with subject
matter experts (e.g., nutritionists) in the planning of the review.
The subject matter experts can refine the key scientific questions
and the study inclusion and exclusion criteria that will guide the
review, ideally with the involvement of an experienced research
librarian. The systematic review experts then abstract the data and
summarize their findings, generally with duplication of key
screening or data-abstraction steps. Once the review is in draft
form, the review team solicits peer reviews from qualified experts
in the subject matter and in systematic review methodology. This
approach maintains scientific rigor and independence of the
systematic review while maximizing the likelihood that the re-
view will be relevant to subject matter experts and users. This was
the process that the 2011 DRI committee used for its systematic
review on calcium and vitamin D (69).
The advantages of systematic reviews include the following:
·The process is characterized by an organized and transparent methodology that locates, assembles, and evaluates a body of literature on a particular topic by using a set of specific criteria.
·The inclusion of all relevant research and the use of a priori criteria for judging study quality minimize study-related biases and enhance transparency.
·Non–content experts who search, assemble, and analyze the appropriate literature minimize the potential for study selection bias, with assistance from content experts in refining the key scientific questions and in developing the inclusion and exclusion criteria.
·It is possible to apply the methodology, which was developed for RCTs, to other study types as long as those conducting the review appropriately account for biases in the analysis and interpretation of the data.
Systematic reviews also have several disadvantages, including
the following:
·Researchers have not agreed on or validated selection, evaluation, and analytic criteria that are uniquely applicable to studies of relations between food substances and chronic diseases.
·The quality of published reviews can vary by 1) the degree of adherence to consensus methods and reporting standards and 2) the rigor applied to measures of variables related to food substances (e.g., baseline intakes and status, the effect of biases in intake estimates and biomarker assays) in the reviewed studies. Deficits can lead to the possible omission of critical information, inappropriate conclusions, and/or unbalanced dependence on expert opinion; each of these can increase the likelihood of bias or misinterpretation (21).
·Systematic reviews will carry forward the biases (e.g., energy-based intake underestimates) of the included studies.
·Reporting and publication biases can be problematic, particularly if those conducting the reviews do not adequately account for these issues. The use of a range of effect estimates, such as ORs or relative risks, or tallies of positive and negative studies to summarize data can also lead to misleading results due to publication bias (70). Public solicitation is one approach to identify unpublished research (i.e., gray literature) for comparison with published data to help assess the potential impact of publication bias (21).
Text Box 7
An intake biomarker (or dietary recovery biomarker) is usually a measure of metabolite recovery in urine or blood used to objectively assess the intake of a food substance over a prescribed period.

Meta-analysis. Meta-analysis uses statistical methods to combine the results of several studies to increase statistical power, improve effect estimates, and resolve disagreements and uncertainties among studies. These analyses compare and contrast study results to identify consistent patterns and sources of disagreement.
Meta-analysis has several advantages, including the following:
·It can appropriately weight quantitative relations between food substances and chronic diseases by the precision of individual studies, yielding an overall estimate of the benefits or risks with greater precision than can be achieved with individual studies (see the sketch after these lists).
·It can identify differences in relations between food substances and chronic diseases across studies.
Meta-analysis has several disadvantages, including the following:
·Meta-analysis techniques might not be appropriate when considerable heterogeneity exists across the set of studies to be combined (70). Heterogeneity across studies is commonly related to factors such as differences in intake assessment, intervention protocols, population characteristics, outcome measures, and analytic procedures (70).
·Meta-analyses carry forward biases that are present in the included studies (e.g., systematic bias in energy intake estimates).
·Pooled-effect estimates can be misleading without consideration of study quality, a strong methodologic grasp of the meta-analysis techniques, extensive content knowledge of the research question, and commitment to impartial application of the approach (70).
·Reporting bias may be a problem because less beneficial treatment effects are observed more often in unpublished than in published trials. Studies not published in English or not indexed in publication databases (e.g., Medline or Cochrane Central) might have different treatment effects than more readily available studies (70, 71).
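The precision weighting mentioned in the first advantage is conventionally implemented as inverse-variance pooling. The sketch below applies fixed-effect inverse-variance pooling to hypothetical log relative risks; a real application would also need to assess heterogeneity (e.g., with a random-effects model).

```python
import math

# Minimal fixed-effect inverse-variance pooling of log relative risks from
# several hypothetical studies, given as (log RR estimate, standard error).
studies = [(-0.22, 0.10), (-0.10, 0.08), (-0.35, 0.15)]

weights = [1 / se**2 for _, se in studies]           # precision weights
pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
se_pooled = math.sqrt(1 / sum(weights))
print(f"pooled RR = {math.exp(pooled):.2f} "
      f"(95% CI {math.exp(pooled - 1.96*se_pooled):.2f}-"
      f"{math.exp(pooled + 1.96*se_pooled):.2f})")
```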
Meta-analyses and systematic reviews can provide succinct,
useful summaries of the available literature that are relevant to the
research question of interest. However, the results will still re-
quire careful interpretation by experts to reach appropriate
conclusions, including conclusions about causation and possible
biases.
Systematic reviews and meta-analysis for nutrition-related
topics
The use of systematic reviews and meta-analyses for nutrition-
related topics is relatively recent (21, 72). The 2011 DRI review
on vitamin D and calcium was the first to use these types of
studies within a DRI context (14, 69). WHO and European
Micronutrient Recommendations Aligned nutrition-related ap-
plications also use these studies (73, 74).
A relatively recent approach is for reviews to include both
observational and trial data on the same relation between food
substances and chronic diseases (75–77). This approach facili-
tates direct comparisons of results between these study designs.
It is then possible to evaluate the strengths and weaknesses of
each study type for a given relation between food substances and
chronic diseases.
Animal and mechanistic studies
In the past, animal and mechanistic studies played an im-
portant role in establishing the essentiality of nutrients, although
similar results in humans were generally necessary to confirm the
findings (7, 12, 31). These studies have also been important in
traditional toxicologic evaluations of environmental contami-
nants and food ingredients when ethical considerations precluded
human testing (11). DRI committees have found that animal and
mechanistic studies provided important supporting information
on biological mechanisms and pathogenesis, but these com-
mittees generally did not consider such studies adequate to infer
causality or to derive intake-response curves for DRIs (14, 23, 24,
29, 36–38). Moreover, until recently, few animal models were
available that could adequately simulate relations between food
substances and human chronic diseases.
Other evidence-related challenges
Evaluations of relations between food substances and chronic
diseases pose a number of challenges in addition to those
mentioned above, including those discussed below (13).
Extrapolations from studied to unstudied groups
DRI committees set reference values for 22 life-stage groups
on the basis of age, sex, pregnancy, and lactation. These values
are intended for apparently healthy populations. Yet, most
available research does not readily fit this framework. Com-
mittees therefore need to consider whether to generalize results
from studied to unstudied groups. For example, this challenge can
arise when attempting to extrapolate results from the following
groups:
·persons with diagnosed chronic diseases to persons without such diagnoses;
·persons with metabolic disorders that affect a substantial proportion of the general population (e.g., obesity) to healthier populations;
·one life-stage or sex group to a different life-stage or sex group (e.g., from older adults to children or from young women to pregnant females) (13); and
·a population with a single ethnic origin to a population with ethnic diversity.
Interactions between study variables
The following interactions of the food substance of interest
with other study variables may make it difficult to isolate the
effect of the food substance on the targeted chronic disease:
·food-substance and food-substance interactions (e.g., between sodium and potassium or between vitamin D and calcium);
·food substances and physiologic characteristics (e.g., responsiveness to a food substance in smokers and nonsmokers or in lean and obese individuals); and
·food substances and environmental characteristics (e.g., socioeconomic status).
Effects on responsiveness to dietary intervention and effect
sizes
Various inherited and acquired subject characteristics and
contextual factors may influence responsiveness to exposures of
interest, including the following:
·differences in baseline characteristics, including baseline nutritional status;
·variations in gene polymorphisms;
·duration of the observation and/or intervention; and
·the amount, timing, context, and nature of the food-substance exposure.
These challenges can affect studies in different ways. For
example, they can highlight biologically important interactions
that DRI committees need to take into account when setting
reference values. However, they can also lead to residual con-
founding not accounted for by covariate adjustment. These issues
can also lead to erroneous, misleading findings that form part of
the knowledge base and can misinform interpretations or com-
parisons of study results. Past reviews of the use of chronic
disease endpoints in DRI contexts have not identified effective
strategies for addressing these challenges (13, 22).
V-B. JUDGING THE EVIDENCE: TOOLS FOR ASSESSING
THE EVIDENCE
This section describes tools to assess the quality of individual
studies and the overall nature and strength of the evidence.
Tools for assessing the quality of individual studies
Bradford Hill criteria
The Bradford Hill criteria are a guide to making causal in-
ferences (78) (Table 7). The National Research Council’s 1989
report on diet and health (12) and the first 6 DRI reports (23, 24,
29, 36–38) used these criteria. As with most assessment tools,
these criteria do not address dietary intake measurement issues
[e.g., poor correlation of subjective measures of intake with
objective measures (67)], which are fundamental to consider-
ations of causality and intake-response relations.
Study quality-assessment tools
The main types of tools for evaluating evidence from RCTs
and observational studies are as follows: 1) quality-assessment
instruments that assess the quality of a study from conception to
interpretation as a whole and 2) risk-of-bias schema that assess
the accuracy of estimates of benefit and risk (Table 8) (40).
There is also a move toward conducting quality assessments at
the outcome level. Within a particular study, for example,
quality may be higher for subsets of outcomes, or blinding may
be more important to one outcome than another.
After evaluating published quality-assessment instruments,
Bai et al. (79) recommended the use of SIGN 50 methodology;
versions are available for cohort studies, case-control studies, and
RCTs (19). SIGN 50 uses the following 5 domains to assess the
quality of data from cohort and case-control studies: compara-
bility of subjects, assessment of exposure or intervention, assess-
ment of outcome measures, statistical analysis, and funding.
For RCTs, important domains are random allocation, adequate
concealment of participant assignment to groups and blinding to
treatment allocation, comparability of groups, no differences
between groups except for the treatment, and assessment of
outcome measurement. On the basis of these criteria, a study’s
overall assessment may be judged to be of high quality overall
(has little or no risk of bias and conclusions are unlikely to
change after further studies are done), acceptable (some study
flaws with an associated risk of bias and potential for conclu-
sions to change with further studies), or low quality (substantial
flaws in key design aspects and likely changes in conclusions
with further studies). The advantages of SIGN 50 are that it is
simple and includes key criteria for quality, good guidance is
available for its application and interpretation, and there is ex-
tensive experience with its use. Disadvantages are that it is not
outcome specific, not sufficiently inclusive of study character-
istics that are relevant to food substance and dietary studies, and
its assessment of bias domains is considered superficial ac-
cording to some experts (19).
Risk-of-bias tools that are specific to study type are available to
assess degree of bias (40). They provide a systematic way to
organize and present available evidence relating to the risk of bias
in studies and focus on internal validity. The Cochrane Collab-
oration’s risk-of-bias tool can be used to assess risk of bias for
randomized studies (41). Domains of this tool include random-
sequence generation; allocation concealment; blinding of
participants, personnel, and outcome assessors; incomplete
outcome data; selective outcome reporting; and other sources of
bias (41). A risk-of-bias tool for nonrandomized studies, Risk of
Bias in Nonrandomized Studies (ROBINS), is similar to the
Cochrane risk-of-bias tool for randomized studies (42). Ad-
vantages of ROBINS are that it can be outcome specific, it
provides a detailed assessment of bias domains, and good
guidance is available for its application and interpretation (42).
Disadvantages are that it is complex, not sufficiently inclusive of
study characteristics that are relevant to food substances and
dietary patterns, and there is little experience with its use.
Development of a quality-assessment instrument that is specific
to food substances
It is possible to develop a quality-assessment instrument that is
specific to food substances by adding food-substance–specific
aspects of quality to currently available algorithms for quality
assessment for use in conjunction with a general study-quality
tool (e.g., SIGN 50 or AMSTAR) (21, 40). Food-substance
quality-assessment instruments could take into account co-
variates, confounders, and sources of error that are especially
relevant to food substances. For intervention studies, these ad-
ditional items could be the nature of food-substance interventions,
doses of the food-substance interventions, and baseline food-
substance exposures in the study population (21). For observa-
tional studies, food-substance–specific quality factors might be
methods or instruments used to assess intakes of food-substance
exposures, ranges or distributions of the food-substance expo-
sures, errors in assessing food-substance exposures, and poten-
tial impacts of errors from assessing food-substance exposures
on the food-substance–outcome association (21). Other food-
substance–specific items are assessment of dietary intakes, in-
cluding longitudinal patterns, and mapping of dietary intakes to
food-substance intakes (40). The need for quality assessments
related to food-substance exposure in observational studies
speaks to the dominant effect of random and nonrandom intake
errors on assessments of magnitude, and even direction, of intake-
response associations.
Food substances and dietary applications
The Agency for Healthcare Research and Quality (AHRQ) has
produced systematic evidence reviews of associations between
food substances and health outcomes (e.g., for vitamin D and
calcium) (69, 76, 77, 80) that experts have used to develop DRIs
and for other applications. In a recent food-substance review to
assess the quality or risk of bias of individual studies, the AHRQ
(77) used the Cochrane risk-of-bias tool for RCTs that identifies
biases related to selection, performance, detection, attrition,
reporting, and other factors (41). For observational studies, the
AHRQ used questions from the Newcastle Ottawa Scale (81). In
addition, the review included food-substance–specific questions to
address the uncertainty of dietary-assessment measures (21, 72).
Tools for assessing the overall quality of the evidence
Tools for systematic reviews and meta-analyses
The AMSTAR 2007 tool is useful for the development of high-
quality systematic reviews and meta-analyses of RCTs (79, 82,
83). Its methodologic checklist addresses 7 domains: the study
question, search strategy, inclusion and exclusion criteria, data
extraction method, study quality and validity, data synthesis, and
funding. The overall assessment can be high quality, acceptable
quality, or low quality. AMSTAR2, for nonrandomized studies, is
under development (http://amstar.ca/Developments.php). It will
also include confounding and reporting-bias domains. Meth-
odologic checklists for nonrandomized studies are available (84).
The newly released ROBIS (Risk of Bias in Systematic Reviews) tool, which is similar to ROBINS, can assess risk of bias in systematic reviews of both nonrandomized and randomized studies (85).
The process for evaluating the quality of studies for inclusion
in systematic reviews and meta-analyses involves assessing the
quality or risk of bias of each candidate study, assembling all of
the assessments into a summary table or figure, and assessing the
overall study quality or risk of bias (41). No formal tool to
determine overall quality is currently available.
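A minimal sketch of that assembly step, with invented studies and domain ratings: each study's domain-level judgments are collected into a summary structure and a crude overall judgment is derived (here, simply the worst domain rating; actual reviews rely on structured expert judgment rather than such a mechanical rule).

```python
# Hypothetical sketch: assembling per-study risk-of-bias judgments into a
# summary. Studies, domains, and ratings are invented; real reviews use
# structured expert judgment, not the simple "worst rating" rule below.
RATING_ORDER = {"low": 0, "some concerns": 1, "high": 2}

assessments = {
    "Study A (RCT)": {"randomization": "low", "blinding": "some concerns",
                      "missing data": "low", "selective reporting": "low"},
    "Study B (RCT)": {"randomization": "some concerns", "blinding": "high",
                      "missing data": "low", "selective reporting": "some concerns"},
}

for study, domains in assessments.items():
    # Crude overall judgment: the worst rating across domains.
    overall = max(domains.values(), key=lambda r: RATING_ORDER[r])
    print(f"{study}: overall risk of bias = {overall}")
    for domain, rating in domains.items():
        print(f"  {domain}: {rating}")
```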
GRADE criteria for evidence grading
Methods of judging evidence of causation can vary from bi-
nary yes-or-no decisions to ranked approaches. Many systematic
reviews use the GRADE (20) criteria, which are in the latter
group. GRADE uses evidence summaries to systematically grade
the evidence on the basis of risk of bias or study limitations,
directness, consistency of results, precision, publication bias,
effect magnitude, intake-response gradient, and influence of residual plausible confounding and “antagonistic bias.” The latter refers to bias that can result in underestimates of an observed effect. As noted previously, evidence based on observational studies will generally be appreciably weaker than evidence from RCTs and other intervention trials due to the likelihood of confounding and various biases, in particular dietary measurement bias. GRADE also considers study quality in its algorithms. There may be cases for which evidence from observational studies is rated as moderate or even high quality, because extremely large and consistent estimates of an effect’s magnitude increase confidence in the results. GRADE users assess and grade the overall quality of evidence for each important outcome as high, moderate, low, or very low. Users describe recommendations as weak or conditional (indicating a lack of confidence in the option) or strong (indicating confidence in the option) (20).

TABLE 7
Bradford-Hill criteria and application by the Institute of Medicine1

Each Bradford-Hill criterion (78) is listed with whether it was applied in Diet and Health (12) and the DRI reports (14, 23, 24, 29, 36–38):
Strength: effect sizes (not statistical significance). Applied: Yes.
Consistency: consistency across study types, locations, populations, study times, and other factors. Applied: Yes.
Specificity: Is there likely one cause for the effect? Is the association specific to a particular population, context, or outcome and not observed in other populations, contexts, or outcomes? Applied: Yes.
Temporality: cause before effect with appropriate delay. Applied: Yes.
Biological gradient: dose-response relation (could be curvilinear with a dose-response relation in part of the curve). Applied: Yes.
Biological plausibility: Is the nutrient of interest a biologically plausible cause of the beneficial effect? Applied: Yes.
Coherence: Does cause-and-effect interpretation of data seriously conflict with generally known facts and laboratory evidence? Applied: No.
Analogy: Is it possible to judge by analogy? Applied: No.
Experiment: Is there experimental evidence from human and/or animal and in vitro studies that is consistent with the associational findings? Applied: No, with the exception of the 2011 DRI report (14).
1 DRI, Dietary Reference Intake.

TABLE 8
Study types and tools for quality assessment and risk of bias1

Systematic review of RCTs. Quality assessment: AMSTAR (http://amstar.ca/). Risk of bias: ROBIS (http://www.robis-tool.info).
Systematic review of nonrandomized studies. Quality assessment: AMSTAR2 (in development; http://amstar.ca/Developments.php). Risk of bias: ROBIS (http://www.robis-tool.info).
RCT. Quality assessment: SIGN 50 RCT (http://www.sign.ac.uk/methodology/checklists.html). Risk of bias: Cochrane Collaboration risk-of-bias tool (http://handbook.cochrane.org/chapter_8/8_5_the_cochrane_collaborations_tool_for_assessing_risk_of_bias.htm).
Cohort study. Quality assessment: SIGN 50 cohort (http://www.sign.ac.uk/methodology/checklists.html). Risk of bias: ROBINS (http://ofmpub.epa.gov/eims/eimscomm.getfile?p_download_id=526737).
Case-control study. Quality assessment: SIGN 50 case-control (http://www.sign.ac.uk/methodology/checklists.html). Risk of bias: ROBINS (http://ofmpub.epa.gov/eims/eimscomm.getfile?p_download_id=526737).
Cross-sectional study. Quality assessment: SIGN 50 cohort or case-control (http://www.sign.ac.uk/methodology/checklists.html). Risk of bias: ROBINS for cross-sectional studies is in development.
1 AMSTAR, A Measurement Tool to Assess Systematic Reviews; RCT, randomized controlled trial; ROBINS, Risk of Bias in Nonrandomized Studies; ROBIS, Risk of Bias in Systematic Reviews; SIGN 50, Scottish Intercollegiate Guidelines 50.
Food-substance applications
The AHRQ uses the AHRQ Methods Guide to grade the
strength of the evidence for each outcome in a systematic review
(86). The AHRQ explores differences in findings between ob-
servational and intervention studies as well as their risks of bias to
offer possible explanations for interstudy disparities. The AHRQ
summarizes ratings of the strength of the evidence in evidence
profile tables that describe the reasoning for the overall rating.
This approach builds on the GRADE method by requiring
information on reporting biases (publication bias, outcome-
reporting bias, and analysis-reporting bias). It incorporates the
domains included in GRADE—the study limitations (risk of
bias), consistency, directness, precision, intake-response asso-
ciation, strength of association, and plausible uncontrolled
confounding—that would diminish an observed effect. AHRQ
evidence reviews use additional guidance for scoring consis-
tency and precision, grading bodies of evidence by study type,
addressing high-risk-of-bias studies, and other topics. AHRQ
evidence reviews grade the strength of the evidence as high,
moderate, low, or insufficient, indicating the level of confidence
in the findings.
Weighing the evidence
Establishing causality requires a careful evaluation of the
weight of the evidence on causal associations between expo-
sures and outcomes. This step can be complex, particularly in
the presence of multiple sources of information, not all of
which are consistent or of equal relevance or reliability. Sys-
tematic reviews can summarize the available evidence in
a comprehensive and reproducible manner (87). However, they
do not evaluate the weight of the evidence, which various DRI
decisions require. Although the Bradford Hill criteria for
evaluating causal associations provide useful general guidance
on weighing the evidence on causality, more detailed guidance
can also be helpful in some circumstances. The International
Agency for Research on Cancer, for example, has a detailed
scheme for identifying agents that can cause cancer in humans
based on a careful evaluation of the available human, animal,
and mechanistic data (88). An option for purposes of the
committee’s charge is to develop an analogous scheme for
assessing relations between food substances and chronic dis-
ease endpoints.
A review of 50 “weight-of-evidence” frameworks identified 4
key phases for assessments: 1) defining the causal question and
developing criteria for study selection, 2) developing and ap-
plying criteria for the review of individual studies, 3) evaluating
and integrating evidence, and 4) drawing conclusions on the
basis of inferences (89). This review identified important attri-
butes of a broadly applicable weight-of-evidence framework,
although the authors did not develop such a framework.
Applicability to food-substance studies
The US National Research Council (90) identified systematic
review, quality assessment, and weight of evidence as key com-
ponents of a qualitative and quantitative risk-assessment paradigm
(Figure 3). Each of these activities is also directly relevant to the
establishment of DRIs, especially for those that are based on
chronic disease endpoints. As with any synthesis of information on
a population health risk issue, there is a need to carefully evaluate
the available information and weigh the available evidence for
causality in reaching conclusions about the association between
food substances and chronic disease endpoints.
V-C. JUDGING THE EVIDENCE: OPTIONS FOR
ADDRESSING EVIDENCE-RELATED CHALLENGES
This section identifies the challenges related to 2 DRI-based
evidentiary decisions involved in assessing whether a food
substance is causally related to a chronic disease. The first
challenge deals with the type of endpoint (outcome or indicator)
that is best suited to these DRI decisions. The second challenge
addresses the desired level of confidence in the available evidence
that the food substance and chronic disease relation is valid.
The decisions about which options to implement to address
these evidence-related challenges need to be made in an inte-
grative manner because decisions about how to address one
challenge have implications for the nature of and responses to
the other challenge.
Options for selecting chronic disease endpoints
An early step in the decision-making process associated with
the development of a DRI value is the identification of potentially
useful measures (indicators) that reflect a health outcome—in
this case a chronic disease outcome—associated with the intake
of the food substance of interest (15). If a DRI reference value is
to be based on a chronic disease outcome, what types of in-
dicators are appropriate to use in making these decisions?
Studies vary in the type of outcome measured, ranging from
direct measures of the chronic disease based on generally ac-
cepted diagnostic criteria to indirect assessment by using either
a qualified surrogate marker of the chronic disease outcome or
a nonqualified disease marker (Figure 2) (15). Guidance on se-
lection of an indicator based on a chronic disease outcome
would inform decisions as to whether newer types of DRI values
specifically focused on chronic disease outcomes are more ap-
propriate than are more traditional reference values (Table 4)
(see section VI on intake-response relations). In addition, it
would clarify applications for some major users of DRIs (e.g.,
regulatory, policy) for whom clear differentiation between chronic
disease and functional endpoints is important for legal and pro-
grammatic purposes.
Option 1: Endpoint (outcome) is the incidence of a chronic
disease or a qualified surrogate disease marker
The first option is to only accept study endpoints that are
assessed by a chronic disease event as defined by accepted di-
agnostic criteria, including composite endpoints, when applicable,
or by a qualified surrogate disease marker. These types of end-
points are associated with higher levels of confidence that the
food-substance and chronic disease relation is causal than are
nonqualified disease markers (Figure 2). However, few RCTs
designed to evaluate the relation of food substances to chronic
diseases have used a chronic disease event as the outcome measure.
In addition, only a few qualified surrogate markers of chronic
disease are available for evaluations of the relation between food
substances and chronic disease outcomes. The process of qualifying
a surrogate disease marker for evaluating food-substance and
chronic disease relations requires sound science and expert judg-
ment (6). Much of the evidence in which outcomes are assessed as
a chronic disease event comes from observational studies, and
uncertainty is greater about whether relations are causal with data
from observational studies than from RCTs (Figure 1). In addition,
some of the evidence would likely come from RCTs with chronic
disease outcomes assessed by qualified surrogate disease markers.
These outcome measures would provide a reasonable basis, but not
absolute certainty, that the relation between the food substance and
chronic disease is causal (Figure 2). Depending on the level of
confidence deemed acceptable for chronic disease–based DRI
decisions about causation and intake-response relations (see op-
tions on level of confidence below), the use of this option could
result in either a small body of evidence if high levels of confi-
dence in the validity of the relation are deemed necessary (e.g.,
causality is based on the availability of RCTs with chronic disease
or qualified surrogate disease outcomes) or a larger body of evi-
dence if lower levels of confidence are acceptable (e.g., causality is
inferred from observational studies with outcomes based on chronic
disease events or qualified surrogate disease markers).
Option 2: Endpoint (outcome) may include nonqualified
disease markers
To implement this option, a DRI committee would also accept
studies with outcomes that are possible predictors of the chronic
disease of interest but have not been qualified as surrogate disease
markers because they lack sufficient evidence for this purpose.
Examples of potential biomarkers of chronic disease risk include
brain atrophy as well as the combination of low Aβ42 and high T-tau and
P-tau levels for Alzheimer disease risk, endothelial dysfunction
for atherosclerosis risk, and certain polymorphisms for neural
tube defects. A large evidence base is available on relations be-
tween food substances and nonqualified chronic disease markers.
However, DRI committees have rarely chosen these types of out-
comes to establish a DRI value on the basis of a chronic disease
endpoint (15).
Compared with option 1, this option increases the number of
relations between food substances and chronic disease outcomes
for which committees could establish DRIs. However, consid-
erable uncertainty exists about whether decisions about causal
relations on the basis of nonqualified disease markers are valid
(6). The use of such outcome measures could therefore lead to
a loss of confidence in the DRI process.
Options for acceptable levels of confidence that the relation
is causal
The overall level of confidence deemed appropriate for DRI
decisions on the relation between a food substance and a chronic
disease is dependent on an integrated consideration of the type of
endpoint that a DRI committee accepts (i.e., a chronic disease
event, qualified surrogate disease marker, or a nonqualified
disease marker) and the overall evidence rating of the totality of
the evidence (Table 9). Establishing whether the evidence is
sufficient to proceed with making a chronic disease–related DRI
decision involves an evaluation of the level of confidence deemed
appropriate to determine that the relation of the food substance of
interest and the chronic disease is valid.
Option 1: Require a high level of confidence
The first option is to require a high level of confidence (e.g.,
level A; Table 9) that a proposed relation is causal. This level of
confidence likely requires at least some evidence from high-
quality RCTs in which the measured outcome is a chronic disease
event or qualified surrogate disease marker.
FIGURE 3 Framework for evidence integration. Adapted from reference 90 with permission.
A major advantage of this option is that it provides a robust
basis for DRI decisions and therefore conclusions about the
relation are unlikely to change substantially when new findings
become available, although conclusions would probably need
minor modifications to integrate the new evidence. This option
would enhance both user and consumer confidence by reducing
the likelihood of major changes in DRI decisions over time.
Initially, DRI committees could only use this approach to es-
tablish DRIs on the basis of a few relations between food sub-
stances and chronic diseases because of the limited number of
high-quality studies with primary chronic disease outcomes that
are currently available or likely to become available in the near
future.
Past experience shows the value of this option. For example,
consistent results from several observational studies and evidence
of biological plausibility suggested that β-carotene reduces the
risk of lung cancer, vitamin E lowers the risk of both CVD and
prostate cancer, and B vitamins reduce the risk of CVD. How-
ever, subsequent large clinical trials failed to support these ini-
tial conclusions (53–58). Therefore, conclusions of benefit based
almost exclusively on strong and consistent evidence from ob-
servational studies would have been overturned by the sub-
sequent availability of evidence from large RCTs.
Option 2: Use level B evidence
A second option is to also include level B evidence (defined in
Table 9) as a basis for DRI decisions about causation. This level
of evidence suggests a moderate degree of confidence that the
relation of interest is causal, but new findings could change the
DRI decision. This approach allows committees to establish DRI reference values for more chronic disease–related topics than does option 1. However, early conclusions based on strong
observational evidence and trials that used nonqualified outcomes
often need to change because of the conflicting results of sub-
sequent RCTs, as the examples for option 1 show. This option
therefore has a risk of a loss of confidence in DRI decisions.
Option 3: Use actual level of certainty
The third option is to identify the actual level of certainty [e.g.,
levels A, B, C, or D, as defined in Table 9, or GRADE levels of
high, moderate, low, or very low (insufficient)] for each DRI
reference value based on a chronic disease endpoint. The ad-
vantage of this approach is that it provides more information than
do options 1 and 2 about the scientific evidence that supports
a given relation between a food substance and a chronic disease
endpoint. A disadvantage is that DRI values may become sep-
arated from grading scores as they are used and applied, thus
inadvertently suggesting that all DRI values are based on evi-
dence of similar strength. Decisions about this option would
benefit from evidence that shows that users take the evidence
grades into account when they use such DRI reference values.
Option 4: Make decisions on a case-by-case basis
The fourth option is to make decisions about the strength of
evidence appropriate to support a conclusion about the relation
between a given food substance and a chronic disease endpoint on
a case-by-case basis. This option maximizes flexibility for DRI
committees and can enable them to consider other factors (e.g.,
the public health importance of the relation). However, a major
disadvantage is that this option could lead to inconsistency
among DRI reviews, which could reduce the confidence of users
in DRI reference values. This approach is also inconsistent with
the grading-of-evidence approach that many health professional
organizations and government agencies are now using.
VI. INTAKE-RESPONSE RELATIONS
Once a DRI committee establishes a causal relation between
the intake of a food substance and the risk of ≥1 chronic disease,
it must determine the intake-response relation so that it can
establish a DRI. Ultimately, the reference value and how users
can apply it depend on the decisions that the committee made
when it established the intake-response relation between
a chronic disease indicator and the observed intakes of a food
substance. A number of conceptual challenges have made it
difficult to apply the traditional DRI framework to chronic dis-
ease endpoints, including how risk is expressed for chronic
diseases, the multifactorial nature of chronic diseases, and the
diversity of intake-response relations between food substances
and chronic diseases. This section describes options for defining an acceptable level of confidence in the data that a DRI committee uses to determine intake-response relations after establishing causality, for selecting the types of reference values that could be set and the types of indicators that could be used to set them, and for avoiding overlap between beneficial intakes and intakes associated with harm.
Conceptual challenges
Previous committees have based DRIs on the intakes necessary
to avoid classical nutritional deficiencies (i.e., EARs and RDAs)
and unsafe intakes associated with toxicities or adverse events
(i.e., ULs) (Figure 4, Table 1). Intake-response relations be-
tween traditional endpoints for nutrient requirements (i.e., de-
ficiency diseases) and adverse events are often different from
those between food substances and chronic disease endpoints
(Table 4).
Absolute compared with relative risk
Previous DRI committees based their reference values on
direct evidence from human studies that measured both intakes
and outcomes, which allowed committees to develop quantitative
intake-response relations on the basis of absolute risk, which is
the risk of developing a given disease over time. At “low in-
takes,” these essential nutrients show an intake-response relation in which the known health risks (diseases of deficiency) occur at very low intakes, can affect up to 100% of a population at a specified life stage, and decline with increasing intakes. In-
adequate intakes of essential nutrients are necessary to develop
diseases of deficiency. The risk of a disease of deficiency is 0%
when intakes are adequate, and an adequate level of intake is
necessary to treat a deficiency disease. For example, chronically
inadequate intakes of vitamin C are necessary and sufficient to
develop scurvy, and the entire population is at risk of scurvy
when intakes are inadequate. An adequate level of intake of
vitamin C is necessary and sufficient to reverse the deficiency.
At “high intakes,” it is assumed that these essential nutrients
cause adverse health effects, including toxicity (Figure 4). As
with inadequate intakes, the absolute risk of an adverse effect
from excessive intakes is represented as increasing from 0% to
100% with increasing intakes of the nutrient. All members of
a population are assumed to be at risk of the adverse effect at
sufficiently high intakes.
In contrast, DRI values based on chronic disease endpoints
have been based on relative risk, which is risk in relation to
another group. Past DRI committees used data from observational
studies, which contain the biases described earlier in this report,
primarily to calculate the relations between food substances
(essential or otherwise) and chronic diseases because only a limited
number of relevant RCTs are available in the published literature.
The risk of the chronic disease based on observational and in-
tervention studies is usually reported as relative to a baseline risk
and is therefore not absolute. The baseline risk is never 0% or
100% within a population, and it can vary by subgroup [e.g., those
with high blood pressure and/or high LDL cholesterol have
a higher risk of CVD death than do those with lower blood
pressure and LDL-cholesterol concentrations (91, 92)]. The in-
take of a given food substance might alter the risk of a disease by
a small amount (e.g., <10%) compared with the baseline risk,
but these changes could be very important from a public health
perspective depending on the prevalence of the chronic disease
(e.g., a 5% reduction in a highly prevalent disease could have
a meaningful public health impact), severity, impact on quality
of life, cost, and other factors. Conversely, the intake of a given
food substance might alter the relative risk by a large amount
compared with baseline risk, but changes in absolute risk
could be small and have a less meaningful impact on public
health (35).
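To make the contrast concrete, the short sketch below converts relative risks into absolute risk differences for two hypothetical diseases; every number is invented for illustration and is not drawn from any DRI report. A modest relative reduction in a common disease can avert far more cases than a large relative reduction in a rare one.

```python
# Hypothetical illustration of relative vs. absolute risk; all numbers invented.

def absolute_risk_reduction(baseline_risk: float, relative_risk: float) -> float:
    """Absolute risk difference implied by a relative risk at a given baseline."""
    return baseline_risk * (1.0 - relative_risk)

scenarios = [
    # (label, baseline risk over some period, relative risk at the higher intake)
    ("common disease, 5% relative reduction", 0.20, 0.95),
    ("rare disease, 50% relative reduction", 0.002, 0.50),
]

for label, baseline, rr in scenarios:
    arr = absolute_risk_reduction(baseline, rr)
    print(f"{label}: absolute reduction = {arr:.4f} "
          f"({arr * 100_000:.0f} cases averted per 100,000 people)")
```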
Interactions of multiple factors
The pathogenesis of chronic disease is complex and often
involves the interaction of multiple factors, in contrast to
traditional endpoints that commonly are associated with in-
teractions of fewer factors. Intakes of a group of food sub-
stances might contribute to the risk of a chronic disease, for
example. The magnitude of risk might vary by intake, and
several factors (e.g., behaviors or physiologic characteristics)
might influence the risk. Furthermore, ≥1 food substances might be associated with >1 chronic disease. Finally, although
a given food substance might contribute independently to the
development of a chronic disease, changes in intake might not
be necessary or sufficient to increase or decrease the risk of the
chronic disease due to the complex interacting factors in the
disease’s pathogenesis.
Shape of the intake-response curve
The shape of the intake-response relation curve can vary
depending on whether the relation is between an essential
nutrient and a deficiency disease or between a food substance
and a chronic disease endpoint. The intake-response relation
between a nutrient and a deficiency disease is often depicted
as linear or monotonic within the range of inadequacy,
whereas the relation between a food substance and a chronic
disease indicator can be more diverse (e.g., linear, mono-
tonic, or nonmonotonic). Nonmonotonic intake-response re-
lation curves can be U-shaped, J-shaped, or asymptotic.
Furthermore, a single food substance can have a causal re-
lation with >1 chronic disease, and the intake-response
curves for each relation can differ (30, 93). The effect of
a nutrient intake on chronic disease risk might be saturable
in some cases. Figure 5 shows examples of diverse intake-
response relations between a food substance and a chronic
disease or diseases.
DRI committees must take into account the statistical fit of
the intake-response curve to the available data and its adherence
or relevance to underlying biological mechanisms when de-
termining the shape of the intake-response curve for a food
substance and a chronic disease outcome. Deriving intake-
response curves when single food substances affect multiple
chronic diseases can be particularly challenging. Future DRI
committees will need to determine whether to apply available
statistical methods or to develop new ones to address these
challenges (13). Ideally, future expressions of reference values
will include estimates of uncertainties and interindividual
variability.
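As an illustration of checking statistical fit across candidate curve shapes, the sketch below compares a linear and a quadratic (U-shaped) fit to simulated intake-relative risk data; the "true" curve, noise level, and all constants are assumptions made purely for illustration.

```python
# Hypothetical sketch: comparing the fit of candidate intake-response shapes.
# Data are simulated; a real analysis would use study-level estimates.
import numpy as np

rng = np.random.default_rng(0)
intake = np.linspace(0.0, 10.0, 40)              # arbitrary intake units
true_rr = 1.0 + 0.04 * (intake - 5.0) ** 2       # simulated U-shaped relation
observed_rr = true_rr + rng.normal(0.0, 0.05, intake.size)

def residual_ss(degree: int) -> float:
    """Residual sum of squares for a polynomial fit of the given degree."""
    fitted = np.polyval(np.polyfit(intake, observed_rr, degree), intake)
    return float(np.sum((observed_rr - fitted) ** 2))

for degree, shape in [(1, "linear"), (2, "U-shaped (quadratic)")]:
    print(f"{shape}: residual sum of squares = {residual_ss(degree):.3f}")
# Fit statistics alone are not decisive; the chosen shape should also be
# consistent with the underlying biology, as noted in the text.
```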
TABLE 9
Level of confidence in DRI decisions1

Each chronic disease endpoint is listed with the confidence level assigned under a high, medium, or low overall evidence rating from the evidence review:
Chronic disease event: high, level A; medium, level B; low, levels C or D.
Qualified surrogate disease marker: high, levels A or B; medium, levels B or C; low, levels C or D.
Nonqualified outcome: high, level C; medium, levels C or D; low, level D.
1 Level A: highest degree of confidence that results are valid (e.g., “high”); level B: some uncertainty about validity of results (e.g., “moderate”); level C: considerable uncertainty about validity of results (e.g., “low”); level D: substantial uncertainty about validity of results (e.g., “insufficient”). DRI, Dietary Reference Intake.
FIGURE 4 Relations of intakes and adverse effects of substances that
are nutritionally necessary. EAR, Estimated Average Requirement; RDA,
Recommended Dietary Allowance; UL, Tolerable Upper Intake Level. Re-
produced from reference 7 with permission.
Examples of diverse intake-response relations between food
substances and chronic disease endpoints that show the com-
plexity of ensuring the statistical fit of the intake-response curve
to the data include the following:
The relative risk of coronary heart disease has a linear in-
take-response relation to fiber intakes and no apparent
threshold for the beneficial effect. The DRI committee based
the AI for fiber on the mean intake associated with the high-
est relative effect (24).
The relation between the risk of dental caries and fluoride
intake appears to have an inflection point and a critical value
for statistically detectable risk reduction (dental caries pre-
vention), but the range of intakes associated with benefit
overlaps with the range of intakes associated with harm
(fluorosis) (29).
Results from observational studies suggest that omega-3 fatty acids have several different intake-response relations with multiple chronic diseases, depending on the chronic disease (30).
DRIs based on intake-response relations involving chronic
diseases
DRI users include a wide range of organizations (e.g., health
professional groups and societies and government agencies),
many of which rely on DRI values to make decisions and to
develop policies for their organization. These varied user groups
have requested information to help them interpret findings in DRI
reports (13). These groups have also asked DRI committees to
present the information in a way that supports flexible applica-
tions while informing users of the nature of the available evidence
and public health implications.
The approach to setting DRI values would be enhanced by
transparency. Clear descriptions of the scientific and public
health characteristics of the benefits and risks of the intake of
a food substance are also valuable. For example, for each
benefit and risk, descriptions could include the strength of the
evidence, the sizes and characteristics of groups at risk, and the
likelihood and severity of the risks. Users could then evaluate
these descriptions to decide how to apply the DRIs in ways that
address their organizational mission and decision-making
framework.
Acceptable level of confidence in the intake-response data
Several options are available for determining the acceptable
level of confidence in the data that a DRI committee uses to
determine intake-response relations once it has data that establish
a causal relation.
Option 1: Require a high confidence level
One option is to require a high level of confidence by, for
example, using RCTs with a chronic disease event or a qualified
surrogate disease marker as the outcome measure (Table 9). This
approach typically requires usable intake-response data from
RCTs, which is probably impractical because most RCTs have
only 1 intervention dose or a limited number of doses. This option
could result in failure to establish a DRI even though the data
have established a causal relation. The use of this option is
therefore unlikely to be optimal for public health because no
reference value, or even a reasonable estimate of one, would be
available for a documented relation between a food substance and
a chronic disease.
FIGURE 5 Intake-response relations between the intake of a food sub-
stance and chronic disease risks can vary. The intake of a food substance
could decrease (A) or increase (B) chronic disease risk. The intake of a food
substance could be independently related to multiple chronic diseases that
show different and overlapping dose-response relations (C). The relation or
relations between the intake of the food substance and chronic disease or
diseases might not be monotonic. The background risk of a given chronic
disease is not zero. “Substances” could be individual food substances or
groups of interacting substances. UL, Tolerable Upper Intake Level.
Option 2: Accept a moderate confidence level
Another option is to accept a moderate level of confidence in
the data for decisions about intake-response relations (Table 9).
DRI committees could then expand the types of data being
considered to include high-quality observational studies with
outcomes based on chronic disease events or qualified surrogate
disease markers.
These data would likely be associated with some uncertainty
(Table 9). For example, systematic biases in intake estimates are
likely to affect intake-response data from observational studies.
Intake-response data from intervention trials would likely lack
some details on baseline intakes, making total intake estimates
difficult. DRI committees would need to determine how much
and what type of uncertainty are acceptable.
Option 3: Piecemeal approach
A third option is to “piece together” different relations in
which the biomarker of interest is a common factor when direct
evidence of the biomarker’s presence on the causal pathway
between the food substance and a chronic disease is lacking. For
example, if data show a quantitative relation between a food-
substance intake and the biomarker of interest and other data
show a quantitative relation between the biomarker of interest
and the chronic disease, this evidence could be combined to
establish a quantitative reference intake value for the chronic
disease risk. This option has the advantage of relying on a wider
breadth of the available evidence than the first 2 options and
likely would enable DRI committees to consider more nutrient–
chronic disease relations, but the approach would be fraught
with uncertainties. Among its major disadvantages is its heavy
reliance on expert judgments, which limit objectivity in its ap-
plication.
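A minimal numerical sketch of this piecing-together approach follows; both component relations and every coefficient are invented for illustration, and neither comes from an actual evidence review.

```python
# Hypothetical "piecing together" of two quantitative relations.
import math

def biomarker_from_intake(intake_mg: float) -> float:
    """Assumed linear intake -> biomarker relation (e.g., from feeding studies)."""
    return 2.0 + 0.5 * intake_mg

def relative_risk_from_biomarker(biomarker: float) -> float:
    """Assumed log-linear biomarker -> disease relation (e.g., from cohort data)."""
    return math.exp(-0.05 * (biomarker - 2.0))  # RR = 1.0 at the reference level

for intake in (0.0, 5.0, 10.0, 20.0):
    rr = relative_risk_from_biomarker(biomarker_from_intake(intake))
    print(f"intake {intake:>4.1f} mg/d -> implied relative risk {rr:.2f}")
# Chaining the two relations compounds their uncertainties and silently assumes
# that the biomarker lies on the causal pathway, the key caveat noted above.
```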
Different types of reference values
Because of the conceptual issues discussed earlier in this
section, reference values based on chronic disease endpoints
likely need to be different from the traditional reference values
for essential nutrients. Because many food substances share
metabolic pathways, DRI committees could consider joint DRI
values for groups of related food substances. Similarly, because
a single dietary source might supply >1 food substance, DRI
committees could base reference values on groups of food
substances to prevent harm (e.g., to minimize the risk that
limiting the intake of 1 food substance will produce undesirable
changes in intakes of other food substances). If a DRI committee
uses a variety of chronic disease endpoints or a family of tar-
geted food-substance-intake reductions to establish reference
intake values, this process is likely to be strengthened by en-
hanced transparency and the estimation of associated uncer-
tainties. Providing information on how benefits and risks are
weighted would also likely assist users in their applications of
derived values.
The impact of DRI values would likely be strengthened if their
potential uses are considered in their derivation. Previous DRI
committees have identified differences in the applicability and
use of different types of reference values for planning and as-
sessment in groups and individuals (8, 10). For example, the AI
has limited applicability to dietary assessments of groups (13).
As DRI committees consider possible approaches to establish
reference values for chronic disease endpoints, how the different
types of reference values could meet user needs and how users
could apply these values will remain critical considerations.
Types of reference values associated with benefit
Option 1: Establish chronic disease risk-reduction intake values
(e.g., CD_CVD). DRI committees could modify the traditional EAR/
RDA approach to estimate the mean intakes of individuals and the
interindividual variability associated with specified disease risk
reductions. This option is conceptually very similar to the tra-
ditional EAR/RDA approach, but the definitions and interpre-
tations of reference values based on chronic disease endpoints are
different from those based on classical deficiency endpoints. This
option uses relative risks and requires knowledge of baseline
disease prevalence, whereas the traditional approach is based on
absolute risks and is independent of baseline prevalence. The
mean intake values and associated variances for given magni-
tudes of risk reduction give valuable information on the “typical”
person and population variability. These values could, therefore,
be useful for assessing population and group prevalence. Several
adaptations of this option are possible, depending on the nature
of the available data.
Adaptation 1 is to set a single chronic disease risk-reduction
(CD) value at the level above which higher intakes are unlikely to
further reduce the risk of a specific disease. Such values would be
similar to traditional EAR/RDA values in that they would be
a point estimate with some known variation (Figure 4). An ad-
vantage of this kind of reference value is its similarity to the
traditional EAR/RDA, which could help users understand the
value as well as its use and application. Furthermore, this ap-
proach requires a high level of evidence and an understanding of
the uncertainty around the value, which could maximize confi-
dence in the value and its uses. The data required to establish
a single CD value of this type are probably very limited. However,
the possibility of developing this type of value may guide
research.
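Because this adaptation mirrors the EAR/RDA logic, a single CD value could be expressed in the same way. The sketch below assumes, purely for illustration, that the intake achieving the target risk reduction is approximately normally distributed across individuals; the RDA-style "mean + 2 SD" step is the analogy, and all numbers are invented.

```python
# Hypothetical sketch for adaptation 1: an EAR/RDA-like point estimate.
mean_required_intake = 12.0   # intake (units/d) achieving the target risk
                              # reduction for the average person (EAR analog)
sd_between_people = 1.5       # assumed interindividual SD of that intake

# By analogy with RDA = EAR + 2 SD, a CD value covering ~97.5% of individuals:
cd_value = mean_required_intake + 2.0 * sd_between_people
print(f"illustrative CD value: {cd_value:.1f} units/d")  # 15.0
```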
Adaptation 2 is to establish multiple reference values on the
basis of the expected degree of disease-risk reduction across
a spectrum of intakes to yield a family of targeted reductions for
a given chronic disease outcome and potentially for a variety of
disease indicators with distinct intake-response relations to the
disease. If a DRI committee uses this adaptation, it may find it
useful to consider such factors as the severity and prevalence of
outcomes. For a given distribution of intake within the population
that has a given mean and some variability, a DRI committee
could establish the expected risk reduction and identify an ex-
pression of uncertainty. Multiple values could be established on
the basis of >1 level of risk reduction.
Future DRI committees could establish reference values for
different degrees of disease risk reduction and for different
groups with different risk levels within a population. An ad-
vantage of this adaptation is that it gives users flexibility to
choose reference values that meet their needs and are suitable for
the risk profiles of individuals or groups to whom they apply the
reference values. However, users could be confused about when
and how to apply the different values. For this reason, the use of
this adaptation requires careful attention to implementation
guidance.
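A sketch of the calculation this adaptation implies: for an assumed intake distribution and an assumed intake-response curve (both invented here), the expected population risk reduction is the curve averaged over the distribution, and repeating the calculation for shifted distributions traces out a family of targeted reductions.

```python
# Hypothetical sketch for adaptation 2: expected risk reduction for a population
# with a given intake distribution. Distribution and curve are both assumptions.
import numpy as np

rng = np.random.default_rng(1)
intakes = rng.lognormal(mean=np.log(8.0), sigma=0.4, size=100_000)

def risk_reduction(intake):
    """Assumed saturating curve: no reduction at zero intake, up to 20%."""
    return 0.20 * (1.0 - np.exp(-np.asarray(intake) / 10.0))

print(f"expected population risk reduction: {risk_reduction(intakes).mean():.1%}")
```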
Adaptation 3 is for food substances that have causal relations
at different intake levels to multiple chronic diseases. This adaptation
involves establishing different reference values for different
diseases (e.g., CD_CVD, CD_cancer). In addition, for each relation
with a different chronic disease, a DRI committee could identify
a family of targeted risk reductions to establish multiple CD
values, each of which would be associated with a specific degree
of risk reduction.
An advantage of this option is that DRI committees could
establish CD point estimates for specified risk reductions for ≥1
chronic disease, which would provide flexibility to both com-
mittees and users. This adaptation may make it easier than the
other 2 adaptations for users to understand when (e.g., for a life
stage with a higher risk of the disease) and to which population
or populations (e.g., those at higher risk of a given disease) to
apply the values. Establishing reference values for multiple
chronic diseases requires the same level of evidence, or an
equivalent kind of evidence, for each disease to ensure that
committees can develop all values and users can apply them
with the same level of confidence. A disadvantage is that es-
tablishing several values could confuse users about their ap-
propriate application. DRI committees could minimize this
confusion by developing appropriate guidance on how to im-
plement the values.
Option 2: Identify ranges of beneficial intakes. In some cases,
available data might be adequate only for deriving an intake range
that can reduce the relative risk of a chronic disease to a specified
extent. One end of this intake range is close to the point at which
risk begins to decline or increase, depending on the relation, and
the other end extends as far as the available evidence permits. The
DRI committee could establish the range so that it does not
increase the risk of adverse health effects (Figure 5A, B; see also
section entitled “Options for resolving overlaps between benefit
and harm” below and Figure 5C).
These reference values have a purpose similar to the estimated
risk-reduction intake value for a chronic disease (option 1),
except that data for making point estimates or for estimating
interindividual variation are not available, making a point esti-
mate impossible to develop. Advantages of this option are that the
level of evidence it requires is less stringent than that required for
option 1 and it provides flexibility to users. A disadvantage is that
the value is associated with lower confidence but users might
apply it with confidence if they are unaware of its limitations. The
use of a range to assess the prevalence of beneficial intakes in
a population might also be challenging. Users would need clear
guidance on how to apply these kinds of values. This approach
could incorporate the AMDRs because the AMDRs represent
a range of intakes associated with macronutrient adequacy. Fu-
ture committees could be charged to review how users could
apply such an approach to intakes of macronutrients or their
constituents (e.g., a protein compared with a specific amino acid).
ULs and reduction in chronic disease risk
The UL (Figure 4) is the highest average daily intake level
likely to pose no risk of adverse health effects for nearly all
people in a particular group (7). The UL is not a recommended
level of intake but rather the highest intake that people can
tolerate without the possibility of ill effects (7).
DRI committees have based most ULs on (often limited)
evidence of toxicity or adverse events at a high nutrient intake
level. Past DRI committees used a threshold model to calculate
ULs, in which the intake-response relation has an inflection
(threshold) point (11). Because of the paucity of evidence, most
ULs were not based on chronic disease endpoints, although DRI
committees tried to do so for a few nutrients (e.g., saturated and
trans fats as well as sodium) with limited success (13). A key
reason why basing ULs on chronic disease endpoints is so
challenging is that the traditional UL definition is based on an
intake level associated with no increase in absolute risk, whereas
most data related to chronic disease risk are expressed as relative
risk. When the interval between intakes associated with benefit
and harm is wide and intakes associated with benefit do not
overlap with those associated with harm (see below), options for
setting the UL include the use of traditional adverse events, chronic disease endpoints, or both, depending on the nature and strength of the available evidence.
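For comparison with the options below, the traditional threshold calculation typically reduces to dividing a no-observed-adverse-effect level by an uncertainty factor; the sketch below is only a schematic of that approach, and both inputs are invented.

```python
# Schematic of a traditional threshold-model UL; both inputs are invented.
noael = 100.0             # highest intake (units/d) with no observed adverse effect
uncertainty_factor = 2.0  # allows for data limitations and sensitive subgroups

ul = noael / uncertainty_factor
print(f"illustrative traditional UL: {ul:.0f} units/d")  # 50
```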
Option 1: Base ULs on the traditional threshold model. One
option is to continue to base ULs on the traditional threshold
model when UL values based on chronic disease endpoints are
higher than those based on traditional adverse effects. The advantages of this approach are that it allows the DRI committee to evaluate and consider the evidence available for setting a UL on the basis of chronic disease risk while also allowing the committee to set a traditional UL, which has an established process and well-understood limitations and applications. However, many traditional ULs are based on (very) limited data.
Therefore, a disadvantage is that this option could prevent a DRI
committee from establishing a UL on the basis of chronic disease
risk (UL_CD) that is higher than the intake levels associated with
a traditional adverse effect regardless of the evidence available
to support the UL or public health implications of the chronic
disease. To date, DRI committees have not set any EAR or RDA
at intakes higher than the traditional UL to ensure safe intakes
across the population. This has been the case even if the intake
of a food substance has a beneficial effect on chronic disease
risk that is continuous above the UL. A more detailed discussion
of the issue of overlapping beneficial and risk curves is given
below under the section entitled “Overlaps between benefits and
harms.”
Option 2: Base UL_CD on intakes associated with chronic disease
on intakes associated with chronic disease
risk. When the risk of a chronic disease increases at an intake
below the traditional or current UL, a DRI committee could base
a UL on chronic disease endpoints by using approaches analo-
gous to the derivation of CD values (e.g., the development of 1 or
multiple values for specified levels of relative risk reduction) or
a threshold approach (e.g., identifying the inflection point at
which absolute or relative risk increases). These values could be
denoted as a chronic disease UL (UL_CD) to distinguish them from a traditional UL. The UL_CD would be set at a level below which
lower intakes are unlikely to achieve additional risk reduction
for a specified disease. The traditional UL definition would have
to be expanded to include intakes associated with changes in
relative risk (in contrast to absolute risk) of an adverse effect.
Because the UL_CD is based on changes in the relative risk of the chronic disease, intakes below the UL_CD
might reduce but not
necessarily eliminate disease risk, reflecting the multifactorial
nature of chronic diseases.
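As a sketch of the threshold-style variant of this option, a UL_CD could be read off a fitted relative-risk curve as the lowest intake beyond the curve's nadir at which relative risk rises back above baseline; the J-shaped curve and all constants below are invented for illustration.

```python
# Hypothetical sketch: locating a UL_CD on a J-shaped relative-risk curve.
import numpy as np

intakes = np.linspace(0.0, 40.0, 401)   # arbitrary intake units

def fitted_rr(x):
    """Assumed J shape: benefit at moderate intakes, rising harm at high intakes."""
    return 1.0 - 0.15 * np.exp(-((x - 10.0) / 6.0) ** 2) + 0.00025 * x ** 2

rr = fitted_rr(intakes)
past_nadir = intakes > intakes[np.argmin(rr)]
ul_cd = intakes[past_nadir & (rr > 1.0)].min()  # first intake back above baseline
print(f"illustrative UL_CD: {ul_cd:.1f} intake units")
```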
An advantage of basing ULs on chronic disease endpoints is
that it maximizes public health benefit. In addition, this approach
is straightforward, and users could apply such a UL in a similar
manner to a traditional UL. Elimination or limits on intake (e.g.,