and comparison to these approximate repre-
sentations. This is true even for monolingual
adults and young children who never learned
any formal arithmetic. These data add to
previous evidence that numerical approxima-
tion is a basic competence, independent of
language, and available even to preverbal
infants and many animal species (6, 13–16).
We conclude that sophisticated numerical
competence can be present in the absence of
a well-developed lexicon of number words.
This provides an important qualification of
Gordon's (23) version of Whorf's hypothesis
according to which the lexicon of number
words drastically limits the ability to entertain
abstract number concepts.
What the Mundurukú appear to lack,
however, is a procedure for fast apprehension
of exact numbers beyond 3 or 4. Our results
thus support the hypothesis that language plays
a special role in the emergence of exact
arithmetic during child development (9–11).
What is the mechanism for this developmental
change? It is noteworthy that the Mundurukú
have number names up to 5, and yet use them
approximately in naming. Thus, the availabil-
ity of number names, in itself, may not suffice
to promote a mental representation of exact
number. More crucial, perhaps, is that the
Mundurukú do not have a counting routine.
Although some have a rudimentary ability to
count on their fingers, it is rarely used. By
requiring an exact one-to-one pairing of
objects with the sequence of numerals,
counting may promote a conceptual integra-
tion of approximate number representations,
discrete object representations, and the verbal
code (10, 11). Around the age of 3, Western
children exhibit an abrupt change in number
processing as they suddenly realize that each
count word refers to a precise quantity (9).
This "crystallization" of discrete numbers out
of an initially approximate continuum of
numerical magnitudes does not seem to
occur in the Mundurukú.
References and Notes
1. J. R. Hurford, Language and Number (Blackwell,
Oxford, 1987).
2. N. Chomsky, Language and the Problems of Knowl-
edge (MIT Press, Cambridge, MA, 1988), p. 169.
3. S. Dehaene, The Number Sense (Oxford Univ. Press,
New York, 1997).
4. C. R. Gallistel, R. Gelman, Cognition 44, 43 (1992).
5. S. Dehaene, G. Dehaene-Lambertz, L. Cohen, Trends
Neurosci. 21, 355 (1998).
6. L. Feigenson, S. Dehaene, E. Spelke, Trends Cognit.
Sci. 8, 307 (2004).
7. P. Bloom, How Children Learn the Meanings of Words
(MIT Press, Cambridge, MA, 2000).
8. H. Wiese, Numbers, Language, and the Human Mind
(Cambridge Univ. Press, Cambridge, 2003).
9. K. Wynn, Cognition 36, 155 (1990).
10. S. Carey, Science 282, 641 (1998).
11. E. Spelke, S. Tsivkin, in Language Acquisition and
Conceptual Development, M. Bowerman, S. C.
Levinson, Eds. (Cambridge Univ. Press, Cambridge,
2001), pp. 70–100.
12. S. Dehaene, E. Spelke, P. Pinel, R. Stanescu, S. Tsivkin,
Science 284, 970 (1999).
13. K. Wynn, Nature 358, 749 (1992).
14. G. M. Sulkowski, M. D. Hauser, Cognition 79, 239
(2001).
15. A. Nieder, E. K. Miller, Proc. Natl. Acad. Sci. U.S.A.
101, 7457 (2004).
16. E. M. Brannon, H. S. Terrace, J. Exp. Psychol. Anim.
Behav. Processes 26, 31 (2000).
17. E. S. Spelke, S. Tsivkin, Cognition 78, 45 (2001).
18. C. Lemer, S. Dehaene, E. Spelke, L. Cohen, Neuro-
psychologia 41, 1942 (2003).
19. S. Dehaene, L. Cohen, Neuropsychologia 29, 1045
(1991).
20. H. Barth, N. Kanwisher, E. Spelke, Cognition 86, 201
(2003).
21. J. Whalen, C. R. Gallistel, R. Gelman, Psychol. Sci. 10,
130 (1999).
22. B. Butterworth, The Mathematical Brain (Macmillan,
London, 1999).
23. P. Gordon, Science 306, 496 (2004); published online
19 August 2004 (10.1126/science.1094492).
24. C. Strömer, Die Sprache der Mundurukú (Verlag der
Internationalen Zeitschrift "Anthropos," Vienna,
1932).
25. M. Crofts, Aspectos da língua Mundurukú (Summer
Institute of Linguistics, Brasília, 1985).
26. See supporting data on Science Online.
27. T. Pollmann, C. Jansen, Cognition 59, 219 (1996).
28. R. S. Moyer, T. K. Landauer, Nature 215, 1519 (1967).
29. P. B. Buckley, C. B. Gillman, J. Exp. Psychol. 103,
1131 (1974).
30. Comparison performance remained far above chance
in two independent sets of trials where the two sets
were equalized either on intensive parameters (such
as dot size) or on extensive parameters (such as total
luminance) [see (26)]. Thus, subjects did not base
their responses on a single non-numerical parameter.
Performance was, however, worse for extensive-
matched pairs (88.3% versus 76.3% correct, P <
0.0001). We do not know the origins of this effect,
but it is likely that, like Western subjects, the
Mundurukú estimate number via some simple
relation such as the total occupied screen area
divided by the average space around the items,
which can be subject to various biases [see (32)].
31. Performance remained above chance for both intensive-
matched and extensive-matched sets (89.5 and 81.8%
correct, respectively; both P < 0.0001). Although the
difference between stimulus sets was again signif-
icant (P < 0.0001), it was identical in Mundurukú
and French subjects. Furthermore, performance was
significantly above chance for a vast majority of
items (44/51) and was never significantly below
chance, making it unlikely that participants were
using a simple shortcut other than mental addition.
For instance, they did not merely compare n1 with
n3 or n2 with n3, because when n1 and n2 were
both smaller than n3, they still discerned accurately
whether their sum was larger or smaller than the
proposed number n3, even when both differed by
only 30% (76.3 and 67.4% correct, respectively;
both P < 0.005).
32. J. Allik, T. Tuulmets, Percept. Psychophys. 49, 303
(1991).
33. This work was developed as part of a larger project
on the nature of quantification and functional
categories developed jointly with the linguistic
section of the Department of Anthropology of the
National Museum of Rio de Janeiro and the Unité
Mixte de Recherche 7023 of the CNRS, with the
agreement of Fundação Nacional do Índio (FUNAI)
and Conselho Nacional de Desenvolvimento Científico
e Tecnológico (CNPQ) of Brazil. It was supported
by INSERM, CNRS, the French Ministry of Foreign
Affairs (P.P.), and a McDonnell Foundation centennial
fellowship (S.D.). We thank E. Spelke and M. Piazza
for discussions, A. Ramos for constant advice, and V.
Poxõ, C. Tawé, and F. de Assis for help in testing.
Movies illustrating the difficulty of counting for the
Mundurukú can be viewed at http://video.rap.prd.fr/
videotheques/cnrs/grci.html.
Supporting Online Material
www.sciencemag.org/cgi/content/full/306/5695/499/
DC1
Materials and Methods
References
Documentary Photos
Movie S1
28 June 2004; accepted 3 September 2004
Separate Neural Systems
Value Immediate and Delayed
Monetary Rewards
Samuel M. McClure,1* David I. Laibson,2 George Loewenstein,3 Jonathan D. Cohen1,4
When humans are offered the choice between rewards available at different
points in time, the relative values of the options are discounted according to
their expected delays until delivery. Using functional magnetic resonance
imaging, we examined the neural correlates of time discounting while subjects
made a series of choices between monetary reward options that varied by
delay to delivery. We demonstrate that two separate systems are involved in
such decisions. Parts of the limbic system associated with the midbrain do-
pamine system, including paralimbic cortex, are preferentially activated by
decisions involving immediately available rewards. In contrast, regions of the
lateral prefrontal cortex and posterior parietal cortex are engaged uniformly
by intertemporal choices irrespective of delay. Furthermore, the relative en-
gagement of the two systems is directly associated with subjects’ choices,
with greater relative fronto-parietal activity when subjects choose longer term
options.
In Aesop's classic fable, the ant and the
grasshopper are used to illustrate two famil-
iar, but disparate, approaches to human inter-
temporal decision making. The grasshopper
luxuriates during a warm summer day, in-
attentive to the future. The ant, in contrast,
stores food for the upcoming winter. Human
decision makers seem to be torn between an
impulse to act like the indulgent grasshopper
and an awareness that the patient ant often
gets ahead in the long run. An active line of
research in both psychology and economics
has explored this tension. This research is
unified by the idea that consumers behave
impatiently today but prefer/plan to act pa-
tiently in the future (1, 2). For example, some-
one offered the choice between $10 today and
$11 tomorrow might be tempted to choose the
immediate option. However, if asked today
to choose between $10 in a year and $11 in a
year and a day, the same person is likely to
prefer the slightly delayed but larger amount.
Economists and psychologists have theo-
rized about the underlying cause of these
dynamically inconsistent choices. It is well
accepted that rationality entails treating each
moment of delay equally, thereby discount-
ing according to an exponential function
(1–3). Impulsive preference reversals are be-
lieved to be indicative of disproportionate
valuation of rewards available in the imme-
diate future (4–6). Some authors have argued
that such dynamic inconsistency in prefer-
ence is driven by a single decision-making
system that generates the temporal inconsist-
ency (7–9), while other authors have argued
that the inconsistency is driven by an inter-
action between two different decision-making
systems (5, 10, 11). We hypothesize that the
discrepancy between short-run and long-run
preferences reflects the differential acti-
vation of distinguishable neural systems.
Specifically, we hypothesize that short-run
impatience is driven by the limbic system,
which responds preferentially to immediate
rewards and is less sensitive to the value of
future rewards, whereas long-run patience is
mediated by the lateral prefrontal cortex and
associated structures, which are able to eval-
uate trade-offs between abstract rewards, in-
cluding rewards in the more distant future.
A variety of hints in the literature suggest
that this might be the case. First, there is the
large discrepancy between time discounting
in humans and in other species (12, 13). Hu-
mans routinely trade off immediate costs/
benefits against costs/benefits that are de-
layed by as much as decades. In contrast,
even the most advanced primates, which dif-
fer from humans dramatically in the size of
their prefrontal cortexes, have not been ob-
served to engage in unpreprogrammed delay
of gratification involving more than a few
minutes (12, 13). Although some animal be-
havior appears to weigh trade-offs over longer
horizons (e.g., seasonal food storage), such
behavior appears invariably to be stereo-
typed and instinctive, and hence unlike the
generalizable nature of human planning. Sec-
ond, studies of brain damage caused by sur-
gery, accidents, or strokes consistently point
to the conclusion that prefrontal damage often
leads to behavior that is more heavily influ-
enced by the availability of immediate re-
wards, as well as failures in the ability to plan
(14, 15). Third, a "quasi-hyperbolic" time-
discounting function (16) that splices together
two different discounting functions—one that
distinguishes sharply between present and
future and another that discounts exponen-
tially and more shallowly—has been found
to provide a good fit to experimental data and
to shed light on a wide range of behaviors,
such as retirement saving, credit-card borrow-
ing, and procrastination (17, 18). However,
despite these and many other hints that time
discounting may result from distinct pro-
cesses, little research to date has attempted
to directly identify the source of the tension
between short-run and long-run preferences.
The quasi-hyperbolic time-discounting
function—sometimes referred to as beta-delta
preference—was first proposed by Phelps
and Pollak (19) to model the planning of
wealth transfers across generations and ap-
plied to the individual's time scale by Elster
(20) and Laibson (16). It posits that the pres-
ent discounted value of a reward of value u
received at delay t is equal to u for t = 0 and
to βδ^t u for t > 0, where 0 < β ≤ 1 and δ ≤ 1.
The β parameter (actually its inverse) rep-
resents the special value placed on immediate
rewards relative to rewards received at any
other point in time. When β < 1, all future
rewards are uniformly downweighted rela-
tive to immediate rewards. The δ parameter is
simply the discount rate in the standard ex-
ponential formula, which treats a given delay
equivalently regardless of when it occurs.
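To make the functional form concrete, the following is a minimal sketch (our illustration, not code from the paper; the parameter values β = 0.7 and δ = 0.99 per day are purely hypothetical) of the quasi-hyperbolic discount function and of the preference reversal described earlier:

    def discounted_value(u, t, beta=0.7, delta=0.99):
        """Quasi-hyperbolic present value of reward u at delay t (days).

        t = 0 is worth u; t > 0 is worth beta * delta**t * u,
        with 0 < beta <= 1 and delta <= 1 (illustrative values only).
        """
        return u if t == 0 else beta * (delta ** t) * u

    # $10 today vs. $11 tomorrow: the immediate option wins (10 > ~7.6) ...
    print(discounted_value(10, 0), discounted_value(11, 1))
    # ... but the same pair deferred by a year reverses (~0.18 < ~0.19).
    print(discounted_value(10, 365), discounted_value(11, 366))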
Our key hypothesis is that the pattern of
behavior that these two parameters summa-
rize—β, which reflects the special weight
placed on outcomes that are immediate, and
δ, which reflects a more consistent weighting
of time periods—stems from the joint influ-
ence of distinct neural processes, with β
mediated by limbic structures and δ by the
lateral prefrontal cortex and associated struc-
tures supporting higher cognitive functions.
To test this hypothesis, we measured the
brain activity of participants as they made a
series of intertemporal choices between early
monetary rewards ($R available at delay d)
and later monetary rewards ($R′ available at
delay d′, where d′ > d).
1Department of Psychology and Center for the Study of Brain, Mind, and Behavior, Princeton University, Princeton, NJ 08544, USA. 2Department of Economics, Harvard University, and National Bureau of Economic Research, Cambridge, MA 02138, USA. 3Department of Social and Decision Sciences, Carnegie Mellon University, Pittsburgh, PA 15213, USA. 4Department of Psychiatry, University of Pittsburgh, Pittsburgh, PA 15260, USA.
*To whom correspondence should be addressed. E-mail: smcclure@princeton.edu
[Fig. 1 image panels: (A) activation maps shown at z = –4 mm (MOFC), y = 8 mm (VStr), and x = 4 mm (MPFC, PCC); (B) event-related time courses (% signal change over time) for d = today, d = 2 weeks, and d = 1 month.]
Fig. 1. Brain regions that are preferentially activated for choices in which money is available
immediately (β areas). (A) A random effects general linear model analysis revealed five regions
that are significantly more activated by choices with immediate rewards, implying d = 0 (at P <
0.001, uncorrected; five contiguous voxels). These regions include the ventral striatum (VStr),
medial orbitofrontal cortex (MOFC), medial prefrontal cortex (MPFC), posterior cingulate cortex
(PCC), and left posterior hippocampus (table S1). (B) Mean event-related time courses of β areas
(dashed line indicates the time of choice; error bars are SEM; n = 14 subjects). BOLD signal changes
in the VStr, MOFC, MPFC, and PCC are all significantly greater when choices involve money
available today (d = 0, red traces) versus when the earliest choice can be obtained only after a 2-
week or 1-month delay (d = 2 weeks and d = 1 month, green and blue traces, respectively).
The early option always
had a lower (undiscounted) value than the
later option (i.e., $R < $R′). The two options
were separated by a minimum time delay of
2 weeks. In some choice pairs, the early
option was available "immediately" (i.e., at
the end of the scanning session; d = 0). In
other choice pairs, even the early option was
available only after a delay (d > 0).
Our hypotheses led us to make three cri-
tical predictions: (i) choice pairs that include
a reward today (i.e., d = 0) will preferentially
engage limbic structures relative to choice
pairs that do not include a reward today (i.e.,
d > 0); (ii) lateral prefrontal areas will ex-
hibit similar activity for all choices, as com-
pared with rest, irrespective of reward delay;
(iii) trials in which the later reward is se-
lected will be associated with relatively
higher levels of lateral prefrontal activation,
reflecting the ability of this system to value
greater rewards even when they are delayed.
Participants made a series of binary
choices between smaller/earlier and larger/
later money amounts while their brains were
scanned using functional magnetic resonance
imaging. The specific amounts (ranging from
$5 to $40) and times of availability (ranging
from the day of the experiment to 6 weeks
later) were varied across choices. At the end
of the experiment, one of the participant's
choices was randomly selected to count; that
is, they received one of the rewards they had
selected at the designated time of delivery.
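As a concrete illustration of the task structure, the following sketch constructs hypothetical choice pairs under the constraints stated above (amounts between $5 and $40, delays from the day of the experiment to 6 weeks, at least 2 weeks between options, later amount always larger); the sampling scheme and the particular percentage steps are our assumptions, not the authors' stimulus code:

    import random

    def make_choice_pair(rng):
        # Earlier option at delay d (in days; 0 = end of the scanning session),
        # later option at least 14 days after it, both within the 6-week horizon.
        d = rng.choice([0, 14, 28])
        gap = rng.choice([g for g in (14, 28) if d + g <= 42])
        r = round(rng.uniform(5.0, 26.0), 2)                     # earlier amount ($)
        pct = rng.choice([0.01, 0.03, 0.05, 0.15, 0.25, 0.35, 0.50])
        r_later = round(r * (1 + pct), 2)                         # later amount is always larger
        return {"R": r, "d": d, "R_later": r_later, "d_later": d + gap}

    rng = random.Random(0)
    trials = [make_choice_pair(rng) for _ in range(40)]
    paid_trial = rng.choice(trials)   # one choice is randomly selected to count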
To test our hypotheses, we estimated a
general linear model (GLM) using standard
regression techniques (21). We included two
primary regressors in the model, one that
modeled decision epochs with an immediacy
option in the choice set (the "immediacy"
variable) and another that modeled all deci-
sion epochs (the "all decisions" variable).
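A schematic sketch of such a two-regressor GLM is given below; it is our reconstruction for illustration only (the crude gamma HRF, the event timing, and the ordinary least squares fit are assumptions, not the authors' analysis pipeline):

    import numpy as np
    from scipy.stats import gamma

    def make_design(onsets_immediate, onsets_all, n_scans, tr=2.0):
        """Stick functions at decision onsets convolved with a crude gamma HRF."""
        t = np.arange(0, 30, tr)
        hrf = gamma.pdf(t, a=6)                       # peaks roughly 5-6 s after the event
        X = np.zeros((n_scans, 2))
        for col, onsets in enumerate((onsets_immediate, onsets_all)):
            stick = np.zeros(n_scans)
            stick[(np.asarray(onsets) / tr).astype(int)] = 1.0
            X[:, col] = np.convolve(stick, hrf)[:n_scans]
        return np.column_stack([X, np.ones(n_scans)])  # constant term

    def fit_glm(X, bold):
        """Ordinary least squares; bold has shape (n_scans, n_voxels)."""
        return np.linalg.lstsq(X, bold, rcond=None)[0]

    # Voxels loading reliably on column 0 ("immediacy") are candidate beta areas;
    # voxels loading on column 1 ("all decisions") are candidate delta areas.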
We defined β areas as voxels that loaded
on the "immediacy" variable. These are pref-
erentially activated by experimental choices
that included an option for a reward today
(d = 0) as compared with choices involving
only delayed outcomes (d > 0). As shown in
Fig. 1, brain areas disproportionately acti-
vated by choices involving an immediate out-
come (β areas) include the ventral striatum,
medial orbitofrontal cortex, and medial pre-
frontal cortex. As predicted, these are classic
limbic structures and closely associated
paralimbic cortical projections. These areas
are all also heavily innervated by the
midbrain dopamine system and have been
shown to be responsive to reward expec-
tation and delivery by the use of direct
neuronal recordings in nonhuman species
(22–24) and brain-imaging techniques in
humans (25–27) (Fig. 1). The time courses
of activity for these areas are shown in Fig.
1B (28, 29).
We considered voxels that loaded on the
"all decisions" variable in our GLM to be
candidate δ areas. These were activated by
all decision epochs and were not preferen-
tially activated by experimental choices that
included an option for a reward today. This
criterion identified several areas (Fig. 2), some
of which are consistent with our predictions
about the δ system (such as lateral prefrontal
cortex). However, others (including primary
visual and motor cortices) more likely reflect
nonspecific aspects of task performance en-
gaged during the decision-making epoch, such
as visual processing and motor response.
Therefore, we carried out an additional anal-
ysis designed to identify areas among these
candidate δ regions that were more specif-
ically associated with the decision process.
Specifically, we examined the relationship
of activity to decision difficulty, under the
assumption that areas involved in decision
making would be engaged to a greater de-
gree (and therefore exhibit greater activity)
by more difficult decisions (30). As expected,
the areas of activity observed in visual, pre-
motor, and supplementary motor cortex were
not influenced by difficulty, consistent with
their role in non–decision-related processes.
In contrast, all of the other regions in pre-
frontal and parietal cortex identified in our
initial screen for δ areas showed a signifi-
cant effect of difficulty, with greater activ-
ity associated with more difficult decisions
(Fig. 3) (31). These findings are consistent
with a large number of neurophysiological and
neuroimaging studies that have implicated
these areas in higher level cognitive func-
tions (32, 33). Furthermore, the areas iden-
tified in inferior parietal cortex are similar to
those that have been implicated in numerical
processing, both in humans and in nonhuman
species (34). Therefore, our findings are con-
sistent with the hypothesis that lateral pre-
frontal (and associated parietal) areas are
activated by all types of intertemporal choices,
not just by those involving immediate rewards.
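The easy/difficult split used in this analysis can be summarized in a few lines; the threshold values follow the percentages reported in Fig. 3 and note 30, while the function itself is our paraphrase rather than the authors' code:

    def classify_difficulty(pct_difference):
        """pct_difference = 100 * (R_later - R) / R for a choice pair.

        Pairs differing by 1-3% (earlier option always taken) or 35-50%
        (later option always taken) are "easy"; 5-25% pairs are "difficult".
        """
        return "easy" if pct_difference <= 3 or pct_difference >= 35 else "difficult"

    assert classify_difficulty(1) == "easy"
    assert classify_difficulty(15) == "difficult"
    assert classify_difficulty(50) == "easy"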
If this hypothesis is correct, then it makes
an additional strong prediction: For choices
between immediate and delayed outcomes
(d = 0), decisions should be determined by
the relative activation of the β and δ systems
(35). More specifically, we assume that when
the β system is engaged, it almost always
favors the earlier option. Therefore, choices
for the later option should reflect a greater
influence of the δ system. This implies that
choices for the later option should be asso-
ciated with greater activity in the δ system
than in the β system. To test this prediction,
we examined activity in β and δ areas for all
choices involving the opportunity for a reward
today (d = 0) to ensure some engagement of
the β system. Figure 4 shows that our pre-
diction is confirmed: δ areas were signifi-
cantly more active than were β areas when
participants chose the later option, whereas
activity was comparable (with a trend toward
greater β-system activity) when participants
chose the earlier option.
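A sketch of how this comparison can be computed, following the normalization described in the Fig. 4 legend (z-scoring percent signal change within each area and subject before averaging); the data layout and the function below are assumed for illustration, not taken from the authors' analysis:

    import numpy as np

    def system_activity_by_choice(signal_by_area, choices):
        """signal_by_area: area name -> (n_subjects, n_trials) percent signal change
        on d = 0 trials; choices: length n_trials array of "early"/"late" labels."""
        choices = np.asarray(choices)
        z_areas = []
        for data in signal_by_area.values():
            mu = data.mean(axis=1, keepdims=True)
            sd = data.std(axis=1, keepdims=True)
            z_areas.append((data - mu) / sd)           # z-score within area and subject
        z = np.mean(z_areas, axis=0)                    # average across areas in the system
        return {c: z[:, choices == c].mean() for c in ("early", "late")}

    # Running this separately on the beta areas and the delta areas and comparing the
    # two systems within each choice outcome corresponds to the contrast in Fig. 4.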
[Fig. 2 image panels: (A) activation maps shown at x = 44 mm and x = 0 mm (VCtx, PMA, SMA, RPar, DLPFC, VLPFC, LOFC); (B) event-related time courses (% signal change over time) for d = today, d = 2 weeks, and d = 1 month.]
Fig. 2. Brain regions that are active while making choices independent of the delay (d) until the
first available reward (δ areas). (A) A random effects general linear model analysis revealed eight
regions that are uniformly activated by all decision epochs (at P < 0.001, uncorrected; five con-
tiguous voxels). These areas include regions of visual cortex (VCtx), premotor area (PMA), and
supplementary motor area (SMA). In addition, areas of the right and left intraparietal cortex (RPar,
LPar), right dorsolateral prefrontal cortex (DLPFC), right ventrolateral prefrontal cortex (VLPFC), and
right lateral orbitofrontal cortex (LOFC) are also activated (table S2). (B) Mean event-related time
courses for δ areas (dashed line indicates the time of choice; error bars are SEM; n = 14 subjects). A
three-way analysis of variance indicated that the brain regions identified by this analysis are
affected differently by delay (d) than are those regions identified in Fig. 1 (P < 0.0001).
In economics, intertemporal choice has
long been recognized as a domain in which
"the passions" can have large sway in af-
fecting our choices (36). Our findings lend
support to this intuition. Our analysis shows
that the β areas, which are activated dis-
proportionately when choices involve an op-
portunity for near-term reward, are asso-
ciated with limbic and paralimbic cortical
structures, known to be rich in dopaminergic
innervation. These structures have con-
sistently been implicated in impulsive be-
havior (37), and drug addiction is commonly
thought to involve disturbances of dopaminer-
gic neurotransmission in these systems (38).
Our results help to explain why many
factors other than temporal proximity, such
as the sight or smell or touch of a desired
object, are associated with impulsive behav-
ior. If impatient behavior is driven by limbic
activation, it follows that any factor that pro-
duces such activation may have effects sim-
ilar to that of immediacy (10). Thus, for
example, heroin addicts temporally discount
not only heroin but also money more steeply
when they are in a drug-craving state (im-
mediately before receiving treatment with an
opioid agonist) than when they are not in a
drug-craving state (immediately after treat-
ment) (39). Immediacy, it seems, may be
only one of many factors that, by producing
limbic activation, engenders impatience. An
important question for future research will be
to consider how the steep discounting ex-
hibited by limbic structures in our study of
intertemporal preferences relates to the in-
volvement of these structures (and the stri-
atum in particular) in other time-processing
tasks, such as interval timing (40) and tem-
poral discounting in reinforcement learning
paradigms (41).
Our analysis shows that the δ areas,
which are activated uniformly during all de-
cision epochs, are associated with lateral
prefrontal and parietal areas commonly impli-
cated in higher level deliberative processes
and cognitive control, including numerical
computation (34). Such processes are likely
to be engaged by the quantitative analysis
of economic options and the valuation of
future opportunities for reward. The degree
of engagement of the δ areas predicts de-
ferral of gratification, consistent with a key
role in future planning (32, 33, 42).
More generally, our present results con-
verge with those of a series of recent imaging
studies that have examined the role of limbic
structures in valuation and decision making
(26, 43, 44) and interactions between prefron-
tal cortex and limbic mechanisms in a variety
of behavioral contexts, ranging from econom-
ic and moral decision making to more visceral
responses, such as pain and disgust (45–48).
Collectively, these studies suggest that human
behavior is often governed by a competition
between lower level, automatic processes that
may reflect evolutionary adaptations to par-
ticular environments, and the more recently
evolved, uniquely human capacity for ab-
stract, domain-general reasoning and future
planning. Within the domain of intertemporal
choice, the idiosyncrasies of human prefer-
ences seem to reflect a competition between
the impetuous limbic grasshopper and the
provident prefrontal ant within each of us.
References and Notes
1. G. Ainslie, Psychol. Bull. 82, 463 (1975).
2. S. Frederick, G. Loewenstein, T. O’Donoghue, J. Econ.
Lit. 40, 351 (2002).
3. T. C. Koopmans, Econometrica 32, 82 (1960).
4. G. Ainslie, Picoeconomics (Cambridge Univ. Press,
Cambridge, 1992).
5. H. M. Shefrin, R. H. Thaler, Econ. Inq. 26, 609 (1988).
6. R. Benabou, M. Pycia, Econ. Lett. 77, 419 (2002).
7. R. J. Herrnstein, The Matching Law: Papers in
Psychology and Economics, H. Rachlin, D. I. Laibson,
Eds. (Harvard Univ. Press, Cambridge, MA, 1997).
8. H. Rachlin, The Science of Self-Control (Harvard
Univ. Press, Cambridge, MA, 2000).
9. P. R. Montague, G. S. Berns, Neuron 36, 265 (2002).
10. G. Loewenstein, Org. Behav. Hum. Decis. Proc. 65,
272 (1996).
11. J. Metcalfe, W. Mischel, Psychol. Rev. 106, 3 (1999).
12. H. Rachlin, Judgment, Decision and Choice: A Cognitive/
Behavioral Synthesis (Freeman, New York, 1989),
chap. 7.
13. J. H. Kagel, R. C. Battalio, L. Green, Economic Choice
Theory: An Experimental Analysis of Animal Behavior
(Cambridge Univ. Press, Cambridge, 1995).
14. M. Macmillan, Brain Cogn. 19, 72 (1992).
15. A. Bechara, A. R. Damasio, H. Damasio, S. W. Anderson,
Cognition 50, 7 (1994).
16. D. Laibson, Q. J. Econ. 112, 443 (1997).
17. G. Angeletos, D. Laibson, A. Repetto, J. Tobacman,
S. Weinberg, J. Econ. Perspect. 15, 47 (2001).
18. T. O’Donoghue, M. Rabin, Am. Econ. Rev. 89, 103
(1999).
19. E. S. Phelps, R. A. Pollak, Rev. Econ. Stud. 35, 185
(1968).
20. J. Elster, Ulysses and the Sirens: Studies in Rationality
and Irrationality (Cambridge Univ. Press, Cambridge,
1979).
21. Materials and methods are available as supporting
material on Science Online.
22. J. Olds, Science 127, 315 (1958).
23. B. G. Hoebel, Am. J. Clin. Nutr. 42, 1133 (1985).
24. W. Schultz, P. Dayan, P. R. Montague, Science 275,
1593 (1997).
[Fig. 3 image panels: (A) percentage of early choices as a function of the dollar-amount difference (1-3%, 5-25%, 35-50%); (B) response times (s) for difficult versus easy decisions; (C) event-related time courses (% signal change over time) in VCtx, PMA, RPar, DLPFC, VLPFC, and LOFC for difficult and easy decisions.]
Fig. 3. Differences in brain activity while making easy versus difficult decisions separate δ areas
associated with decision making from those associated with non–decision-related aspects of task
performance. (A) Difficult decisions were defined as those for which the difference in dollar
amounts was between 5% and 25%. (B) Response times (RT) were significantly longer for difficult
choices than for easy choices (P < 0.005). (C) Difficult choices are associated with greater BOLD
signal changes in the DLPFC, VLPFC, LOFC, and inferoparietal cortex (time by difficulty interaction
significant at P < 0.05 for all areas).
Fig. 4. Greater activity in δ than β areas is as-
sociated with the choice of later larger rewards.
To assess overall activity among β and δ areas
and to make appropriate comparisons, we first
normalized the percent signal change (using a
z-score correction) within each area and each
subject, so that the contribution of each brain
area was determined relative to its own range
of signal variation. Normalized signal change
scores were then averaged across areas and sub-
jects separately for the β and δ areas (as iden-
tified in Figs. 1 and 2). The average change
scores are plotted for each system and each
choice outcome. Relative activity in β and δ
brain regions correlates with subjects' choices
for decisions involving money available today.
There was a significant interaction between
area and choice (P < 0.005), with δ areas
showing greater activity when the choice was
made for the later option.
25. H. C. Breiter, B. R. Rosen, Ann. N.Y. Acad. Sci. 877,
523 (1999).
26. B. Knutson, G. W. Fong, C. M. Adams, J. L. Varner,
D. Hommer, Neuroreport 12, 3683 (2001).
27. S. M. McClure, G. S. Berns, P. R. Montague, Neuron
38, 339 (2003).
28. Our analysis also identified a region in the dorsal
hippocampus as responding preferentially in the d =
today condition. However, the mean event-related
response in these voxels was qualitatively different
from that in the other regions identified by the β
analysis (fig. S2). To confirm this, for each area we
conducted paired t tests comparing d = today with
d = 2 weeks and d = 1 month at each time point
after the time of choice. All areas showed at least
two time points at which activity was significantly
greater for d = today (P < 0.01; Bonferroni
correction for five comparisons) except the hippo-
campus, which, by contrast, is not significant for
any individual time point. For these reasons, we do
not include this region in further analyses. Results
are available in (21) (fig. S2).
29. One possible explanation for increased activity
associated with choice sets that contain immediate
rewards is that the discounted value for these choice
sets is higher than the discounted value of choice
sets that contain only delayed rewards. To rule out
this possibility, we estimated discounted value for
each choice as the maximum discounted value
among the two options. We made the simplifying
assumption that subjects maintain a constant weekly
discount rate and estimated this value based on ex-
pressed preferences (best-fitting value was 7.5% dis-
count rate per week). We then regressed out effects
of value from our data with two separate mecha-
nisms. First, we included value as a separate control
variable in our baseline GLM model and tested for β
and δ effects. Second, we performed a hierarchical
analysis in which the effect of value was estimated in
a first-stage GLM; this source of variation was then
partialed out of the data and the residual data was
used to identify β and δ regions in a second-stage
GLM. Both of these procedures indicate that value has
minimal effects on our results, with all areas of ac-
tivation remaining significant at P < 0.001, uncorrected.
30. Difficulty was assessed by appealing to the variance
in preferences indicated by participants. In particular,
when the percent difference between dollar amounts
of the options in each choice pair was 1% or 3%,
subjects invariably opted for the earlier reward, and
when the percent difference was 35% or 50%, sub-
jects always selected the later, larger amount. Given
this consistency in results, we call these choices
‘‘easy.’’ For all other differences, subjects show large
variability in preference, and we call these choices
‘‘difficult’’ (Fig. 3A). These designations are further jus-
tified by analyzing the mean response time for dif-
ficult and easy questions. Subjects required on average
3.95 s to respond to difficult questions and 3.42 s
to respond to easy questions (Fig. 3B) (P < 0.005).
We assume that these differences in response time
reflect prolonged decision-making processes for the
difficult choices. Based on these designations, we
calculated mean blood oxygenation level-dependent
(BOLD) responses for easy and difficult choices (Fig. 3C).
31. Because difficulty was associated with longer RT, it
was necessary to rule out nonspecific (i.e., non–
decision-related) effects of RT as a confound in
producing our results. We performed analyses
controlling for RT analogous to those performed for
discounted value as described above (29). This is a
conservative test because, as noted above (30), we
hypothesize that at least some of the variance in RT
was related to the decision-making processes of
interest. Nevertheless, these analyses indicated that
removing the effects of RT does not qualitatively
affect our results.
32. E. K. Miller, J. D. Cohen, Annu. Rev. Neurosci. 24, 167
(2001).
33. E. E. Smith, J. Jonides, Science 283, 1657 (1999).
34. S. Dehaene, G. Dehaene-Lambertz, L. Cohen, Trends
Neurosci. 21, 355 (1998).
35. This prediction requires only that we assume that
activity in each system reflects its overall engage-
ment by the decision and, therefore, its contribution
to the outcome. Specifically, it does not require that
we assume that the level of activity in either system
reflects the value assigned to a particular choice.
36. A. Smith, Theory of Moral Sentiments (A. Millar,
A. Kinkaid, J. Bell, London and Edinburgh, 1759).
37. J. Biederman, S. V. Faraone, J. Atten. Disord. 6, S1 (2002).
38. G. F. Koob, F. E. Bloom, Science 242, 715 (1988).
39. L. A. Giordano et al., Psychopharmacology (Berl.) 163,
174 (2002).
40. W. H. Meck, A. M. Benson, Brain Cogn. 48, 195 (2002).
41. S. C. Tanaka et al., Nat. Neurosci. 7, 887 (2004).
42. Our results are also consistent with the hypothesis
that the fronto-parietal system inhibits the impulse
to choose more immediate rewards. However, this
hypothesis does not easily account for the fact that
this system is recruited even when both rewards are
substantially delayed (e.g., 1 month versus 1 month
and 2 weeks) and the existence of an impulsive re-
sponse seems unlikely. Therefore, we favor the hypoth-
esis that fronto-parietal regions may project future
benefits (through abstract reasoning or possibly
‘‘simulation’’ with imagery), providing top-down sup-
port for responses that favor greater long-term re-
ward and allowing them to compete effectively with
limbically mediated responses when these are present.
43. I. Aharon et al., Neuron 32, 537 (2001).
44. B. Seymour et al., Nature 429, 664 (2004).
45. J. D. Greene, R. B. Sommerville, L. E. Nystrom, J. M.
Darley, J. D. Cohen, Science 293, 2105 (2001).
46. A. G. Sanfey, J. K. Rilling, J. A. Aronson, L. E. Nystrom,
J. D. Cohen, Science 300, 1755 (2003).
47. T. D. Wager et al., Science 303, 1162 (2004).
48. K. N. Ochsner, S. A. Bunge, J. J. Gross, J. D. Gabrieli, J.
Cogn. Neurosci. 14, 1215 (2002).
49. We thank K. D’Ardenne, L. Nystrom, and J. Lee for
help with the experiment and J. Schooler for inspiring
discussions in the early planning phases of this work.
This work was supported by NIH grants MH132804
(J.D.C.), MH065214 (S.M.M.), National Institute on
Aging grant AG05842 (D.I.L.), and NSF grant SES-
0099025 (D.I.L.).
Supporting Online Material
www.sciencemag.org/cgi/content/full/306/5695/503/
DC1
Materials and Methods
Figs. S1 and S2
Tables S1 and S2
References
1 June 2004; accepted 26 August 2004