Article

The Neurobiological Foundations of Valuation in Human Decision Making Under Uncertainty


Abstract

INTRODUCTION The goal of this chapter is to review recent neurobiological evidence to improve our understanding of human valuation under uncertainty. Although ultimately interested in human behavior, we will borrow from studies of animals with related brain structures, namely non-human primates. Specifically, we wish to explore how valuation is accomplished. As we shall see, the evidence rejects a pure "retrieval from memory" model; instead, values are computed. This raises the issue: what computational model(s) are being used? Since actual choice can be summarized in terms
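The chapter's central claim above is that values under uncertainty are computed rather than retrieved. Purely as an illustration of two candidate computations (not the chapter's own model), the Python sketch below values a hypothetical two-outcome gamble under expected utility with a power utility function and under a mean-variance (risk-return) trade-off; all payoffs, probabilities, and parameters are invented.

```python
import numpy as np

# Hypothetical two-outcome gamble: win 10 with p = 0.4, otherwise 0.
payoffs = np.array([10.0, 0.0])
probs = np.array([0.4, 0.6])

# Candidate 1: expected utility with power utility u(x) = x**alpha (alpha < 1 implies risk aversion).
alpha = 0.8
expected_utility = np.sum(probs * payoffs**alpha)

# Candidate 2: mean-variance (risk-return) valuation, value = E[x] - b * Var[x].
mean = np.sum(probs * payoffs)
variance = np.sum(probs * (payoffs - mean)**2)
b = 0.05  # hypothetical risk-aversion weight
mean_variance_value = mean - b * variance

print(f"expected utility  : {expected_utility:.3f}")
print(f"mean - b*variance : {mean_variance_value:.3f}")
```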


... The third section, "Social Decision Making, Neuroeconomics and Emotions" also has relevant material for accountants interested in the role of affect in decision making. The last two sections, "Understanding Valuation-Learning Valuations" and "The Neural Mechanisms for Choice," tend to be more technical from a neuroscience point of view and less likely to interest the general BAR audience, although bits and pieces within these sections are likely to benefit BAR researchers, e.g., the evidence from studies on learning (Niv and Montague, 2009H) and on valuation under uncertainty (Bossaerts et al., 2009H). Due to the close relationship between neuroeconomics and BAR, our discussion of the Handbook's content will concentrate primarily on those identified as being relevant to behavioral economists and psychologists. ...
... Also, evidence on how the brain actually perceives and evaluates rewards and losses, and how it directs the human animal to achieve the intended rewards and avoid losses, e.g. the dopamine studies of Schultz (2009H), is expected to help in fine-tuning utility theory including its major variant, Prospect Theory (Fox & Poldrack, 2009H). Similarly, neural-level evidence on how values translate to choices (Rangel, 2009H; Glimcher, 2009H; Bossaerts et al., 2009H) and how primates, both those close to humans, e.g., chimpanzees in Silk (2009H), and further from humans, e.g., capuchin monkeys in Brosnan (2009H) perceive inequities, is likely to help understand and model choices and decisions at the individual level. Lastly, the ability of neuroeconomics methods to measure a person's well-being in a variety of ways has the potential to shift the focus of behavioral researchers from choice to welfare (Jamison, 2008). ...
... Behavioral researchers have known for a while that when people are presented with a choice between a sure thing and a gamble, people prefer the sure thing if the outcomes are framed as gains, but they prefer the gamble if the outcomes are framed as losses. Dayan & Seymour (2009H) and Bossaerts et al. (2009H) suggest that these preferences could be caused by a simple Pavlovian reflex. Imaging studies of framing effects (De Martino et al., 2006; Kahneman & Frederick, 2007) and framing studies in capuchin monkeys (Santos & Chen, 2009H) also provide support for this hypothesis. ...
Article
Full-text available
This paper discusses a recently published handbook on neuroeconomics (Glimcher et al., 2009H) and extends the discussion to reasons why this newly emerging discipline should be of interest to behavioral accounting researchers. We evaluate the achieved and potential contribution of neuroeconomics to the study of human economic behavior, and examine what behavioral accounting researchers can learn from neuroeconomics and whether we should expect to see a similar sub-field emerge within behavioral accounting in the near future. We conclude that a separate sub-field within behavioral accounting is not likely in the near future due mostly to practical reasons. However, the behavioral accounting researcher would do well to follow research in this discipline closely, and behavioral accountants in the near future are likely to collaborate with neuroscientists and neuroeconomists on questions of mutual interest.
... It is therefore critical to understand the role of risk aversion in post-decision wagering in order to fully dissect the role of non-conscious processes in decision making. There are additional aspects in the task design of the IGT itself (Fellows, 2004; Dunn et al., 2006; Bossaerts et al., 2008), which preclude an unequivocal interpretation either for or against non-conscious decision making. Notably, the IGT has at most one onset of awareness and is essentially a one-shot experiment, where subjects are not allowed to practice the task and they are not informed of any critical information about the task structure (e.g., the possible payoff structure for each deck, when the task ends, etc.). ...
... This is statistically inefficient, yielding effects that are sometimes unreliable even in healthy normal controls (Dunn et al., 2006). While a previous study (Oya et al., 2005) applied a reinforcement learning algorithm to the IGT to solve some of these difficulties, it remains unclear how to incorporate risk aversion effects into reinforcement learning under the unconstrained parameters of the original IGT (Bossaerts et al., 2008). The goal of our study was to test for non-conscious decision making while ruling out other explanations. ...
... This improvement was also critical for our Bayesian modeling analysis. If subjects did not know anything about the task structure we could still have used a reinforcement learning algorithm (Oya et al., 2005), but it is unclear how to combine such a model with risk aversion (Bossaerts et al., 2008). In fact, the model comparison (see Appendix) suggests that our Bayesian model with knowledge of the task structure performs better in predicting subjects' behavior than the one without this knowledge and other related reinforcement learning models (Busemeyer and Stout, 2002). ...
Article
Full-text available
To what extent can people choose advantageously without knowing why they are making those choices? This hotly debated question has capitalized on the Iowa Gambling Task (IGT), in which people often learn to choose advantageously without appearing to know why. However, because the IGT is unconstrained in many respects, this finding remains debated and other interpretations are possible (e.g., risk aversion, ambiguity aversion, limits of working memory, or insensitivity to reward/punishment can explain the finding of the IGT). Here we devised an improved variant of the IGT in which the deck-payoff contingency switches after subjects repeatedly choose from a good deck, offering the statistical power of repeated within-subject measures based on learning the reward contingencies associated with each deck. We found that participants exhibited low confidence in their choices, as probed with post-decision wagering, despite high accuracy in selecting advantageous decks in the task, which is putative evidence for non-conscious decision making. However, such a behavioral dissociation could also be explained by risk aversion, a tendency to avoid risky decisions under uncertainty. By explicitly measuring risk aversion for each individual, we predicted subjects’ post-decision wagering using Bayesian modeling. We found that risk aversion indeed does play a role, but that it did not explain the entire effect. Moreover, independently measured risk aversion was uncorrelated with risk aversion exhibited during our version of the IGT, raising the possibility that the latter risk aversion may be non-conscious. Our findings support the idea that people can make optimal choices without being fully aware of the basis of their decision. We suggest that non-conscious decision making may be mediated by emotional feelings of risk that are based on mechanisms distinct from those that support cognitive assessment of risk.
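The citing passages above note that it is unclear how to fold risk aversion into a standard reinforcement-learning account of the IGT, and the study instead combined Bayesian modeling with individually measured risk aversion. As one hedged illustration (not the authors' model), the sketch below tracks each deck's running mean payoff and payoff variance and scores decks by mean minus a risk penalty; the learning rate and risk-aversion coefficient are arbitrary.

```python
# Illustrative only: risk-sensitive deck valuation for an IGT-like task.
def update(stats, deck, payoff, lr=0.1):
    mean, var = stats[deck]
    delta = payoff - mean
    mean += lr * delta                 # update expected payoff
    var += lr * (delta**2 - var)       # update payoff variance (a simple risk estimate)
    stats[deck] = (mean, var)

def risk_sensitive_value(stats, deck, risk_aversion=0.5):
    mean, var = stats[deck]
    return mean - risk_aversion * var  # risk-averse agents penalize variable decks

stats = {"A": (0.0, 0.0), "B": (0.0, 0.0)}
for payoff in [100, -250, 100, 100]:   # hypothetical draws from deck "A"
    update(stats, "A", payoff)
print(risk_sensitive_value(stats, "A"))
```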
... Heuristics do not account for all attributes of prospects but focus instead on selective information to simplify decisions (Cox et al. 2015). This is consistent with a large body of empirical evidence on decision makers' use of heuristics when faced with complexity and uncertainty (Kahneman et al. 1982; Beshears et al. 2008; Katsikopoulos and Gigerenzer 2008; Bossaerts et al. 2009; Mousavi and Gigerenzer 2014). In the setup we introduce in the next sections, agents circumvent the challenge of computing (1) by adopting a heuristic strategy that relies on the polar cases described by the minimum possible keep and by retention of the regulatory bag limit. ...
Article
Recreational fishing is among the most popular outdoor recreational activities in the world. However, uncertainty in angler response to changes in regulation has limited managers' ability to prevent overfishing. We need to understand the heuristics anglers use to overcome informational and cognitive constraints that may limit their ability to assess stochastic attributes such as catch and environmental amenities. Using data from choice experiments, we specify and estimate preferences that rely on the theory of decision under unknown risks or ambiguity. We build on the observation that anglers interpret possession limits as targets or signals on stock productivity that anchor their expectations on retained catch, to specify a multiple prior model that relies on less onerous assumptions on anglers' information and numeracy than conventional demand models. We integrate the economic sub-model into a bioeconomic model to show that our specification provides better out-of-sample predictions than linear and CARA utility models.
... Heuristics do not account for all attributes of prospects but focus instead on selective information to simplify decisions (Cox et al. 2015). This is consistent with a large body of empirical evidence on decision makers' use of heuristics when faced with complexity and uncertainty (Kahneman et al. 1982, Katsikopoulos and Gigerenzer 2008, Beshears et al. 2008, Bossaerts et al. 2009, Mousavi and Gigerenzer 2014). In the setup we introduce in the next sections, agents circumvent the challenge of computing (1) by adopting a heuristic strategy that relies on the polar cases described by the minimum possible keep and by retention of the regulatory bag limit. ...
Preprint
Recreational fishing is among the most popular outdoor recreational activities in the world. However, uncertainty in angler response to changes in regulation has limited managers' ability to prevent overfishing. We need to understand the heuristics anglers use to overcome informational and cognitive constraints that may limit their ability to assess stochastic attributes such as catch and environmental amenities. Using data from choice experiments, we specify and estimate preferences that rely on the theory of decision under unknown risks or ambiguity. We build on the observation that anglers interpret possession limits as targets or signals on stock productivity that anchor their expectations on retained catch, to specify a multiple prior model that relies on less onerous assumptions on anglers' information and numeracy than conventional demand models. We integrate the economic sub-model into a bioeconomic model to show that our specification provides better out-of-sample predictions than linear and CARA utility models. JEL: C9, D61, D80, D81, Q20, Q22, Q26
... Nonetheless, emotions can also hamper judgment, for instance when there is an overload of emotions or when they stop the thinking process too soon. Furthermore, strong aversions and strong desires alike distort upwards any assessment of likelihood (Bossaerts et al. 2009). For instance, if animal spirits particularly fear financial crises, they will overestimate the likelihood of such an event. ...
Article
Full-text available
The author proposes an updated theory of animal spirits that builds on Keynes’s insights and extends them by incorporating recent developments in neuroscience and psychology. He places animal spirits in a broader perspective of two systems of reasoning, each pertaining to different degrees of uncertainty. He examines how animal spirits form beliefs by way of analogical reasoning, the mental shortcuts they use, and the role of emotions. He outlines how confidence is biased and distorts business forecasts. The brain seems to seek to achieve the highest level of confidence, and maintains this level by distorting subsequent cognitions so that they substantiate the initial view. Animal spirits are ambivalent. Emotions signal whether we under- or overachieve. Animal spirits push us to take greater or fewer risks in order to reach and secure our objectives. At the same time, they provide incentives to update or change our beliefs and goals, so that they may be more sensible. However, animal spirits also conclude hastily and seek confirmation of their beliefs; they elaborate patterns out of nothing and rely on stereotypes; and emotions may distort judgment. Interestingly, so long as confidence remains high, animal spirits continue to rule the roost; when confidence plummets, logic and calculation enter the fray.
... The first of these stages is concerned with the valuation of all goods and actions; the second is concerned with choosing… [from the] choice set." Bossaerts, Preuschoff, and Hsu (2009) discuss evidence that there are at least two imperfectly correlated brain signals involved in the choice process, one for assessing value, the other for the choice itself. More broadly, Glimcher (2005) surveys a body of evidence suggesting fundamental randomness in the activity of the brain. ...
Article
When an agent chooses between prospects, noise in information processing generates an effect akin to the winner's curse. Statistically unbiased perception systematically overvalues the chosen action because it fails to account for the possibility that noise is responsible for making the preferred action appear to be optimal. The optimal perception pattern exhibits a key feature of prospect theory, namely, overweighting of small probability events (and corresponding underweighting of high probability events). This bias arises to correct for the winner's curse effect.
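The mechanism described in this abstract, unbiased but noisy value estimates that look too good once an option has been chosen, can be demonstrated with a short simulation. The two prospects, noise level, and trial count below are arbitrary; the only point is that conditioning on "being chosen" inflates the apparent value of the winner.

```python
import numpy as np

rng = np.random.default_rng(0)
true_values = np.array([1.0, 1.0])  # two prospects with identical true value
noise_sd = 0.5
n_trials = 100_000

# Unbiased noisy percepts of each prospect's value on every trial.
percepts = true_values + rng.normal(0.0, noise_sd, size=(n_trials, 2))
chosen = percepts.argmax(axis=1)    # choose whichever prospect looks best
chosen_percept = percepts[np.arange(n_trials), chosen]

# The chosen option's perceived value systematically exceeds its true value (~1.28 vs 1.0 here),
# the selection effect the abstract likens to the winner's curse.
print(chosen_percept.mean())
```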
... Theories of this sort, if they consider utility at all, view it as entering at a fairly late stage of the processing stream; and experiments on vision throughout the hierarchy tend to consider perceptual variables in isolation of affective ones. All this is licensed, or indeed mandated, by the segregation of probability and utility in statistical decision theory (Bossaerts et al., 2008). ...
Article
Statistical decision theory seems to offer a clear framework for the integration of perception and action. In particular, it defines the problem of maximizing the utility of one's decisions in terms of two subtasks: inferring the likely state of the world, and tracking the utility that would result from different candidate actions in different states. This computational-level description underpins more process-level research in neuroscience about the brain's dynamic mechanisms for, on the one hand, inferring states and, on the other hand, learning action values. However, a number of different strands of recent work on this more algorithmic level have cast doubt on the basic shape of the decision-theoretic formulation, specifically the clean separation between states' probabilities and utilities. We consider the complex interrelationship between perception, action, and utility implied by these accounts. Normative theories of learning and decision making are motivated by a computational-level analysis of the task facing an organism: What should
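The decision-theoretic template this abstract starts from, choose the action whose utility averaged over the inferred state probabilities is highest, fits in a few lines. The states, probabilities, and utilities below are placeholders, not values from the paper.

```python
# Placeholder example of the statistical-decision-theory template:
# pick argmax over actions of sum_s P(s) * U(action, s).
p_state = {"rain": 0.3, "dry": 0.7}            # inferred state probabilities
utility = {                                    # utility of each action in each state
    "umbrella": {"rain": 1.0, "dry": 0.6},
    "no_umbrella": {"rain": -1.0, "dry": 1.0},
}

def expected_utility(action):
    return sum(p_state[s] * utility[action][s] for s in p_state)

best = max(utility, key=expected_utility)
print(best, round(expected_utility(best), 2))
```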
... According to the first theory, costly norm enforcement is driven by fairness preference, i.e. 'something equal should be given to those who are equal' (Aristotle, 1998; Fehr and Schmidt, 1999). Neurally, it has been demonstrated that the preference for fairness is implemented in the brain valuation system (Bossaerts et al., 2009; Bartra et al., 2013), most notably the ventral striatum (VS) (Tabibnia et al., 2008; Tricomi et al., 2010). The increased demand for fairness and the increased costly norm enforcement under adversity may therefore be associated with enhanced subjective value of fairness and its representation in VS. ...
Article
Full-text available
Humans are willing to punish norm violations even at a substantial personal cost. Using a variant of the ultimatum game and functional magnetic resonance imaging (fMRI), we investigated how the brain differentially responds to fairness in loss and gain domains. Participants (responders) received offers from anonymous partners indicating a division of an amount of monetary gain or loss. If they accept, both get their shares according to the division; if they reject, both get nothing or lose the entire stake. We used a computational model to derive the perceived fairness of offers and participant-specific inequity aversion. Behaviorally, participants were more likely to reject unfair offers in the loss (vs gain) domain. Neurally, the positive correlation between fairness and activation in the ventral striatum was reduced, whereas the negative correlations between fairness and activations in the dorsolateral prefrontal cortex were enhanced in the loss domain. Moreover, rejection-related dorsal striatum activation was higher in the loss domain. Furthermore, the gain-loss domain modulated costly punishment only when unfair behavior was directed toward the participants and not when it was directed toward others. These findings provide neural and computational accounts of increased costly norm enforcement under adversity and advance our understanding of the context-dependent nature of fairness preference.
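The abstract refers to a computational model that derives perceived fairness and participant-specific inequity aversion from offers. One standard formalization in this literature is the Fehr-Schmidt inequity-aversion utility, sketched below with hypothetical parameters; this is an illustration of that general idea, not necessarily the exact model estimated in the study.

```python
def fehr_schmidt_utility(own, other, alpha=1.0, beta=0.3):
    """Fehr-Schmidt inequity-averse utility for a two-player split.
    alpha: aversion to disadvantageous inequity; beta: aversion to advantageous inequity.
    Parameter values here are hypothetical."""
    return own - alpha * max(other - own, 0.0) - beta * max(own - other, 0.0)

# A responder facing an 8/2 split of a 10-unit gain rejects (both get 0) if the
# utility of accepting falls below the utility of rejecting.
accept = fehr_schmidt_utility(own=2.0, other=8.0)
reject = fehr_schmidt_utility(own=0.0, other=0.0)
print("accept" if accept >= reject else "reject", accept, reject)
```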
... Recent evidence of activation relating to distinct regions in the human brain depending on context, (De Martino et al., 2006; Tzieropoulos et al., 2011) is based on inter-subject comparisons. While these findings are consistent with dual system theory (Bossaerts et al., 2008; Kahneman and Frederick, 2007), they leave open the possibility that different subjects engage unique albeit group-specific decision making modules. Different patterns of activations across contexts (Hsu et al., 2005; McClure et al., 2004) likewise have been deemed insufficient evidence in favor of dual system theory because they could merely reflect divergence in measurement of relevant components , and not fundamental differences in the way these components are integrated to determine choice (Kable and Glimcher, 2007; Levy et al., 2010). ...
... In the framework of expected utility theory, people's willingness to take risk depends on the concavity of the utility function. In prospect theory, it additionally depends on the shape of the probability weighting function (for review, see Bossaerts et al., 2009; Fox and Poldrack, 2009). Recent approaches, however, highlighted the role of emotions in decision making. ...
Article
Full-text available
In our everyday life, we often have to make decisions with risky consequences, such as choosing a restaurant for dinner or choosing a form of retirement saving. To date, however, little is known about how the brain processes risk. Recent conceptualizations of risky decision making highlight that it is generally associated with emotions but do not specify how emotions are implicated in risk processing. Moreover, little is known about risk processing in non-choice situations and how potential losses influence risk processing. Here we used quantitative meta-analyses of functional magnetic resonance imaging experiments on risk processing in the brain to investigate (1) how risk processing is influenced by emotions, (2) how it differs between choice and non-choice situations, and (3) how it changes when losses are possible. By showing that, over a range of experiments and paradigms, risk is consistently represented in the anterior insula, a brain region known to process aversive emotions such as anxiety, disappointment, or regret, we provide evidence that risk processing is influenced by emotions. Furthermore, our results show risk-related activity in the dorsolateral prefrontal cortex and the parietal cortex in choice situations but not in situations in which no choice is involved or a choice has already been made. The anterior insula was predominantly active in the presence of potential losses, indicating that potential losses modulate risk processing.
... especially previous studies on decision making under risk [29,30]. As observed in Benjamin et al. [31], introducing a molecular genetics approach would potentially enhance the predictive power of economic theory. ...
Article
Full-text available
Decision making often entails longshot risks involving a small chance of receiving a substantial outcome. People tend to be risk preferring (averse) when facing longshot risks involving significant gains (losses). This differentiation towards longshot risks underpins the markets for lottery as well as for insurance. Both lottery and insurance have emerged since ancient times and continue to play a useful role in the modern economy. In this study, we observe subjects' incentivized choices in a controlled laboratory setting, and investigate their association with a widely studied, promoter-region repeat functional polymorphism in monoamine oxidase A gene (MAOA). We find that subjects with the high activity (4-repeat) allele are characterized by a preference for the longshot lottery and also less insurance purchasing than subjects with the low activity (3-repeat) allele. This is the first result to link attitude towards longshot risks to a specific gene. It complements recent findings on the neurobiological basis of economic risk taking.
... Rather, the question is whether different systems encode different and inconsistent values for the same actions, such that these different valuations would lead to diverging conclusions about the best action to take. Many proposals along these lines have been made (Balleine et al., 2009; Balleine and Dickinson, 1998; Bossaerts et al., 2009; Daw et al., 2005; Dayan and Balleine, 2002; Rangel et al., 2008). One set builds upon a distinction made in the psychological literature between: Pavlovian systems, which learn a relationship between stimuli and outcomes and activate simple approach and withdrawal responses; habitual systems, which learn a relationship between stimuli and responses and therefore do not adjust quickly to changes in contingency or devaluation of rewards; and goal-directed systems, which learn a relationship between responses and outcomes and therefore do adjust quickly to changes in contingency or devaluation of rewards. ...
Article
We review and synthesize recent neurophysiological studies of decision making in humans and nonhuman primates. From these studies, the basic outline of the neurobiological mechanism for primate choice is beginning to emerge. The identified mechanism is now known to include a multicomponent valuation stage, implemented in ventromedial prefrontal cortex and associated parts of striatum, and a choice stage, implemented in lateral prefrontal and parietal areas. Neurobiological studies of decision making are beginning to enhance our understanding of economic and social behavior as well as our understanding of significant health disorders where people's behavior plays a key role.
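The two-stage mechanism outlined in this review, a valuation stage that assigns values to options and a choice stage that selects among them, can be caricatured as value assignment followed by a stochastic choice rule such as a softmax. The option values and temperature below are invented; this is a schematic, not a claim about the reviewed circuits.

```python
import numpy as np

def softmax_choice_probs(values, temperature=1.0):
    """Choice stage: map a vector of option values onto choice probabilities."""
    v = np.asarray(values, dtype=float) / temperature
    v -= v.max()               # subtract the max for numerical stability
    p = np.exp(v)
    return p / p.sum()

# Valuation stage (placeholder numbers): each option receives a scalar value.
option_values = [2.0, 1.5, 0.2]
print(softmax_choice_probs(option_values, temperature=0.5))
```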
Conference Paper
Artificial Intelligence (AI) systems frequently exhibit systematic blind spots, often referred to as hallucinations in Large Language Models (LLMs), posing risks in high-stakes applications such as autonomous systems, security, and military operations. This paper explores how human intuitive responses, along with conscious reasoning processes, can be integrated to mitigate AI blind spots and enhance decision-making effectiveness within Human-AI teams. We introduce Human-Guided Artificial Intelligence (HGAI) as a framework for achieving this goal. Specifically, we examine the role of System-1 intuitive processing, as captured through physiological signals such as electroencephalography (EEG) and electrocardiography (ECG), System-2 reasoning-based decision-making, and Multi-Modal Fusion (MMF) mechanisms while assessing the knowledge required to develop more reliable, context-aware, and ethically aligned intelligent decision-making systems for highly complex environments.
Article
Full-text available
I present a theory of creative and destructive value state referring to abstract art. Value is a probabilistic state held as a mixture of its expectation and information forces that coexist in a give-and-take relationship. Expectations are driven by the disclosure of novel information about the value state of various events of desire. Each bit of accumulated information contributes to the improvement of perception up to a threshold level, beyond which begin conscious states. The desire to disclose a value state triggers a triadic system of evaluation which uses concepts, observables and approaches. While the triadic valuation mechanisms can be used to assess various commodities, the scope of this work is limited to the case of artworks, in particular abstract paintings. I assume that art value is basically mediated by the interplay between these value state mechanisms of creation and destruction. Expectations in artwork develop attraction by challenging its contemplator to evaluate (predict) its meaning. Once the relevant information, corresponding to its creative expectations, is acquired (and conditioned), emotional states of indifference, disinterest and desensitization develop.
Article
Full-text available
By the late 1990s, several converging trends in economics, psychology, and neuroscience had set the stage for the birth of a new scientific field known as "neuroeconomics". Without the availability of an extensive variety of experimental designs for dealing with individual and social decision-making provided by experimental economics and psychology, many neuroeconomics studies could not have been developed. At the same time, without the significant progress made in neuroscience toward grasping and understanding brain functioning, neuroeconomics would never have seen the light of day. The paper is an overview of the most significant advances in the knowledge of brain functioning made by neuroscience that have contributed to the emergence of neuroeconomics and its rise over the past two decades. These advances are grouped under three non-independent topics referred to as the "emo-rational" brain, the "social" brain, and the "computational" brain. For each topic, it emphasizes findings considered critical to the birth and development of neuroeconomics while highlighting some of the prominent questions about which knowledge should be improved by future research. In parallel, it shows that the boundaries between neuroeconomics and several recent sub-fields of cognitive neuroscience, such as affective, social, and, more generally, decision neuroscience, are rather porous.
Article
Full-text available
Perception is often categorical: the perceptual system selects one interpretation of a stimulus even when evidence in favor of other interpretations is appreciable. Such categorization is potentially in conflict with normative decision theory, which mandates that the utility of various courses of action should depend on the probabilities of all possible states of the world, not just that of the one perceived. If these probabilities are lost as a result of categorization, choice will be suboptimal. Here we test for such irrationality in a task that requires human observers to combine perceptual evidence with the uncertain consequences of action. Observers made rapid pointing movements to targets on a touch screen, with rewards determined by perceptual and motor uncertainty. Across both visual and auditory decision tasks, observers consistently placed too much weight on perceptual uncertainty relative to action uncertainty. We show that this suboptimality can be explained as a consequence of categorical perception. Our findings indicate that normative decision making may be fundamentally constrained by the architecture of the perceptual system.
Article
Full-text available
The neurotransmitter dopamine is central to the emerging discipline of neuroeconomics; it is hypothesized to encode the difference between expected and realized rewards and thereby to mediate belief formation and choice. We develop the first formal tests of this theory of dopaminergic function, based on a recent axiomatization by Caplin and Dean (Quarterly Journal of Economics, 123 (2008), 663–702). These tests are satisfied by neural activity in the nucleus accumbens, an area rich in dopamine receptors. We find evidence for separate positive and negative reward prediction error signals, suggesting that behavioral asymmetries in responses to losses and gains may parallel asymmetries in nucleus accumbens activity.
Article
As obesity rates increase worldwide, healthcare providers require methods to instill the lifestyle behaviours necessary for sustainable weight loss. Designing effective weight-loss interventions requires an understanding of how these behaviours are elicited, how they relate to each other and whether they are supported by common neurocognitive mechanisms. This may provide valuable insights to optimize existing interventions and develop novel approaches to weight control. Researchers have begun to investigate the neurocognitive underpinnings of eating behaviour and the impact of physical activity on cognition and the brain. This review attempts to bring these somewhat disparate, yet interrelated lines of literature together in order to examine a hypothesis that eating behaviour and physical activity share a common neurocognitive link. The link pertains to executive functions, which rely on brain circuits located in the prefrontal cortex. These advanced cognitive processes are of limited capacity and undergo relentless strain in the current obesogenic environment. The increased demand on these neurocognitive resources as well as their overuse and/or impairment may facilitate impulses to over-eat, contributing to weight gain and obesity. This impulsive eating drive may be counteracted by physical activity due to its enhancement of neurocognitive resources for executive functions and goal-oriented behaviour. By enhancing the resources that facilitate 'top-down' inhibitory control, increased physical activity may help compensate and suppress the hedonic drive to over-eat. Understanding how physical activity and eating behaviours interact on a neurocognitive level may help to maintain a healthy lifestyle in an obesogenic environment.
Article
Many decisions people make can be described as decisions under risk. Understanding the mechanisms that drive these decisions is an important goal in decision neuroscience. Two competing classes of risky decision making models have been proposed to describe human behavior, namely utility-based models and risk-return models. Here we used a novel investment decision task that uses streams of (past) returns as stimuli to investigate how consistent the two classes of models are with the neurobiological processes underlying investment decisions (where outcomes usually follow continuous distributions). By showing (a) that risk-return models can explain choices behaviorally and (b) that the components of risk-return models (value, risk, and risk attitude) are represented in the brain during choices, we provide evidence that risk-return models describe the neural processes underlying investment decisions well. Most importantly, the observed correlation between risk and brain activity in the anterior insula during choices supports risk-return models more than utility-based models because risk is an explicit component of risk-return models but not of the utility-based models.
Article
Full-text available
The objective of this paper is to show how ambiguity, and a decision maker (DM)'s response to it, can be modelled formally in the context of a general decision model. We introduce a relation derived from the DM's preferences, called "unambiguous preference", and show that it can be represented by a set of probabilities. We provide such a set with a simple differential characterization, and argue that it is a behavioral representation of the "ambiguity" that the DM may perceive. Given such revealed ambiguity, we provide a representation of ambiguity attitudes. We also characterize axiomatically a special case of our decision model, the "α-maxmin" expected utility model.
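The α-maxmin special case mentioned at the end of the abstract evaluates an act by mixing the worst-case and best-case expected utilities over the set of priors. A small numerical sketch, with an invented set of priors and payoffs, is below.

```python
import numpy as np

def alpha_maxmin_value(payoffs, priors, alpha=0.6):
    """alpha-maxmin expected utility: alpha * worst-case EU + (1 - alpha) * best-case EU,
    taken over a set of candidate priors (each a probability vector over states)."""
    eus = [float(np.dot(p, payoffs)) for p in priors]
    return alpha * min(eus) + (1.0 - alpha) * max(eus)

# Hypothetical ambiguous bet on two states with three candidate priors.
payoffs = np.array([10.0, 0.0])
priors = [np.array([0.3, 0.7]), np.array([0.5, 0.5]), np.array([0.7, 0.3])]
print(alpha_maxmin_value(payoffs, priors, alpha=0.6))  # 0.6*3 + 0.4*7 = 4.6
```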
Conference Paper
Full-text available
Inference and adaptation in noisy and changing, rich sensory environments are rife with a variety of specific sorts of variability. Experimental and theoretical studies suggest that these different forms of variability play different behavioral, neural and computational roles, and may be reported by different (notably neuromodulatory) systems. Here, we refine our previous theory of acetylcholine's role in cortical inference in the (oxymoronic) terms of expected uncertainty, and advocate a theory for norepinephrine in terms of unexpected uncertainty. We suggest that norepinephrine reports the radical divergence of bottom-up inputs from prevailing top-down interpretations, to influence inference and plasticity. We illustrate this proposal using an adaptive factor analysis model.
Article
Full-text available
Paradoxes are useful in science because they hint at errors or inconsistencies in our models of the world. In this thesis, I study two well-known and long-standing paradoxes in decision theory from the point of view of neuroeconomics. This approach combines tools and ideas from economics and neuroscience, and tries to understand the neural mechanisms and the causal structures behind these paradoxes. Since its introduction in Ellsberg (1961), the Ellsberg Paradox has been one of the most studied violations of subjective expected utility theory (SEU). In one version of the paradox, a decision-maker is confronted with two urns, each containing 100 balls that are either red or blue. In the first (risky) urn, she is told there are 50 red and 50 blue, whereas no further information is given about the second (ambiguous) urn. A commonly observed choice pattern is for decision makers to choose to bet on both red and blue in the first urn. Clearly, if probabilities are additive, such rankings are inconsistent with SEU. First, I present brain imaging that shows that the brain treats risky and ambiguous choices differently. This is done through the use of functional magnetic resonance imaging (fMRI), a method that measures brain activity indirectly through blood flow. I find evidence that brain regions respond differently to ambiguity and risk. Furthermore, the region that is correlated with the expected monetary value of choices is activated more under risk than under ambiguity, confirming that the expected utility of ambiguous gambles is lower than that of equivalent risky gambles. Finally, the temporal relationship between the regions suggests a network in which one brain region (the amygdala) signals the level of uncertainty, this signal is relayed through another region (the orbitofrontal cortex), and it increases (or decreases) the expected utility of the choices, represented in the activity of a third region (the striatum). Brain imaging results, however, are limited by their correlational nature. To assess necessity, if a particular brain region causes a certain behavior, taking it out should remove that behavior. Conversely, to assess sufficiency, stimulating the brain region should create that behavior. In the former, I study patients who have damage to the orbitofrontal cortex (the same region found in the brain scans). I find that these patients were both ambiguity- and risk-neutral. This contrasts with the ambiguity- and risk-averse behavior of patients with damage to other parts of the brain not implicated in the brain scans, which is similar to that of normal individuals. This confirms the idea that specific brain regions are necessary for distinguishing between risk and ambiguity. In the latter, I activate the amygdala of (normal) subjects through mild electrical stimulation (a method known to elicit activation of the region). This allows us to test whether this method increases the ambiguity/risk aversion of subjects. The third chapter studies the Allais Paradox and the probability weighting function. The fact that people do not appear to weight probabilities linearly as dictated by subjective expected utility theory has been known since the 1950s. More specifically, people have been found to overweight small probabilities, and underweight large probabilities. This chapter has two goals. First, I attempt to find the neural correlate of the probability weighting function: that is, is the probability weighting function as discussed in the decision theory literature found in the brain?
Second, I posit a hypothesis for the generation of the probability weighting function with data from psychophysics and neuroscience. Together they shed light on how the brain encodes probabilities as a physical quantity as well as how it might combine decision weights and rewards to calculate expected utility.
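The probability weighting function studied in this chapter of the thesis, overweighting of small probabilities and underweighting of large ones, is often parameterized in the prospect-theory literature with the one-parameter Tversky-Kahneman (1992) form. The snippet below simply evaluates that standard form at a few probabilities; the parameter value is a commonly cited estimate, not one taken from the thesis.

```python
def tversky_kahneman_weight(p, gamma=0.61):
    """Tversky-Kahneman (1992) probability weighting function."""
    return p**gamma / (p**gamma + (1.0 - p)**gamma) ** (1.0 / gamma)

for p in (0.01, 0.1, 0.5, 0.9, 0.99):
    print(p, round(tversky_kahneman_weight(p), 3))
# Small probabilities receive weights above p; large probabilities receive weights below p.
```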
Article
Full-text available
We develop a theoretical framework that shows how mesencephalic dopamine systems could distribute to their targets a signal that represents information about future expectations. In particular, we show how activity in the cerebral cortex can make predictions about future receipt of reward and how fluctuations in the activity levels of neurons in diffuse dopamine systems above and below baseline levels would represent errors in these predictions that are delivered to cortical and subcortical targets. We present a model for how such errors could be constructed in a real brain that is consistent with physiological results for a subset of dopaminergic neurons located in the ventral tegmental area and surrounding dopaminergic neurons. The theory also makes testable predictions about human choice behavior on a simple decision-making task. Furthermore, we show that, through a simple influence on synaptic plasticity, fluctuations in dopamine release can act to change the predictions in an appropriate manner.
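The prediction-error signal this framework attributes to dopamine is built on a simple delta rule: the error is the difference between received and predicted reward, and the prediction is nudged by a fraction of that error. The sketch below uses an arbitrary reward schedule and learning rate purely to show the shape of the signal (large positive errors early, near-zero errors after learning, negative errors when reward is omitted).

```python
# Minimal delta-rule sketch of the reward-prediction-error idea (arbitrary values).
value_of_cue = 0.0
learning_rate = 0.1

rewards = [1.0] * 30 + [0.0] * 10        # cue paired with reward, then reward omitted
for r in rewards:
    prediction_error = r - value_of_cue  # positive when reward exceeds the prediction
    value_of_cue += learning_rate * prediction_error
print(round(value_of_cue, 3))
```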
Article
Full-text available
The somatic marker hypothesis proposes that decision-making is a process that depends on emotion. Studies have shown that damage of the ventromedial prefrontal (VMF) cortex precludes the ability to use somatic (emotional) signals that are necessary for guiding decisions in the advantageous direction. However, given the role of the amygdala in emotional processing, we asked whether amygdala damage also would interfere with decision-making. Furthermore, we asked whether there might be a difference between the roles that the amygdala and VMF cortex play in decision-making. To address these two questions, we studied a group of patients with bilateral amygdala, but not VMF, damage and a group of patients with bilateral VMF, but not amygdala, damage. We used the "gambling task" to measure decision-making performance and electrodermal activity (skin conductance responses, SCR) as an index of somatic state activation. All patients, those with amygdala damage as well as those with VMF damage, were (1) impaired on the gambling task and (2) unable to develop anticipatory SCRs while they pondered risky choices. However, VMF patients were able to generate SCRs when they received a reward or a punishment (play money), whereas amygdala patients failed to do so. In a Pavlovian conditioning experiment the VMF patients acquired a conditioned SCR to visual stimuli paired with an aversive loud sound, whereas amygdala patients failed to do so. The results suggest that amygdala damage is associated with impairment in decision-making and that the roles played by the amygdala and VMF in decision-making are different.
Article
Full-text available
Uncertainty is critical in the measure of information and in assessing the accuracy of predictions. It is determined by probability P, being maximal at P = 0.5 and decreasing at higher and lower probabilities. Using distinct stimuli to indicate the probability of reward, we found that the phasic activation of dopamine neurons varied monotonically across the full range of probabilities, supporting past claims that this response codes the discrepancy between predicted and actual reward. In contrast, a previously unobserved response covaried with uncertainty and consisted of a gradual increase in activity until the potential time of reward. The coding of uncertainty suggests a possible role for dopamine signals in attention-based learning and risk-taking behavior.
Article
Full-text available
Habits are controlled by antecedent stimuli rather than by goal expectancy. Interval schedules of feedback have been shown to generate habits, as revealed by the insensitivity of behaviour acquired under this schedule to outcome devaluation treatments. Two experiments were conducted to assess the role of the dorsolateral striatum in habit learning. In Experiment 1, sham operated controls and rats with dorsolateral striatum lesions were trained to press a lever for sucrose under interval schedules. After training, the sucrose was devalued by inducing taste aversion to it using lithium chloride, whereas saline injections were given to the controls. Only rats given the devaluation treatment reduced their consumption of sucrose and this reduction was similar in both the sham and the lesioned groups. All rats were then returned to the instrumental chamber for an extinction test, in which the lever was extended but no sucrose was delivered. In contrast to sham operated controls, rats with dorsolateral striatum lesions refrained from pressing the lever if the outcome was devalued. To assess the specificity of the role of dorsolateral striatum in this effect a second experiment was conducted in which a group with lesions of dorsomedial striatum was added. In relation now to both the sham and the dorsomedial lesioned groups, only rats with lesions of dorsolateral striatum significantly reduced responding after outcome devaluation. In conclusion, this study provides direct evidence that the dorsolateral striatum is necessary for habit formation. Furthermore, it suggests that, when the habit system is disrupted, control over instrumental performance reverts to the system controlling the performance of goal-directed instrumental actions.
Article
Full-text available
When people have access to information sources such as newspaper weather forecasts, drug-package inserts, and mutual-fund brochures, all of which provide convenient descriptions of risky prospects, they can make decisions from description. When people must decide whether to back up their computer's hard drive, cross a busy street, or go out on a date, however, they typically do not have any summary description of the possible outcomes or their likelihoods. For such decisions, people can call only on their own encounters with such prospects, making decisions from experience. Decisions from experience and decisions from description can lead to dramatically different choice behavior. In the case of decisions from description, people make choices as if they overweight the probability of rare events, as described by prospect theory. We found that in the case of decisions from experience, in contrast, people make choices as if they underweight the probability of rare events, and we explored the impact of two possible causes of this underweighting--reliance on relatively small samples of information and overweighting of recently sampled information. We conclude with a call for two different theories of risky choice.
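The small-sample mechanism proposed above can be illustrated with a quick simulation: when a prospect contains a rare event and people draw only a handful of outcomes from it, a large share of those samples contain no occurrence of the rare event at all, so the event is effectively underweighted. The probabilities and sample size below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
p_rare, n_draws, n_people = 0.1, 7, 100_000  # arbitrary: each person samples 7 outcomes

# For each simulated person, count how often the rare event never appears in their sample.
samples = rng.random((n_people, n_draws)) < p_rare
never_seen = (samples.sum(axis=1) == 0).mean()
print(f"fraction of small samples with no rare event: {never_seen:.2f}")  # about 0.48
```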
Article
Full-text available
It is important for animals to estimate the value of rewards as accurately as possible. Because the number of potential reward values is very large, it is necessary that the brain's limited resources be allocated so as to discriminate better among more likely reward outcomes at the expense of less likely outcomes. We found that midbrain dopamine neurons rapidly adapted to the information provided by reward-predicting stimuli. Responses shifted relative to the expected reward value, and the gain adjusted to the variance of reward value. In this way, dopamine neurons maintained their reward sensitivity over a large range of reward values.
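The adaptation this abstract describes, responses shifting with the expected reward and gain scaling with reward variance, behaves roughly like standardizing outcomes against the predicted reward distribution. The sketch below is only a caricature of that idea with invented numbers.

```python
def adaptive_response(reward, predicted_mean, predicted_sd):
    """Caricature of adaptive coding: shift by the expected reward and
    scale (gain-adjust) by the predicted spread of rewards."""
    return (reward - predicted_mean) / max(predicted_sd, 1e-9)

# The same absolute reward produces a large response when the predicted spread is
# narrow and a small response when it is wide.
print(adaptive_response(0.5, predicted_mean=0.25, predicted_sd=0.25))  # 1.0
print(adaptive_response(0.5, predicted_mean=0.25, predicted_sd=2.0))   # 0.125
```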
Article
Full-text available
Many decisions are made under uncertainty; that is, with limited information about their potential consequences. Previous neuroimaging studies of decision making have implicated regions of the medial frontal lobe in processes related to the resolution of uncertainty. However, a different set of regions in dorsal prefrontal and posterior parietal cortices has been reported to be critical for selection of actions to unexpected or unpredicted stimuli within a sequence. In the current study, we induced uncertainty using a novel task that required subjects to base their decisions on a binary sequence of eight stimuli so that uncertainty changed dynamically over time (from 20 to 50%), depending on which stimuli were presented. Activation within prefrontal, parietal, and insular cortices increased with increasing uncertainty. In contrast, within medial frontal regions, as well as motor and visual cortices, activation did not increase with increasing uncertainty. We conclude that the brain response to uncertainty depends on the demands of the experimental task. When uncertainty depends on learned associations between stimuli and responses, as in previous studies, it modulates activation in the medial frontal lobes. However, when uncertainty develops over short time scales as information is accumulated toward a decision, dorsal prefrontal and posterior parietal contributions are critical for its resolution. The distinction between neural mechanisms subserving different forms of uncertainty resolution provides an important constraint for neuroeconomic models of decision making.
Article
Full-text available
Much is known about how people make decisions under varying levels of probability (risk). Less is known about the neural basis of decision-making when probabilities are uncertain because of missing information (ambiguity). In decision theory, ambiguity about probabilities should not affect choices. Using functional brain imaging, we show that the level of ambiguity in choices correlates positively with activation in the amygdala and orbitofrontal cortex, and negatively with a striatal system. Moreover, striatal activity correlates positively with expected reward. Neurological subjects with orbitofrontal lesions were insensitive to the level of ambiguity and risk in behavioral choices. These data suggest a general neural circuit responding to degrees of uncertainty, contrary to decision theory.
Article
Full-text available
Human choices are remarkably susceptible to the manner in which options are presented. This so-called “framing effect” represents a striking violation of standard economic accounts of human rationality, although its underlying neurobiology is not understood. We found that the framing effect was specifically associated with amygdala activity, suggesting a key role for an emotional system in mediating decision biases. Moreover, across individuals, orbital and medial prefrontal cortex activity predicted a reduced susceptibility to the framing effect. This finding highlights the importance of incorporating emotional processes within models of human choice and suggests how the brain may modulate the effect of these biasing influences to approximate rationality.
Article
Full-text available
When deciding between different options, individuals are guided by the expected (mean) value of the different outcomes and by the associated degrees of uncertainty. We used functional magnetic resonance imaging to identify brain activations coding the key decision parameters of expected value (magnitude and probability) separately from uncertainty (statistical variance) of monetary rewards. Participants discriminated behaviorally between stimuli associated with different expected values and uncertainty. Stimuli associated with higher expected values elicited monotonically increasing activations in distinct regions of the striatum, irrespective of different combinations of magnitude and probability. Stimuli associated with higher uncertainty (variance) elicited increasing activations in the lateral orbitofrontal cortex. Uncertainty-related activations covaried with individual risk aversion in lateral orbitofrontal regions and risk-seeking in more medial areas. Furthermore, activations in expected value-coding regions in prefrontal cortex covaried differentially with uncertainty depending on risk attitudes of individual participants, suggesting that separate prefrontal regions are involved in risk aversion and seeking. These data demonstrate the distinct coding in key reward structures of the two basic and crucial decision parameters, expected value, and uncertainty.
Article
Full-text available
While mainstream economic models assume that individuals treat probabilities objectively, many people tend to overestimate the likelihood of improbable events and underestimate the likelihood of probable events. However, a biological account for why probabilities would be treated this way does not yet exist. While undergoing fMRI, we presented individuals with a series of lotteries, defined by the voltage of an impending cutaneous electric shock and the probability with which the shock would be received. During the prospect phase, neural activity that tracked the probability of the expected outcome was observed in a circumscribed network of brain regions that included the anterior cingulate, visual, parietal, and temporal cortices. Most of these regions displayed responses to probabilities consistent with nonlinear probability weighting. The neural responses to passive lotteries predicted 79% of subsequent decisions when individuals were offered choices between different lotteries, and exceeded that predicted by behavior alone near the indifference point.
Article
Full-text available
Understanding how organisms deal with probabilistic stimulus-reward associations has been advanced by a convergence between reinforcement learning models and primate physiology, which demonstrated that the brain encodes a reward prediction error signal. However, organisms must also predict the level of risk associated with reward forecasts, monitor the errors in those risk predictions, and update these in light of new information. Risk prediction serves a dual purpose: (1) to guide choice in risk-sensitive organisms and (2) to modulate learning of uncertain rewards. To date, it is not known whether or how the brain accomplishes risk prediction. Using functional imaging during a simple gambling task in which we constantly changed risk, we show that an early-onset activation in the human insula correlates significantly with risk prediction error and that its time course is consistent with a role in rapid updating. Additionally, we show that activation previously associated with general uncertainty emerges with a delay consistent with a role in risk prediction. The activations correlating with risk prediction and risk prediction errors are the analogy for risk of activations correlating with reward prediction and reward prediction errors for reward expectation. As such, our findings indicate that our understanding of the neural basis of reward anticipation under uncertainty needs to be expanded to include risk prediction.
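The two signals distinguished in this abstract can be written compactly: the reward prediction error is the difference between the obtained and predicted reward, and the risk prediction error is the difference between that error squared and the predicted risk (the expected squared prediction error). The numbers and learning rate below are arbitrary, and this is not the study's estimation procedure.

```python
# Illustrative reward- and risk-prediction-error updates (arbitrary starting values).
state = {"expected_reward": 0.5, "predicted_risk": 0.25}

def observe(reward, state, lr=0.1):
    rpe = reward - state["expected_reward"]     # reward prediction error
    risk_pe = rpe**2 - state["predicted_risk"]  # risk prediction error
    state["expected_reward"] += lr * rpe        # update the reward prediction
    state["predicted_risk"] += lr * risk_pe     # update the risk prediction
    return rpe, risk_pe

print(observe(1.0, state))  # better-than-expected reward: positive RPE; its square updates risk
```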
Article
We present a model of portfolio choice and stock trading volume with loss-averse investors. The demand function for risky assets is discontinuous and nonmonotonic: As wealth rises beyond a threshold, investors follow a generalized portfolio insurance strategy, which is consistent with the disposition effect. In addition, loss-averse investors hold no stocks unless the equity premium is quite high. The elasticity of the aggregate demand curve changes substantially, depending on the distribution of wealth across investors. In an equilibrium setting, the model generates positive correlation between trading volume and stock return volatility but suggests that this relationship is nonlinear.
Article
In subjective expected utility (SEU), the decision weights people attach to events are their beliefs about the likelihood of events. Much empirical evidence, inspired by Ellsberg (1961) and others, shows that people prefer to bet on events they know more about, even when their beliefs are held constant. (They are averse to ambiguity, or uncertainty about probability.) We review evidence, recent theoretical explanations, and applications of research on ambiguity and SEU.
Article
The article, written in 1973, examines what comparisons of income distributions can be made when Lorenz curves cross, employing the concept of third-order stochastic dominance.
Article
The conditional mean and variance of return on the market portfolio play a central role in Merton's (1973) intertemporal capital asset pricing model (ICAPM). Although theoretical models suggest a positive relation between risk and return for the aggregate stock market, the existing empirical literature fails to agree on the intertemporal relation between expected return and volatility. There is a long literature that has tried to identify the existence of such a tradeoff between risk and return, but the results are far from conclusive. This paper examines the intertemporal relation between downside risk and expected stock returns. Value at risk (VaR) is used as a measure of downside risk to determine the existence and significance of a risk-return tradeoff for several stock market indices. We find a positive and significant relation between VaR and the value-weighted and equal-weighted portfolio returns on the NYSE/AMEX/Nasdaq stocks. This result also holds for the NYSE/AMEX, NYSE, Nasdaq, and S&P 500 index portfolios. As an alternative measure of downside risk, we also consider expected shortfall and tail risk, which measure the mean and variance of losses beyond some value at risk level. We show that the strong positive relation between downside risk and excess market return is robust across different left-tail risk measures. Moreover, VaR remains a superior measure of risk even when it is compared to the traditional risk measures that have significant predictive power for market returns. These results are robust across different loss probability levels, estimation techniques, and after controlling for macroeconomic variables associated with business cycle fluctuations.
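The downside-risk measures compared in this abstract can be computed directly from a sample of returns: value at risk is a lower quantile of the return distribution, and expected shortfall is the mean loss beyond that quantile. The code below is a generic illustration on simulated returns, not the paper's estimation procedure.

```python
import numpy as np

rng = np.random.default_rng(2)
returns = rng.normal(0.0005, 0.01, size=10_000)  # simulated daily returns (placeholder)

loss_prob = 0.05
cutoff = np.quantile(returns, loss_prob)
var_5 = -cutoff                                  # 5% value at risk, stated as a positive loss
es_5 = -returns[returns <= cutoff].mean()        # 5% expected shortfall (mean loss in the tail)

print(f"5% VaR: {var_5:.4f}, 5% expected shortfall: {es_5:.4f}")
```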
Article
This paper proposes a choice-theoretic framework for evaluating economic welfare with the following features. (1) In principle, it is applicable irrespective of the positive model used to describe behavior. (2) It subsumes standard welfare economics both as a special case (when standard choice axioms are satisfied) and as a limiting case (when behavioral anomalies are small). (3) Like standard welfare economics, it requires only data on choices. (4) It is easily applied in the context of specific behavioral theories, such as the β-δ model of time inconsistency, for which it has novel normative implications. (5) It generates natural counterparts for the standard tools of applied welfare analysis, including compensating and equivalent variation, consumer surplus, Pareto optimality, and the contract curve, and permits a broad generalization of the first welfare theorem. (6) Though not universally discerning, it lends itself to principled refinements.
Article
I. Are there uncertainties that are not risks? 643. — II. Uncertainties that are not risks, 647. — III. Why are some uncertainties not risks? — 656.
Article
Modern economic theory ignores the influence of emotions on decision-making. Emerging neuroscience evidence suggests that sound and rational decision making, in fact, depends on prior accurate emotional processing. The somatic marker hypothesis provides a systems-level neuroanatomical and cognitive framework for decision-making and its influence by emotion. The key idea of this hypothesis is that decision-making is a process that is influenced by marker signals that arise in bioregulatory processes, including those that express themselves in emotions and feelings. This influence can occur at multiple levels of operation, some of which occur consciously, and some of which occur non-consciously. Here we review studies that confirm various predictions from the hypothesis, and propose a neural model for economic decision, in which emotions are a major factor in the interaction between environmental conditions and human decision processes, with these emotional systems providing valuable implicit or explicit knowledge for making fast and advantageous decisions.
Article
We discuss frequency properties of Bayes rules, paying special attention to consistency. Some new and fairly natural counterexamples are given, involving nonparametric estimates of location. Even the Dirichlet prior can lead to inconsistent estimates if used too aggressively. Finally, we discuss reasons for Bayesians to be interested in frequency properties of Bayes rules. As a part of the discussion we give a subjective equivalent to consistency and compute the derivative of the map taking priors to posteriors.
Article
We present a model of portfolio choice and stock trading volume with loss-averse investors. The demand function for risky assets is discontinuous and nonmonotonic: As wealth rises beyond a threshold, investors follow a generalized portfolio insurance strategy, which is consistent with the disposition effect. In addition, loss-averse investors hold no stocks unless the equity premium is quite high. The elasticity of the aggregate demand curve changes substantially, depending on the distribution of wealth across investors. In an equilibrium setting, the model generates positive correlation between trading volume and stock return volatility but suggests that this relationship is nonlinear.
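To see how demand for the risky asset might be computed numerically under loss aversion, here is a rough Python sketch with a kinked, piecewise-linear value function and a grid search over the stock share; the parameter values are illustrative and not the authors' calibration:

```python
import numpy as np

def value(x, lam=2.25):
    # Piecewise-linear loss-averse value function: losses are weighted by lam > 1.
    return np.where(x >= 0, x, lam * x)

def optimal_share(mean=0.06, vol=0.2, rf=0.02, lam=2.25, n=10000, seed=1):
    """Grid-search the stock share that maximizes expected loss-averse value
    of the portfolio's excess return over the risk-free benchmark."""
    rng = np.random.default_rng(seed)
    risky = rng.normal(mean, vol, n)
    shares = np.linspace(0.0, 1.0, 101)
    evals = [value(w * (risky - rf), lam).mean() for w in shares]
    return shares[int(np.argmax(evals))]

print(optimal_share())
```

Under these illustrative numbers the grid search typically returns a zero share, which echoes the abstract's point that loss-averse investors hold no stocks unless the equity premium is quite high.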
Article
We propose a broad generalization of standard choice-theoretic welfare economics that encompasses a wide variety of nonstandard behavioral models. Our approach exploits the coherent aspects of choice that those positive models typically attempt to capture. It replaces the standard revealed preference relation with an unambiguous choice relation: roughly, x is (strictly) unambiguously chosen over y (written xP*y) iff y is never chosen when x is available. Under weak assumptions, P* is acyclic and therefore suitable for welfare analysis; it is also the most discerning welfare criterion that never overrules choice. The resulting framework generates natural counterparts for the standard tools of applied welfare economics and is easily applied in the context of specific behavioral theories, with novel implications. Though not universally discerning, it lends itself to principled refinements.
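A naive reading of the unambiguous choice relation can be sketched directly from finite choice data: x P* y whenever y is never chosen from an observed menu containing x (and the two appear together at least once). The data set and helper below are hypothetical, intended only to make the definition concrete:

```python
from itertools import permutations

# Hypothetical observed choice data: each entry is (menu, chosen alternatives).
data = [
    ({"x", "y", "z"}, {"x"}),
    ({"x", "y"}, {"x"}),
    ({"y", "z"}, {"y"}),
]

alternatives = set().union(*(menu for menu, _ in data))

def unambiguously_chosen(a, b):
    """Rough reading of the abstract's definition: a P* b iff b is never chosen
    from any observed menu in which a is available (requiring the pair to appear
    together at least once, to avoid vacuous comparisons)."""
    together = [(menu, chosen) for menu, chosen in data if a in menu and b in menu]
    return bool(together) and all(b not in chosen for _, chosen in together)

P_star = {(a, b) for a, b in permutations(alternatives, 2) if unambiguously_chosen(a, b)}
print(P_star)  # in this toy data set P* is acyclic, as the abstract anticipates
```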
Article
Two core meanings of “utility” are distinguished. “Decision utility” is the weight of an outcome in a decision. “Experienced utility” is hedonic quality, as in Bentham's usage. Experienced utility can be reported in real time (instant utility), or in retrospective evaluations of past episodes (remembered utility). Psychological research has documented systematic errors in retrospective evaluations, which can induce a preference for dominated options. We propose a formal normative theory of the total experienced utility of temporally extended outcomes. Measuring the experienced utility of outcomes permits tests of utility maximization and opens other lines of empirical research.
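The normative benchmark the abstract alludes to can be sketched as temporal integration of instant utility; the notation below is ours, not the paper's:

```latex
% Total experienced utility of an episode lasting from time 0 to T,
% written as the temporal integral of instant utility u(t):
U_{\text{total}} = \int_{0}^{T} u(t)\, \mathrm{d}t
% Remembered utility, by contrast, is a retrospective evaluation and need not
% track this integral, which is the source of the documented biases.
```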
Article
Deciding advantageously in a complex situation is thought to require overt reasoning on declarative knowledge, namely, on facts pertaining to premises, options for action, and outcomes of actions that embody the pertinent previous experience. An alternative possibility was investigated: that overt reasoning is preceded by a nonconscious biasing step that uses neural systems other than those that support declarative knowledge. Normal participants and patients with prefrontal damage and decision-making defects performed a gambling task in which behavioral, psychophysiological, and self-account measures were obtained in parallel. Normals began to choose advantageously before they realized which strategy worked best, whereas prefrontal patients continued to choose disadvantageously even after they knew the correct strategy. Moreover, normals began to generate anticipatory skin conductance responses (SCRs) whenever they pondered a choice that turned out to be risky, before they knew explicitly that it was a risky choice, whereas patients never developed anticipatory SCRs, although some eventually realized which choices were risky. The results suggest that, in normal individuals, nonconscious biases guide behavior before conscious knowledge does. Without the help of such biases, overt knowledge may be insufficient to ensure advantageous behavior.
Article
Many behaviors are affected by rewards, undergoing long-term changes when rewards are different than predicted but remaining unchanged when rewards occur exactly as predicted. The discrepancy between reward occurrence and reward prediction is termed an 'error in reward prediction'. Dopamine neurons in the substantia nigra and the ventral tegmental area are believed to be involved in reward-dependent behaviors. Consistent with this role, they are activated by rewards, and because they are activated more strongly by unpredicted than by predicted rewards they may play a role in learning. The present study investigated whether monkey dopamine neurons code an error in reward prediction during the course of learning. Dopamine neuron responses reflected the changes in reward prediction during individual learning episodes; dopamine neurons were activated by rewards during early trials, when errors were frequent and rewards unpredictable, but activation was progressively reduced as performance was consolidated and rewards became more predictable. These neurons were also activated when rewards occurred at unpredicted times and were depressed when rewards were omitted at the predicted times. Thus, dopamine neurons code errors in the prediction of both the occurrence and the time of rewards. In this respect, their responses resemble the teaching signals that have been employed in particularly efficient computational learning models.
Article
We used functional magnetic resonance neuroimaging to measure brain activity during the delay between reward-related decisions and their outcomes, and the modulation of this delay activity by uncertainty and arousal. Feedback, indicating financial gain or loss, was given following a fixed delay. Anticipatory arousal was indexed by galvanic skin conductance. Delay-period activity was associated with bilateral activation in orbital and medial prefrontal, temporal, and right parietal cortices. During the delay, activity in anterior cingulate and orbitofrontal cortices was modulated by outcome uncertainty, whereas activity in anterior cingulate, dorsolateral prefrontal, and parietal cortices was modulated by the degree of anticipatory arousal. A distinct region of anterior cingulate was commonly activated by both uncertainty and arousal. Our findings highlight distinct contributions of cognitive uncertainty and autonomic arousal to anticipatory neural activity in prefrontal cortex.
Article
A subset of caudate neurons fires before cues that instruct the monkey what he should do. To test the hypothesis that the anticipatory activity of such neurons depends on the context of stimulus-reward mapping, we examined their activity while the monkeys performed a memory-guided saccade task in which either the position or the color of a cue indicated presence or absence of reward. Some neurons showed anticipatory activity only when a particular position was associated with reward, while others fired selectively for color-reward associations. The functional segregation suggests that caudate neurons participate in feature-based anticipation of visual information that predicts reward. This neuronal code influences the general activity level in response to visual features without improving the quality of visual discrimination.
Article
Functional MRI experiments in human subjects strongly suggest that the striatum participates in processing information about the predictability of rewarding stimuli. However, stimuli can be unpredictable in character (what stimulus arrives next), unpredictable in time (when the stimulus arrives), and unpredictable in amount (how much arrives). These variables have not been dissociated in previous imaging work in humans, thus conflating possible interpretations of the kinds of expectation errors driving the measured brain responses. Using a passive conditioning task and fMRI in human subjects, we show that positive and negative prediction errors in reward delivery time correlate with BOLD changes in human striatum, with the strongest activation lateralized to the left putamen. For the negative prediction error, the brain response was elicited by expectations only and not by stimuli presented directly; that is, we measured the brain response to nothing delivered (juice expected but not delivered) contrasted with nothing delivered (nothing expected).
Article
Temporal difference learning has been proposed as a model for Pavlovian conditioning, in which an animal learns to predict delivery of reward following presentation of a conditioned stimulus (CS). A key component of this model is a prediction error signal, which, before learning, responds at the time of presentation of reward but, after learning, shifts its response to the time of onset of the CS. In order to test for regions manifesting this signal profile, subjects were scanned using event-related fMRI while undergoing appetitive conditioning with a pleasant taste reward. Regression analyses revealed that responses in ventral striatum and orbitofrontal cortex were significantly correlated with this error signal, suggesting that, during appetitive conditioning, computations described by temporal difference learning are expressed in the human brain.
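A minimal TD(0) sketch (illustrative parameters, not the model fitted in the study) reproduces the signature described here: early in training the prediction error occurs at reward delivery, and after training it appears at CS onset:

```python
import numpy as np

# Minimal TD(0) sketch of Pavlovian conditioning. Within-trial states t = 0..T-1;
# the CS appears at t = 0 and a unit reward arrives at t = T-1.
T, n_trials, alpha, gamma = 10, 500, 0.1, 1.0
V = np.zeros(T)                      # learned value of each within-trial state
reward = np.zeros(T)
reward[-1] = 1.0

for trial in range(n_trials):
    deltas = np.zeros(T)
    for t in range(T):
        v_next = V[t + 1] if t + 1 < T else 0.0
        deltas[t] = reward[t] + gamma * v_next - V[t]   # TD prediction error
        V[t] += alpha * deltas[t]
    if trial in (0, n_trials - 1):
        # Early on, the error is largest at the reward time; after learning it has
        # moved to CS onset (the unpredicted jump from a zero pre-CS baseline to V[0]).
        print(trial, round(deltas[-1], 3), round(V[0], 3))
```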
Article
Decision making and risk taking are interrelated processes that are important for daily functioning. The somatic marker hypothesis has provided a conceptual basis for processes involved in risk-taking decision making and has been used to link discrete neural substrates to risk-related behaviors. This investigation examined the hypothesis that the degree of risk-taking is related to the degree of activation in the insular cortex. Seventeen healthy, right-handed subjects performed a risk-taking decision-making task during functional magnetic resonance imaging (fMRI) using a fast event-related design. This investigation yielded three main findings. First, right insula (BA 13) activation was significantly stronger when subjects selected a "risky" response versus selecting a "safe" response. Second, the degree of insula activation was related to the probability of selecting a "safe" response following a punished response. Third, the degree of insula activation was related to the subjects' degree of harm avoidance and neuroticism as measured by the TCI and NEO personality questionnaires, respectively. These results are consistent with the hypothesis that insula activation serves as a critical neural substrate to instantiate aversive somatic markers that guide risk-taking decision-making behavior.
Article
The somatic marker hypothesis (SMH; [Damasio, A. R., Tranel, D., Damasio, H., 1991. Somatic markers and the guidance of behaviour: theory and preliminary testing. In Levin, H.S., Eisenberg, H.M., Benton, A.L. (Eds.), Frontal Lobe Function and Dysfunction. Oxford University Press, New York, pp. 217-229]) proposes that emotion-based biasing signals arising from the body are integrated in higher brain regions, in particular the ventromedial prefrontal cortex (VMPFC), to regulate decision-making in situations of complexity. Evidence for the SMH is largely based on performance on the Iowa Gambling Task (IGT; [Bechara, A., Tranel, D., Damasio, H., Damasio, A.R., 1996. Failure to respond autonomically to anticipated future outcomes following damage to prefrontal cortex. Cerebral Cortex 6 (2), 215-225]), linking anticipatory skin conductance responses (SCRs) to successful performance on a decision-making paradigm in healthy participants. These 'marker' signals were absent in patients with VMPFC lesions and were associated with poorer IGT performance. The current article reviews the IGT findings, arguing that their interpretation is undermined by the cognitive penetrability of the reward/punishment schedule, ambiguity surrounding interpretation of the psychophysiological data, and a shortage of causal evidence linking peripheral feedback to IGT performance. Further, there are other well-specified and parsimonious explanations that can equally well account for the IGT data. Next, lesion, neuroimaging, and psychopharmacology data evaluating the proposed neural substrate underpinning the SMH are reviewed. Finally, conceptual reservations about the novelty, parsimony and specification of the SMH are raised. It is concluded that while presenting an elegant theory of how emotion influences decision-making, the SMH requires additional empirical support to remain tenable.
Article
People often prefer the known over the unknown, sometimes sacrificing potential rewards for the sake of surety. Overcoming impulsive preferences for certainty in order to exploit uncertain but potentially lucrative options may require specialized neural mechanisms. Here, we demonstrate by functional magnetic resonance imaging (fMRI) that individuals' preferences for risk (uncertainty with known probabilities) and ambiguity (uncertainty with unknown probabilities) predict brain activation associated with decision making. Activation within the lateral prefrontal cortex was predicted by ambiguity preference and was also negatively correlated with an independent clinical measure of behavioral impulsiveness, suggesting that this region implements contextual analysis and inhibits impulsive responses. In contrast, activation of the posterior parietal cortex was predicted by risk preference. Together, this novel double dissociation indicates that decision making under ambiguity does not represent a special, more complex case of risky decision making; instead, these two forms of uncertainty are supported by distinct mechanisms.
Article
The ability to distinguish novel from familiar stimuli allows nervous systems to rapidly encode significant events following even a single exposure to a stimulus. This detection of novelty is necessary for many types of learning. Neurons in the medial temporal lobe (MTL) are critically involved in the acquisition of long-term declarative memories. During a learning task, we recorded from individual MTL neurons in vivo using microwire electrodes implanted in human epilepsy surgery patients. We report here the discovery of two classes of neurons in the hippocampus and amygdala that exhibit single-trial learning: novelty and familiarity detectors, which show a selective increase in firing for new and old stimuli, respectively. The neurons retain memory for the stimulus for 24 hr. Thus, neurons in the MTL contain information sufficient for reliable novelty-familiarity discrimination and also show rapid plasticity as a result of single-trial learning.
Article
In decision-making under uncertainty, economic studies emphasize the importance of risk in addition to expected reward. Studies in neuroscience focus on expected reward and learning rather than risk. We combined functional imaging with a simple gambling task to vary expected reward and risk simultaneously and in an uncorrelated manner. Drawing on financial decision theory, we modeled expected reward as the mathematical expectation of reward, and risk as reward variance. Activations in dopaminoceptive structures correlated with both mathematical parameters. These activations were differentiated spatially and temporally. Temporally, the activation related to expected reward was immediate, while the activation related to risk was delayed. Analyses confirmed that our paradigm minimized confounds from learning, motivation, and salience. These results suggest that the primary task of the dopaminergic system is to convey signals of upcoming stochastic rewards, such as expected reward and risk, beyond its role in learning, motivation, and salience.
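The two quantities the study varies can be written down directly for a discrete gamble; the notation is ours:

```latex
% For a gamble paying r_i with probability p_i, the two regressors are
% expected reward and risk measured as reward variance:
\mathrm{ER} = \mathbb{E}[r] = \sum_i p_i\, r_i,
\qquad
\mathrm{Risk} = \mathrm{Var}(r) = \sum_i p_i \bigl(r_i - \mathbb{E}[r]\bigr)^2 .
% Example: a 50/50 gamble over payoffs 0 and 1 has ER = 0.5 and Risk = 0.25.
```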
Article
This article analyzes the simple Rescorla-Wagner learning rule from the vantage point of least squares learning theory. In particular, it suggests how measures of risk, such as prediction risk, can be used to adjust the learning constant in reinforcement learning. It argues that prediction risk is most effectively incorporated by scaling the prediction errors. This way, the learning rate needs adjusting only when the covariance between optimal predictions and past (scaled) prediction errors changes. Evidence is discussed that suggests that the dopaminergic system in the (human and nonhuman) primate brain encodes prediction risk, and that prediction errors are indeed scaled with prediction risk (adaptive encoding).
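A rough Python sketch of the scaling idea, not the paper's exact formulation: divide the prediction error by a running estimate of prediction risk before it updates the value, and update the risk estimate from the squared raw errors:

```python
import numpy as np

def rw_risk_scaled(rewards, alpha=0.1, beta=0.1, eps=1e-6):
    """Rescorla-Wagner-style value learning in which the prediction error is scaled
    by a running estimate of prediction risk (its standard deviation); a sketch of
    the adaptive-encoding idea, with parameter names and values ours."""
    v, risk = 0.0, 1.0                          # value estimate; running error variance
    values = []
    for r in rewards:
        delta = r - v                           # raw prediction error
        scaled = delta / (np.sqrt(risk) + eps)  # risk-scaled (adaptively encoded) error
        v += alpha * scaled
        risk += beta * (delta ** 2 - risk)      # update the prediction-risk estimate
        values.append(v)
    return np.array(values)

rng = np.random.default_rng(0)
print(rw_risk_scaled(rng.normal(1.0, 0.5, 50))[-5:])
```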
Article
Our decisions are guided by outcomes that are associated with decisions made in the past. However, the amount of influence each past outcome has on our next decision remains unclear. To ensure optimal decision-making, the weight given to decision outcomes should reflect their salience in predicting future outcomes, and this salience should be modulated by the volatility of the reward environment. We show that human subjects assess volatility in an optimal manner and adjust decision-making accordingly. This optimal estimate of volatility is reflected in the fMRI signal in the anterior cingulate cortex (ACC) when each trial outcome is observed. When a new piece of information is witnessed, activity levels reflect its salience for predicting future outcomes. Furthermore, variations in this ACC signal across the population predict variations in subject learning rates. Our results provide a formal account of how we weigh our different experiences in guiding our future actions.
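A heuristic stand-in for the mechanism described, not the Bayesian learner estimated in the paper: track a volatility proxy from recent squared prediction errors and let the learning rate grow with it, so that outcomes in unstable environments carry more weight:

```python
import numpy as np

def volatility_weighted_learning(outcomes, kappa=0.1, base=0.05, gain=0.5):
    """Illustrative only: estimate volatility from recent squared prediction errors
    and raise the learning rate when volatility is high (parameter values ours)."""
    p, vol = 0.5, 0.0                  # estimated reward probability; volatility proxy
    estimates = []
    for o in outcomes:
        delta = o - p
        vol += kappa * (delta ** 2 - vol)   # running volatility proxy
        lr = base + gain * vol              # learning rate rises with volatility
        p += lr * delta
        estimates.append(p)
    return np.array(estimates)

# A stable block (reward probability 0.8) followed by a volatile block of reversals.
outs = np.r_[np.random.default_rng(1).binomial(1, 0.8, 60), np.tile([1, 0], 30)]
print(volatility_weighted_learning(outs)[-5:])
```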
Article
How the brain integrates signals from specific areas has been a longstanding critical question for neurobiologists. Two recent observations suggest a new approach to fMRI data analysis of this question. First, in many instances, the brain analyzes inputs by decomposing the information along several salient dimensions. For example, earlier work demonstrated that the brain splits a monetary gamble in terms of expected reward (ER) and variance of the reward (risk) [Preuschoff, K., Bossaerts, P., Quartz, S., 2006. Neural differentiation of expected reward and risk in human subcortical structures. Neuron 51, 381-390]. However, since ER and risk activate separate brain regions, the brain needs to integrate these activations to obtain an overall evaluation of the gamble. Second, recent evidence suggests that the correlation of the activity between neurons may serve a specific organizational purpose [Romo, R., Hernandez, A., Zainos, A., Salinas, E., 2003. Correlated neuronal discharges that increase coding efficiency during perceptual discrimination. Neuron 38, 649-657; Salinas, E., Sejnowski, T.J., 2001. Correlated neuronal activity and the flow of neural information. Nat. Rev. Neurosci. 2, 539]. Specifically, it is hypothesized that correlations allow brain regions to integrate several signals in a way that minimizes noise. Under this hypothesis, we show here that canonical correlation analysis of fMRI data identifies how the signals from several regions are combined. A general linear model then verifies whether the identified combination indeed activates a projection area in the brain. We illustrate the proposed procedure on data recorded while human subjects played a simple card game. We show that the brain adds the signals of ER and risk to form a measure that activates the medial prefrontal cortex, consistent with the role of this brain structure in the evaluation of monetary gambles.
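One way the described two-step pipeline might look in code, with placeholder data and region names rather than the study's actual time courses: canonical correlation analysis extracts the component shared by the ER- and risk-coding regions, and a simple regression then asks whether that combination explains activity in a candidate projection area:

```python
import numpy as np
from sklearn.cross_decomposition import CCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 200                                          # scan volumes (placeholder data)
shared = rng.normal(size=(n, 1))                 # latent combined evaluation signal
er_region = shared + 0.5 * rng.normal(size=(n, 5))    # voxels of an ER-coding region
risk_region = shared + 0.5 * rng.normal(size=(n, 5))  # voxels of a risk-coding region
mpfc = shared.ravel() + 0.5 * rng.normal(size=n)       # candidate integration area

# Step 1: CCA finds the component the two source regions share, i.e. one way in which
# their signals could be combined.
cca = CCA(n_components=1)
er_scores, risk_scores = cca.fit_transform(er_region, risk_region)
combined = (er_scores + risk_scores).ravel()

# Step 2: a GLM-style regression checks whether that combination explains activity in
# the candidate projection area (a stand-in for medial prefrontal cortex).
glm = LinearRegression().fit(combined.reshape(-1, 1), mpfc)
print(glm.score(combined.reshape(-1, 1), mpfc))
```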