Article

Explicit neural signals reflecting reward uncertainty

Department of Physiology, Development and Neuroscience, University of Cambridge, Downing Street, Cambridge CB2 3DY, UK.
Philosophical Transactions of the Royal Society B: Biological Sciences (Impact Factor: 6.31). 11/2008; 363(1511):3801-11. DOI: 10.1098/rstb.2008.0152
Source: PubMed

ABSTRACT: The acknowledged importance of uncertainty in economic decision making has stimulated the search for neural signals that could influence learning and inform decision mechanisms. Current views distinguish two forms of uncertainty, namely risk and ambiguity, depending on whether the probability distributions of outcomes are known or unknown. Behavioural and neurophysiological studies on dopamine neurons revealed a risk signal, which covaried with the standard deviation or variance of the magnitude of juice rewards and occurred separately from reward value coding. Human imaging studies identified similarly distinct risk signals for monetary rewards in the striatum and orbitofrontal cortex (OFC), thus fulfilling a requirement for the mean-variance approach of economic decision theory. The orbitofrontal risk signal covaried with individual risk attitudes, possibly explaining individual differences in risk perception and risky decision making. Ambiguous gambles with incomplete probabilistic information induced stronger brain signals than risky gambles in the OFC and amygdala, suggesting that the brain's reward system signals the partial lack of information. The brain can use these uncertainty signals to assess the uncertainty of rewards, influence learning, modulate the value of uncertain rewards and make appropriate behavioural choices between only partly known options.
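The mean-variance distinction drawn in the abstract can be sketched in a few lines: two gambles with the same expected value but different variance are indistinguishable to a pure value signal, whereas a risk signal separates them. A minimal sketch; the juice magnitudes and probabilities are illustrative, not taken from the recordings:

```python
def mean_and_risk(magnitudes, probs):
    """Expected value and variance of a discrete reward distribution."""
    mean = sum(m * p for m, p in zip(magnitudes, probs))
    variance = sum(p * (m - mean) ** 2 for m, p in zip(magnitudes, probs))
    return mean, variance

# Two hypothetical gambles with identical expected value but different risk:
# a value signal alone cannot tell them apart; a risk signal can.
safe = mean_and_risk([0.5], [1.0])             # certain 0.5 ml of juice
risky = mean_and_risk([0.0, 1.0], [0.5, 0.5])  # 0 or 1 ml, equiprobable

print(safe)   # (0.5, 0.0)
print(risky)  # (0.5, 0.25)
```

This is the sense in which risk coding is "separate from reward value coding": the second moment of the outcome distribution varies while the first moment is held fixed.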

Full-text

Available from: Colin Farrell Camerer, Jun 15, 2015
  • Source
    ABSTRACT: Both normative and many descriptive theories of decision making under risk are based on the notion that outcomes are weighted by their probability, with subsequent maximization of the (subjective) expected outcome. Numerous investigations from psychology, economics, and neuroscience have produced evidence consistent with this notion. However, this research has typically investigated choices involving relatively affect-poor, monetary outcomes. We compared choice in relatively affect-poor, monetary lottery problems with choice in relatively affect-rich medical decision problems. Computational modeling of behavioral data and model-based neuroimaging analyses provide converging evidence for substantial differences in the respective decision mechanisms. Relative to affect-poor choices, affect-rich choices yielded a more strongly curved probability weighting function of cumulative prospect theory, thus signaling that the psychological impact of probabilities is strongly diminished for affect-rich outcomes. Examining task-dependent brain activation, we identified a region-by-condition interaction indicating qualitative differences of activation between affect-rich and affect-poor choices. Moreover, brain activation in regions that were more active during affect-poor choices (e.g., the supramarginal gyrus) correlated with individual trial-by-trial decision weights, indicating that these regions reflect processing of probabilities. Formal reverse inference Neurosynth meta-analyses suggested that whereas affect-poor choices seem to be based on brain mechanisms for calculative processes, affect-rich choices are driven by the representation of outcomes' emotional value and autobiographical memories associated with them. 
These results provide evidence that the traditional notion of expectation maximization may not apply in the context of outcomes laden with affective responses, and that understanding the brain mechanisms of decision making requires the domain of the decision to be taken into account.
    PLoS ONE 04/2015; 10(4):e0122475. DOI:10.1371/journal.pone.0122475 · 3.53 Impact Factor
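The "more strongly curved probability weighting function" can be made concrete with the one-parameter Tversky-Kahneman form w(p) = p^γ / (p^γ + (1−p)^γ)^(1/γ), a standard choice in cumulative prospect theory. The γ values below are purely illustrative, not the parameters fitted in this study:

```python
def weight(p, gamma):
    """One-parameter Tversky-Kahneman probability weighting function."""
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

# Smaller gamma means stronger curvature: small probabilities are
# overweighted and moderate-to-large probabilities underweighted,
# i.e. the psychological impact of probability differences shrinks.
for gamma in (0.9, 0.5):  # illustrative affect-poor vs. affect-rich values
    print(gamma, weight(0.05, gamma), weight(0.5, gamma))
```

At γ = 1 the function reduces to w(p) = p, the linear weighting assumed by expectation maximization; the reported result is that affect-rich choices sit further from that benchmark than affect-poor ones.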
  • Source
ABSTRACT: Findings from previous transcranial magnetic stimulation (TMS) experiments suggest that the primary motor cortex (M1) is sensitive to reward conditions in the environment. However, the nature of this influence on M1 activity is poorly understood. The dopamine neuron response to conditioned stimuli encodes reward probability and outcome uncertainty, or the extent to which the outcome of a situation is known. Reward uncertainty and probability are related: uncertainty is maximal when probability is 0.5 and minimal when probability is 0 or 1 (i.e., certain outcome). Previous TMS-reward studies did not examine these factors independently. Here, we used single-pulse TMS to measure corticospinal excitability in 40 individuals while they performed a simple computer task, making guesses to find or avoid a hidden target. The task stimuli implied three levels of reward probability and two levels of uncertainty. We found that reward probability level interacted with the trial search condition. That is, motor evoked potential (MEP) amplitude, a measure of corticospinal neuron excitability, increased with increasing reward probability when participants were instructed to "find" a target, but not when they were instructed to "avoid" a target. There was no effect of uncertainty on MEPs. Response times varied with the number of choices. A subset of participants also received paired-pulse stimulation to evaluate changes in short-interval intracortical inhibition (SICI). No effects on SICI were observed. Taken together, the results suggest that the reward-contingent modulation of M1 activity reflects reward probability or a related aspect of utility, not outcome uncertainty, and that this effect is sensitive to the conceptual framing of the task.
    Neuropsychologia 12/2014; 68. DOI:10.1016/j.neuropsychologia.2014.12.021 · 3.45 Impact Factor
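The dissociation this study exploits, that probability rises monotonically while uncertainty peaks at p = 0.5, follows directly from either of the two standard uncertainty measures for a binary reward: the Bernoulli variance p(1−p) and the binary entropy. A minimal numerical sketch (the probability levels below are illustrative, not the three used in the task):

```python
import math

def bernoulli_uncertainty(p):
    """Two common uncertainty measures for a binary reward with probability p."""
    variance = p * (1 - p)  # maximal at p = 0.5, zero at p = 0 or 1
    entropy = 0.0 if p in (0.0, 1.0) else -(
        p * math.log2(p) + (1 - p) * math.log2(1 - p))
    return variance, entropy

# Probability rises monotonically, but uncertainty peaks at p = 0.5 and
# returns to zero at p = 1.0 -- so the two factors can be varied separately.
for p in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(p, bernoulli_uncertainty(p))
```

Because uncertainty is an inverted-U function of probability, a design with probability levels on both sides of 0.5 lets MEP effects of probability and of uncertainty be estimated independently, which is what the study reports doing.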
  • Source
    ABSTRACT: It has been proposed that the general function of the brain is inference, which corresponds quantitatively to the minimization of uncertainty (or the maximization of information). However, there has been a lack of clarity about exactly what this means. Efforts to quantify information have been in agreement that it depends on probabilities (through Shannon entropy), but there has long been a dispute about the definition of probabilities themselves. The "frequentist" view is that probabilities are (or can be) essentially equivalent to frequencies, and that they are therefore properties of a physical system, independent of any observer of the system. E.T. Jaynes developed the alternate "Bayesian" definition, in which probabilities are always conditional on a state of knowledge through the rules of logic, as expressed in the maximum entropy principle. In doing so, Jaynes and others provided the objective means for deriving probabilities, as well as a unified account of information and logic (knowledge and reason). However, neuroscience literature virtually never specifies any definition of probability, nor does it acknowledge any dispute concerning the definition. Although there has recently been tremendous interest in Bayesian approaches to the brain, even in the Bayesian literature it is common to find probabilities that are purported to come directly and unconditionally from frequencies. As a result, scientists have mistakenly attributed their own information to the neural systems they study. Here I argue that the adoption of a strictly Jaynesian approach will prevent such errors and will provide us with the philosophical and mathematical framework that is needed to understand the general function of the brain. Accordingly, our challenge becomes the identification of the biophysical basis of Jaynesian information and logic. 
I begin to address this issue by suggesting how we might identify a probability distribution over states of one physical system (an "object") conditional only on the biophysical state of another physical system (an "observer"). The primary purpose in doing so is not to characterize information and inference in exquisite, quantitative detail, but to be as clear and precise as possible about what it means to perform inference and how the biophysics of the brain could achieve this goal.
Information 12/2012; 3(4). DOI:10.3390/info3020175
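The Jaynesian point that probabilities, and hence Shannon entropy, are conditional on a state of knowledge rather than properties of the physical system can be made concrete with a toy example. The die scenario below is my illustration, not the author's:

```python
import math

def shannon_entropy(probs):
    """Shannon entropy in bits of a discrete probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# One physical die, two observers with different knowledge states.
# Each assigns the maximum-entropy distribution consistent with what it knows,
# so the "uncertainty of the die" differs between observers -- the frequentist
# reading, in which the distribution belongs to the die itself, cannot
# accommodate this.
ignorant = [1 / 6] * 6                       # knows only that there are 6 faces
knows_even = [0, 1 / 3, 0, 1 / 3, 0, 1 / 3]  # has learned the roll is even

print(shannon_entropy(ignorant))    # log2(6), about 2.585 bits
print(shannon_entropy(knows_even))  # log2(3), about 1.585 bits
```

On this reading, attributing the first distribution to a neural system that actually possesses the second is exactly the error the abstract warns against: mistaking the scientist's information for the system's.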