Striatal Activity Underlies Novelty-Based Choice in Humans

Wellcome Trust Centre for Neuroimaging, University College London, 12 Queen Square, London WC1N 3BG, UK.
Neuron (Impact Factor: 15.98). 07/2008; 58(6):967-73. DOI: 10.1016/j.neuron.2008.04.027
Source: PubMed

ABSTRACT: The desire to seek new and unfamiliar experiences is a fundamental behavioral tendency in humans and other species. In economic decision making, novelty seeking is often rational, insofar as uncertain options may prove valuable and advantageous in the long run. Here, we show that, even when the degree of perceptual familiarity of an option is unrelated to choice outcome, novelty nevertheless drives choice behavior. Using functional magnetic resonance imaging (fMRI), we show that this behavior is specifically associated with striatal activity, in a manner consistent with computational accounts of decision making under uncertainty. Furthermore, this activity predicts interindividual differences in susceptibility to novelty. These data indicate that the brain uses perceptual novelty to approximate choice uncertainty in decision making, which in certain contexts gives rise to a newly identified and quantifiable source of human irrationality.
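The computational account invoked here, in which perceptual novelty stands in for choice uncertainty, is often modeled as a "novelty bonus": an increment to an option's estimated value that decays as the option becomes familiar. A minimal illustrative sketch in Python (the function names and the 1/(1+n) decay schedule are assumptions for illustration, not the paper's fitted model):

```python
import math
import random

def novelty_bonus_values(q_values, counts, bonus=0.5):
    """Inflate each option's value by a bonus that decays with familiarity.

    q_values: learned value estimates; counts: times each option was sampled.
    Unsampled options (count 0) receive the full bonus, so novelty alone can
    tip a choice even when the learned values are equal.
    """
    return [q + bonus / (1.0 + n) for q, n in zip(q_values, counts)]

def softmax_choice(values, beta=3.0, rng=random):
    """Sample an option index with probability proportional to exp(beta * value)."""
    exps = [math.exp(beta * v) for v in values]
    r = rng.random() * sum(exps)
    acc = 0.0
    for i, e in enumerate(exps):
        acc += e
        if r <= acc:
            return i
    return len(values) - 1
```

With equal learned values but unequal familiarity, the novel option's effective value is higher, so a softmax chooser picks it more often; this is "irrational" in exactly the sense the abstract describes when familiarity is unrelated to outcome.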

Available from: Ben Seymour, Jul 06, 2015
    ABSTRACT: In reinforcement learning (RL), a decision maker searching for the most rewarding option is often faced with the question: What is the value of an option that has never been tried before? One way to frame this question is as an inductive problem: How can I generalize my previous experience with one set of options to a novel option? We show how hierarchical Bayesian inference can be used to solve this problem, and we describe an equivalence between the Bayesian model and temporal difference learning algorithms that have been proposed as models of RL in humans and animals. According to our view, the search for the best option is guided by abstract knowledge about the relationships between different options in an environment, resulting in greater search efficiency compared to traditional RL algorithms previously applied to human cognition. In two behavioral experiments, we test several predictions of our model, providing evidence that humans learn and exploit structured inductive knowledge to make predictions about novel options. In light of this model, we suggest a new interpretation of dopaminergic responses to novelty. Copyright © 2015 Cognitive Science Society, Inc.
    Topics in Cognitive Science 03/2015; DOI:10.1111/tops.12138 · 2.88 Impact Factor
    ABSTRACT: Adolescence is associated with rapidly changing environmental demands, which require excellent adaptive skills and high cognitive flexibility. Feedback-guided adaptive learning and cognitive flexibility are driven by reward prediction error (RPE) signals, which indicate the accuracy of expectations and can be estimated using computational models. Despite the importance of cognitive flexibility during adolescence, little is known about how RPE processing in cognitive flexibility differs between adolescence and adulthood. In this study, we investigated the developmental aspects of cognitive flexibility by means of computational models and functional magnetic resonance imaging (fMRI). We compared the neural and behavioral correlates of cognitive flexibility in healthy adolescents (12–16 years) with those of adults performing a probabilistic reversal learning task. Using a modified risk-sensitive reinforcement learning model, we found that adolescents learned faster from negative RPEs than adults did. The fMRI analysis revealed that, within the RPE network, adolescents had a significantly altered RPE response in the anterior insula, an effect driven mainly by increased responses to negative prediction errors. In summary, our findings indicate that decision making in adolescence goes beyond merely increased reward-seeking behavior, and they provide a developmental perspective on the behavioral and neural mechanisms underlying cognitive flexibility in the context of reinforcement learning.
    NeuroImage 09/2014; 104. DOI:10.1016/j.neuroimage.2014.09.018 · 6.13 Impact Factor
    ABSTRACT: Novelty seeking refers to the tendency of humans and animals to explore novel and unfamiliar stimuli and environments. The idea that dopamine modulates novelty seeking is supported by evidence that novel stimuli excite dopamine neurons and activate brain regions receiving dopaminergic input; dopamine has also been shown to drive exploratory behavior in novel environments. It is not clear, however, whether dopamine promotes novelty seeking when it is framed as a decision to explore novel options versus exploit familiar ones. To test this, we administered systemic injections of saline or GBR-12909, a selective dopamine transporter (DAT) inhibitor, to monkeys and assessed their novelty-seeking behavior during a probabilistic decision-making task. The task involved pseudorandom introductions of novel choice options, giving the monkeys the opportunity to explore novel options or to exploit familiar options they had already sampled. We found that DAT blockade increased the monkeys' preference for novel options. A reinforcement learning (RL) model fit to the choice data showed that the increased novelty seeking after DAT blockade was driven by an increase in the initial value the monkeys assigned to novel options. Blocking DAT did not, however, modulate the rate at which the monkeys learned which cues were most predictive of reward, or their tendency to exploit that knowledge. These data demonstrate that dopamine enhances novelty-driven value and imply that excessive novelty seeking, characteristic of impulsivity and behavioral addictions, might be caused by increases in dopamine stemming from reduced reuptake.
    Behavioral Neuroscience 06/2014; 128(5):556-566. DOI:10.1037/a0037128 · 3.25 Impact Factor
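The model-based result in the last entry, that DAT blockade raised the initial value assigned to novel options while leaving learning and exploitation untouched, can be sketched as a tabular Q-learner with a configurable initialization for unseen options (the class and parameter names are illustrative assumptions, not the authors' fitted model):

```python
class NoveltyQLearner:
    """Tabular Q-learning where unseen options start at a configurable value.

    A higher `novel_init` (sketching the reported effect of DAT blockade)
    makes newly introduced options more likely to be chosen, without
    changing the learning rate `alpha` itself.
    """

    def __init__(self, alpha=0.2, novel_init=0.0):
        self.alpha = alpha            # learning rate (unchanged by the "drug")
        self.novel_init = novel_init  # initial value assigned to unseen options
        self.q = {}                   # option -> current value estimate

    def value(self, option):
        # First lookup of an unseen option installs the novelty-driven prior.
        return self.q.setdefault(option, self.novel_init)

    def update(self, option, reward):
        # Standard delta rule: move the estimate toward the observed reward.
        rpe = reward - self.value(option)  # reward prediction error
        self.q[option] += self.alpha * rpe
        return rpe
```

Raising `novel_init` lets a fresh option outrank a familiar one worth, say, 0.5 before any feedback, while learning from subsequent prediction errors proceeds at the same rate regardless of the initialization.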