Input-specific control of reward and aversion in the ventral tegmental area

Nancy Pritzker Laboratory, Department of Psychiatry and Behavioral Sciences, Stanford University School of Medicine, 265 Campus Drive, Stanford, California 94305, USA.
Nature (Impact Factor: 41.46). 10/2012; 491(7423). DOI: 10.1038/nature11527
Source: PubMed


Ventral tegmental area (VTA) dopamine neurons have important roles in adaptive and pathological brain functions related to reward and motivation. However, it is unknown whether subpopulations of VTA dopamine neurons participate in distinct circuits that encode different motivational signatures, and whether inputs to the VTA differentially modulate such circuits. Here we show that, because of differences in synaptic connectivity, activation of inputs to the VTA from the laterodorsal tegmentum and the lateral habenula elicits reward and aversion in mice, respectively. Laterodorsal tegmentum neurons preferentially synapse on dopamine neurons projecting to the nucleus accumbens lateral shell, whereas lateral habenula neurons synapse primarily on dopamine neurons projecting to the medial prefrontal cortex as well as on GABAergic (γ-aminobutyric-acid-containing) neurons in the rostromedial tegmental nucleus. These results establish that distinct VTA circuits generate reward and aversion, and thereby provide a new framework for understanding the circuit basis of adaptive and pathological motivated behaviours.

    • "Since the initial demonstration by Olds and Milner using the approach of electrical intracranial self-stimulation (ICSS) in rats (Olds and Milner 1954), numerous studies have identified the so-called brain reward system—a set of discrete brain structures that are important for processing reward signals. Within the reward system, dopamine neurons in the midbrain ventral tegmental area (VTA) play crucial roles (Schultz et al. 1997; Dayan and Balleine 2002; Cohen et al. 2012; Lammel et al. 2012). The VTA forms strong reciprocal connections with several brain areas, such as the nucleus accumbens (NAc), lateral hypothalamus, and prefrontal cortex (Calabresi et al. 2007). "
    ABSTRACT: The dorsal raphe nucleus (DRN) is one of the most sensitive reward sites in the brain. However, the exact relationship between DRN neuronal activity and reward signaling has been elusive. In this review, we summarize anatomical, pharmacological, optogenetic, and electrophysiological studies on the functions and circuit mechanisms of DRN neurons in reward processing. The DRN is commonly associated with serotonin (5-hydroxytryptamine; 5-HT), but it also contains neurons with glutamate, GABA, and dopamine neurotransmitter phenotypes. Pharmacological studies indicate that 5-HT might be involved in modulating reward- or punishment-related behaviors. Recent optogenetic experiments demonstrate that transient activation of DRN neurons produces strong reinforcement signals that are carried primarily by glutamate. Moreover, activation of DRN 5-HT neurons enhances reward waiting. Electrophysiological recordings reveal that the activity of DRN neurons exhibits diverse behavioral correlates in reward-related tasks. Together, these studies demonstrate the potency of DRN neurons in reward signaling and invite further efforts to dissect the roles and mechanisms of distinct DRN neuron types in reward-related behaviors.
    Learning & Memory (Cold Spring Harbor, N.Y.) 09/2015; 22(9):452-60. DOI: 10.1101/lm.037317.114 · 3.66 Impact Factor
    • "The activation and inhibition of dopamine neurons apparently mimics positive and negative reward prediction error signals, suggesting a sufficient role of phasic dopamine signals in learning and approach. Habenula stimulation induces place dispreference, either by activating supposedly aversive dopamine neurons (Lammel et al., 2012) or by transsynaptically inhibiting dopamine neurons (Stopper et al., 2015), although the limited specificity of the TH:cre mice used (Lammel et al., 2015) and the known habenula inhibition of dopamine neurons (Matsumoto and Hikosaka, 2007) make the latter mechanism more likely. Dopamine receptor stimulation also seems necessary for these functions. "
    ABSTRACT: Rewards are defined by their behavioral functions in learning (positive reinforcement), approach behavior, economic choices, and emotions. Dopamine neurons respond to rewards with two components, similar to higher-order sensory and cognitive neurons. The initial, rapid, unselective dopamine detection component reports all salient environmental events irrespective of their reward association. It is highly sensitive to factors related to reward and thus detects a maximal number of potential rewards. It also senses aversive stimuli but reports their physical impact rather than their aversiveness. The second response component processes reward value accurately and starts early enough to prevent confusion with unrewarded stimuli and objects. It codes reward value as a numeric, quantitative utility prediction error, consistent with formal concepts of economic decision theory. Thus, the dopamine reward signal is fast, highly sensitive, and appropriate for driving and updating economic decisions. [A schematic formulation of this prediction-error signal is sketched after this entry.]
    The Journal of Comparative Neurology 08/2015; DOI:10.1002/cne.23880 · 3.23 Impact Factor
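    A standard way to write the "utility prediction error" that the abstract above attributes to the second dopamine response component is the temporal-difference form sketched below. This is a common formalization given for orientation, not necessarily the authors' exact definition, and the symbols are generic:

        \delta_t = u(r_t) + \gamma \, \hat{V}(s_{t+1}) - \hat{V}(s_t)

    Here u(r_t) is the subjective utility of the reward received at time t, \hat{V}(s_t) is the predicted discounted future utility in state s_t, and \gamma is a temporal discount factor. A fully predicted reward leaves \delta_t \approx 0, an unexpectedly large reward gives \delta_t > 0, and an omitted reward gives \delta_t < 0, matching the bidirectional dopamine responses described above.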
    • "Rather, it is more likely that the long process of slow reward­modulated synaptic modification describes the process of cortical molding in early development, whereby specific cortical areas slowly learn to perform a generic type of task, as determined by which other areas they receive information from, and by their specific pattern of reward­based plasticity modulation (for example, certain cortical areas, but not others, receive specific dopamine input on aversive trials, which is critical for aversive learning ​ (Lammel et al., 2012)​ ). "
    ABSTRACT: Recurrent neural networks exhibit complex dynamics reminiscent of high-level cortical activity during behavioral tasks. However, existing training methods for such networks are either biologically implausible or require a real-time continuous error signal to guide the learning process. This is in contrast with most behavioral tasks, which provide only time-sparse, delayed rewards. Here we introduce a biologically plausible reward-modulated Hebbian learning algorithm that can train recurrent networks based solely on delayed, phasic reward signals at the end of each trial. The method requires no dedicated feedback or readout network: the whole network connectivity is subject to learning, and the network's output is read from one arbitrarily chosen network cell. We use this method to successfully train recurrent networks on a simple flexible response task, the sequential XOR. The resulting networks exhibit dynamic coding of task-relevant information, with neural encodings of various task features fluctuating widely over the course of a trial. Furthermore, network activity moves from a stimulus-specific representation to a response-specific representation during response time, in accordance with neural recordings for similar tasks. We conclude that recurrent neural networks, trained with reward-modulated Hebbian learning, offer a plausible model of cortical dynamics during both learning and performance of flexible association. [A minimal sketch of such a reward-modulated update follows this abstract.]
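    The abstract above describes the learning rule only in words; the Python sketch below illustrates the general shape of such a rule under stated assumptions. It is not the authors' published algorithm: the network size, noise level, learning rate, and the toy end-of-trial "task" are all illustrative. Exploratory perturbations during a trial accumulate a Hebbian eligibility trace, and a single delayed scalar reward, compared against a running baseline, gates the weight update at the end of the trial.

        # Minimal sketch of reward-modulated Hebbian learning in a small
        # recurrent network (illustrative; not the cited paper's exact method).
        import numpy as np

        rng = np.random.default_rng(0)

        N = 50                                    # recurrent units (illustrative)
        T = 100                                   # time steps per trial
        eta = 0.05                                # learning rate (illustrative)
        W = rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))   # recurrent weights
        baseline = 0.0                            # running average of trial reward

        def run_trial(W, target):
            """One trial: return the delayed reward and the eligibility trace."""
            x = np.zeros(N)                       # unit activations
            elig = np.zeros_like(W)               # Hebbian eligibility trace
            for _ in range(T):
                drive = np.tanh(W @ x)            # unperturbed update
                x_new = np.tanh(W @ x + 0.1 * rng.normal(size=N))  # perturbed
                # Hebbian product of the perturbation-driven deviation in
                # postsynaptic activity with presynaptic activity.
                elig += np.outer(x_new - drive, x)
                x = x_new
            # Delayed scalar reward: closeness of one readout unit to a target.
            return -abs(x[0] - target), elig

        for trial in range(200):
            reward, elig = run_trial(W, target=0.5)
            # The only teaching signal: deviation of the delayed reward from
            # its running baseline, gating the accumulated eligibility trace.
            W += eta * (reward - baseline) * elig
            baseline += 0.1 * (reward - baseline)

    The point mirrored from the abstract is that no per-timestep error signal or dedicated readout network is required; the sole teaching signal is the phasic reward delivered once per trial.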