Role of rodent secondary motor cortex in value-based action selection

Neuroscience Laboratory, Institute for Medical Sciences, Ajou University School of Medicine, Suwon, Korea.
Nature Neuroscience 08/2011; 14(9):1202-8. DOI: 10.1038/nn.2881
Source: PubMed

ABSTRACT Despite widespread neural activity related to reward values, signals related to an upcoming choice have not been clearly identified in the rodent brain. Here we examined neuronal activity in the lateral (AGl) and medial (AGm) agranular cortex, corresponding to the primary and secondary motor cortex, respectively, in rats performing a dynamic foraging task. Choice signals arose in the AGm before the behavioral manifestation of the rat's choice, and earlier than in any other area of the rat brain previously studied under free-choice conditions. The AGm also conveyed neural signals for decision value and chosen value. By contrast, upcoming choice signals arose later, and value signals were weaker, in the AGl. We also found that AGm lesions made the rats' choices less dependent on dynamically updated values. These results suggest that the rodent secondary motor cortex might be uniquely involved in both representing and reading out value signals for flexible action selection.
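
In dynamic foraging tasks like the one above, choice behavior is commonly modeled with an incremental value-update rule paired with a stochastic choice rule. The sketch below illustrates that standard modeling approach only; the function names, parameter values, and reward scheme are illustrative assumptions, not details from the paper.

```python
import math
import random

def softmax_choice(q_left, q_right, beta=3.0):
    """Pick 'left' or 'right' with softmax probabilities over action values.
    beta (inverse temperature) controls how deterministic the choice is."""
    p_left = 1.0 / (1.0 + math.exp(-beta * (q_left - q_right)))
    return "left" if random.random() < p_left else "right"

def update_value(q_chosen, reward, alpha=0.2):
    """Delta rule: move the chosen action's value toward the obtained reward."""
    return q_chosen + alpha * (reward - q_chosen)

# One trial of the toy agent: choose, observe an outcome, update.
q = {"left": 0.0, "right": 0.0}
action = softmax_choice(q["left"], q["right"])
reward = 1.0  # stand-in for a rewarded trial
q[action] = update_value(q[action], reward)
```

Because the values are updated trial by trial, lesions that decouple choice from such dynamically updated values (as reported for the AGm) would appear in this framework as a flattened dependence of choice probability on the value difference.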

Available from: Suhyun Jo, Jul 04, 2015
  • Source
    ABSTRACT: Shifting between motor plans is often necessary for adaptive behavior. When the consequences of one's actions change, it is often imperative to switch from automatic actions to deliberative, controlled actions. The pre-supplementary motor area (pre-SMA) in primates, akin to the premotor cortex (M2) in mice, has been implicated in motor learning, motor planning, and action switching. We hypothesized that M2 would be differentially involved in goal-directed actions, which are controlled by their consequences, versus habits, which depend more on past reinforcement history and less on their consequences. To investigate this, we performed M2 lesions in mice and then concurrently trained them to press the same lever for the same food reward using two different schedules of reinforcement that differentially bias toward goal-directed versus habitual action strategies. We then probed whether actions depended on their expected consequences through outcome revaluation testing. M2 lesions did not affect the acquisition of lever-pressing. However, in mice with M2 lesions, lever-pressing was insensitive to changes in expected outcome value following goal-directed training, whereas habitual actions were intact. We confirmed a role for M2 in goal-directed, but not habitual, actions in separate groups of mice trained on the individual schedules biasing toward goal-directed versus habitual actions. These data indicate that M2 is critical for updating actions based on their consequences, and suggest that habitual action strategies may not require processing by M2 and the updating of motor plans.
    Frontiers in Computational Neuroscience 08/2013; 7:110. DOI:10.3389/fncom.2013.00110
  • Source
    ABSTRACT: Our choices often require appropriate actions to obtain a preferred outcome, but the neural underpinnings that link decision making and action selection remain largely undetermined. Recent theories propose that action selection occurs in parallel with the decision process. Specifically, action selection in motor regions is thought to originate from a competitive process that is gradually biased by evidence signals from other regions, such as those specialized in value computations. Biases reflecting the evaluation of choice options should thus emerge in the motor system before the decision process is complete. Using transcranial magnetic stimulation, we sought direct physiological evidence for this prediction by measuring changes in corticospinal excitability in human motor cortex during value-based decisions. We found that excitability for chosen versus unchosen actions distinguishes the forthcoming choice before the decision process is complete. Both excitability and reaction times varied as a function of the subjective value difference between chosen and unchosen actions, consistent with this effect being value-driven. This relationship was not observed in the absence of a decision. Our data provide novel evidence in humans that internally generated value-based decisions influence the competition between action representations in motor cortex before the decision process is complete. This is incompatible with models of serial processing of stimulus, decision, and action.
    The Journal of Neuroscience : The Official Journal of the Society for Neuroscience 06/2012; 32(24):8373-82. DOI:10.1523/JNEUROSCI.0270-12.2012
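
The parallel-competition idea in this abstract can be illustrated with a toy race model: two action accumulators race toward a bound while the value difference biases the drift of the favored one. This is a generic sketch of that class of model, not the authors' analysis; the thresholds, drift, and noise values are arbitrary assumptions.

```python
import random

def race_to_threshold(value_diff, threshold=1.0, drift=0.05, noise=0.1, seed=0):
    """Two noisy accumulators (one per candidate action) race to a bound.
    value_diff > 0 biases the drift toward the first action; returns the
    winning action and the number of steps taken (a proxy for reaction time)."""
    rng = random.Random(seed)
    a = b = 0.0
    steps = 0
    while a < threshold and b < threshold:
        a += drift * (1.0 + value_diff) + rng.gauss(0.0, noise)
        b += drift * (1.0 - value_diff) + rng.gauss(0.0, noise)
        steps += 1
    return ("first" if a >= threshold else "second"), steps
```

With a larger value difference, the biased accumulator wins more often and reaches the bound in fewer steps, mirroring the value-dependent excitability and reaction-time effects reported above.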
  • Source
    ABSTRACT: Behavioral changes driven by reinforcement and punishment are referred to as simple, or model-free, reinforcement learning. Animals can also change their behaviors by observing events that are neither appetitive nor aversive, when these events provide new information about the payoffs available from alternative actions. This is an example of model-based reinforcement learning and can be accomplished by incorporating hypothetical reward signals into the value functions for specific actions. Recent neuroimaging and single-neuron recording studies have shown that the prefrontal cortex and the striatum are involved not only in reinforcement and punishment but also in model-based reinforcement learning. We found evidence for both types of learning, and hence for hybrid learning, in monkeys during simulated competitive games. In addition, in both the dorsolateral prefrontal cortex and the orbitofrontal cortex, individual neurons heterogeneously encoded signals related to actual and hypothetical outcomes from specific actions, suggesting that both areas might contribute to hybrid learning.
    Annals of the New York Academy of Sciences 12/2011; 1239:100-8. DOI:10.1111/j.1749-6632.2011.06223.x
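
The hybrid scheme described in this abstract, updating the chosen action from its actual outcome while updating unchosen actions from the payoffs they would have earned, can be sketched as a fictive-update rule. This is a minimal illustration of the general idea; the function name, learning rates, and action labels are hypothetical, not from the paper.

```python
def hybrid_update(values, chosen, actual_reward, hypothetical,
                  alpha=0.3, alpha_hypo=0.15):
    """Model-free update of the chosen action from the actual reward, plus a
    model-based (fictive) update of unchosen actions from the hypothetical
    rewards they would have yielded. Returns a new value dictionary."""
    new = dict(values)
    new[chosen] += alpha * (actual_reward - new[chosen])
    for action, fictive_reward in hypothetical.items():
        if action != chosen:
            new[action] += alpha_hypo * (fictive_reward - new[action])
    return new
```

In a competitive game, the hypothetical payoffs of unchosen actions can be inferred from the opponent's revealed choice, so a single trial informs every action's value rather than only the chosen one.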