Generating Coherent Patterns of Activity from Chaotic Neural Networks

Department of Neuroscience, Department of Physiology and Cellular Biophysics, Columbia University College of Physicians and Surgeons, New York, NY 10032-2695, USA.
Neuron 09/2009; 63(4):544-57. DOI: 10.1016/j.neuron.2009.07.018
Source: PubMed


Neural circuits display complex activity patterns both spontaneously and when responding to a stimulus or generating a motor output. How are these two forms of activity related? We develop a procedure called FORCE learning for modifying synaptic strengths either external to or within a model neural network to change chaotic spontaneous activity into a wide variety of desired activity patterns. FORCE learning works even though the networks we train are spontaneously chaotic and we leave feedback loops intact and unclamped during learning. Using this approach, we construct networks that produce a wide variety of complex output patterns, input-output transformations that require memory, multiple outputs that can be switched by control inputs, and motor patterns matching human motion capture data. Our results reproduce data on premovement activity in motor and premotor cortex, and suggest that synaptic plasticity may be a more rapid and powerful modulator of network activity than generally appreciated.
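The FORCE procedure described above pairs ongoing network dynamics with a recursive least-squares (RLS) update of the readout weights while the feedback loop stays intact. The following is a minimal NumPy sketch of this idea, assuming a standard tanh rate network with a single readout; the network size, gains, and sine-wave target are illustrative choices, not the paper's exact settings:

```python
import numpy as np

rng = np.random.default_rng(0)

N = 200          # network size
g = 1.5          # gain; g > 1 makes the autonomous network chaotic
dt = 0.1
tau = 1.0
alpha = 1.0      # RLS regularizer: P starts as I / alpha

J = g * rng.standard_normal((N, N)) / np.sqrt(N)   # recurrent weights (fixed)
w_f = 2.0 * rng.random(N) - 1.0                    # feedback weights (fixed)
w = np.zeros(N)                                    # readout weights (learned)
P = np.eye(N) / alpha                              # running inverse-correlation estimate

x = 0.5 * rng.standard_normal(N)
r = np.tanh(x)
z = w @ r

T = 2000
ts = np.arange(T) * dt
target = np.sin(2 * np.pi * ts / 60.0)             # illustrative target pattern

errors = []
for t in range(T):
    # network dynamics with the feedback loop left intact during learning
    x += dt / tau * (-x + J @ r + w_f * z)
    r = np.tanh(x)
    z = w @ r

    # RLS update of the readout weights (the FORCE step)
    k = P @ r
    c = 1.0 / (1.0 + r @ k)
    P -= c * np.outer(k, k)
    e = z - target[t]          # error measured *before* the weight update
    w -= c * e * k
    errors.append(abs(e))
```

The key design point, per the abstract, is that the output `z` is fed back into the network unclamped throughout training; RLS keeps the moment-to-moment error small enough that the feedback does not destabilize learning.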



  • Source
    • "Here we used a modified version of the original recursive least squares (RLS) algorithm (Simon, 2002; Jaeger and Haas, 2004) based on the FORCE learning formulation (Sussillo and Abbott, 2009), in order to learn the reservoir-to-readout connection weights W out at each time step, while the CPG input u(t) is being fed into the reservoir. The readout weights W out are calculated such that the overall error at the readout neurons is minimized; thereby the network can learn to accurately transform the CTr-motor signal to the expected foot contact signal for each walking gait."
    ABSTRACT: Walking animals, such as stick insects, cockroaches, or ants, demonstrate a fascinating range of locomotive abilities and complex behaviors. These locomotive behaviors can comprise a variety of walking patterns, along with adaptations that allow the animals to deal with changes in environmental conditions such as uneven terrain, gaps, and obstacles. Biological studies have revealed that such complex behaviors result from a combination of biomechanics and neural mechanisms, representing the true nature of embodied interaction. While the biomechanics helps maintain flexibility and sustain a variety of movements, the neural mechanisms generate movements while making the predictions crucial for achieving adaptation. Such prediction, or planning ahead, can be achieved through internal models that are grounded in the overall behavior of the animal. Inspired by these findings, we present an artificial bio-inspired walking system that effectively combines biomechanics (in terms of the body and leg structures) with the underlying neural mechanisms. The neural mechanisms consist of (1) central pattern generator-based control for generating basic rhythmic patterns and coordinated movements, (2) distributed (at each leg) recurrent neural network-based adaptive forward models with efference copies, serving as internal models for sensory prediction and instantaneous state estimation, and (3) searching and elevation control for adapting the movement of an individual leg to different environmental conditions. Using simulations, we show that this bio-inspired approach with adaptive internal models allows the walking robot to perform complex locomotive behaviors as observed in insects, including walking on undulating terrain, crossing large gaps, adapting to leg damage, and climbing over high obstacles. Furthermore, we demonstrate that the newly developed recurrent-network-based approach to online forward models outperforms the adaptive neuron forward models, hitherto the state of the art, in modeling a subset of similar walking behaviors in walking robots.
    Frontiers in Neurorobotics 10/2015; 9(10). DOI:10.3389/fnbot.2015.00010
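The per-time-step readout learning described in the excerpt above is a recursive least-squares update. A self-contained sketch, assuming a matrix-valued readout `W_out` and a running inverse-correlation matrix `P` initialized to `I / alpha`; the function name and shapes are illustrative, not from the cited paper:

```python
import numpy as np

def rls_step(W_out, P, r, target):
    """One recursive-least-squares (RLS) update of readout weights.

    W_out  : (n_out, N) readout weights
    P      : (N, N) running inverse-correlation matrix of reservoir states
    r      : (N,) reservoir state at this time step
    target : (n_out,) desired readout at this time step
    Returns the updated (W_out, P) and the pre-update error.
    """
    k = P @ r
    c = 1.0 / (1.0 + r @ k)
    e = W_out @ r - target           # error before the update
    W_out = W_out - c * np.outer(e, k)
    P = P - c * np.outer(k, k)
    return W_out, P, e
```

In use, `P` would start as `np.eye(N) / alpha` and `rls_step` would be called once per time step while the reservoir is driven by its input, matching the "at each time step" training the excerpt describes.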
  • Source
    • "Our model is a fully-connected continuous-time recurrent neural network of N neurons, governed by the following equations (Sompolinsky et al., 1988; Jaeger, 2001; Maass et al., 2002; Sussillo and Abbott, 2009): "
    ABSTRACT: Recurrent neural networks exhibit complex dynamics reminiscent of high-level cortical activity during behavioral tasks. However, existing training methods for such networks are either biologically implausible or require a real-time continuous error signal to guide the learning process. This is in contrast with most behavioral tasks, which provide only time-sparse, delayed rewards. Here we introduce a biologically plausible reward-modulated Hebbian learning algorithm that can train recurrent networks based solely on delayed, phasic reward signals at the end of each trial. The method requires no dedicated feedback or readout network: the whole network connectivity is subject to learning, and the network's output is read from one arbitrarily chosen network cell. We use this method to successfully train recurrent networks on a simple flexible-response task, the sequential XOR. The resulting networks exhibit dynamic coding of task-relevant information, with neural encodings of various task features fluctuating widely over the course of a trial. Furthermore, network activity moves from a stimulus-specific representation to a response-specific representation during response time, in accordance with neural recordings for similar tasks. We conclude that recurrent neural networks, trained with reward-modulated Hebbian learning, offer a plausible model of cortical dynamics during both learning and performance of flexible association.
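The governing equations referenced in the excerpt above take, in the standard form used by the works it cites, roughly the following shape (symbols illustrative: tau the time constant, J the recurrent weight matrix, w the readout weights; the feedback term w^f z is present in architectures with an output loop and absent otherwise):

```latex
\tau \frac{dx_i}{dt} = -x_i(t) + \sum_{j=1}^{N} J_{ij}\, r_j(t) + w^{\mathrm{f}}_i\, z(t),
\qquad
r_j(t) = \tanh\!\big(x_j(t)\big),
\qquad
z(t) = \sum_{j=1}^{N} w_j\, r_j(t).
```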
  • Source
    • "Computational models of cortical networks typically derive irregular spiking by establishing a balance in fast E/I (Shadlen and Newsome, 1998; Renart et al., 2010), which allows inhibitory and excitatory currents to track each other closely in time, resulting in an active de-correlation in spiking (Renart et al., 2010). However, since independent, stationary Poisson processes are insufficient to explain the high variability of spiking (CV > 1) observed typically in vivo (Softky and Koch, 1993; Shadlen and Newsome, 1998), alternative mechanisms beyond external Poisson inputs (Brunel, 2000) have been proposed to further increase variability, such as intrinsic chaotic dynamics (Van Vreeswijk and Sompolinsky, 1996; Sussillo and Abbott, 2009; Ostojic, 2014), conductance-based synapses (Kumar et al., 2008), clustered network architecture (Litwin-Kumar and Doiron, 2012), external synchronous inputs (Stevens and Zador, 1998), and 'doubly stochastic' approaches using nonstationary Poisson processes (Churchland et al., 2010), among others. Our results confirm the high variability of single neuron firing in the AW state with CV > 1. "
    ABSTRACT: Spontaneous fluctuations in neuronal activity emerge at many spatial and temporal scales in cortex. Population measures have found these fluctuations to organize as scale-invariant neuronal avalanches, suggesting that cortical dynamics are critical. Macroscopic dynamics, though, depend on physiological state and are ambiguous as to their cellular composition, spatiotemporal origin, and contributions from synaptic input versus action potential (AP) output. Here, we study spontaneous firing in pyramidal neurons (PNs) from rat superficial cortical layers in vivo and in vitro using 2-photon imaging. As the animal transitions from the anesthetized to the awake state, spontaneous single-neuron firing increases in irregularity and assembles into scale-invariant avalanches at the group level. In vitro, spike avalanches emerged naturally yet required balanced excitation and inhibition. This demonstrates that neuronal avalanches are linked to the global physiological state of wakefulness and that cortical resting activity organizes as avalanches from the firing of local PN groups up to global population activity.
    eLife Sciences 07/2015; 4. DOI:10.7554/eLife.07224
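The CV > 1 benchmark discussed in the excerpt above is the coefficient of variation of inter-spike intervals (ISIs): CV = std(ISI) / mean(ISI), which equals 1 for a stationary Poisson process. A small sketch contrasting Poisson firing with the "doubly stochastic" case, where a fluctuating rate pushes CV above 1; the rates and sample counts are arbitrary illustrations:

```python
import numpy as np

def isi_cv(spike_times):
    """Coefficient of variation of inter-spike intervals:
    CV = std(ISI) / mean(ISI). CV = 1 for a stationary Poisson
    process; CV > 1 indicates super-Poisson (e.g. bursty) firing."""
    isis = np.diff(np.sort(np.asarray(spike_times)))
    return isis.std() / isis.mean()

rng = np.random.default_rng(0)

# Stationary Poisson spiking: exponential ISIs, so CV is close to 1
poisson_isis = rng.exponential(scale=0.1, size=10_000)
cv_poisson = isi_cv(np.cumsum(poisson_isis))

# "Doubly stochastic" spiking: the rate itself switches between a low
# and a high value, broadening the ISI distribution and raising the CV
rates = rng.choice([2.0, 50.0], size=10_000)
doubly_isis = rng.exponential(scale=1.0 / rates)
cv_doubly = isi_cv(np.cumsum(doubly_isis))
```

This is why, as the excerpt notes, independent stationary Poisson inputs cannot by themselves account for the CV > 1 observed in vivo: some additional source of variability (chaotic dynamics, clustering, or rate nonstationarity) is needed.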