Generating coherent patterns of activity from chaotic neural networks.

Department of Neuroscience, Department of Physiology and Cellular Biophysics, Columbia University College of Physicians and Surgeons, New York, NY 10032-2695, USA.
Neuron (Impact Factor: 15.77). 09/2009; 63(4):544-57. DOI: 10.1016/j.neuron.2009.07.018
Source: PubMed

ABSTRACT: Neural circuits display complex activity patterns both spontaneously and when responding to a stimulus or generating a motor output. How are these two forms of activity related? We develop a procedure called FORCE learning for modifying synaptic strengths either external to or within a model neural network to change chaotic spontaneous activity into a wide variety of desired activity patterns. FORCE learning works even though the networks we train are spontaneously chaotic and we leave feedback loops intact and unclamped during learning. Using this approach, we construct networks that produce a wide variety of complex output patterns, input-output transformations that require memory, multiple outputs that can be switched by control inputs, and motor patterns matching human motion capture data. Our results reproduce data on premovement activity in motor and premotor cortex, and suggest that synaptic plasticity may be a more rapid and powerful modulator of network activity than generally appreciated.
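The FORCE idea can be illustrated compactly: a chaotic rate network runs with its readout feedback loop intact while recursive least squares (RLS) updates the readout weights fast enough that the output error stays small from the very first steps. Below is a minimal sketch of that scheme; the network size, gain, time step, and sine target are illustrative choices, not the parameters or architecture used in the paper.

```python
import numpy as np

def force_train(steps=3000, n=300, g=1.5, alpha=1.0, dt=0.1, seed=0):
    """Sketch of FORCE learning: recursive least squares (RLS) updates
    the readout weights w of a chaotic rate network while the readout
    feeds back into the network (the loop stays unclamped)."""
    rng = np.random.default_rng(seed)
    J = g * rng.standard_normal((n, n)) / np.sqrt(n)  # g > 1: chaotic regime
    wf = rng.uniform(-1.0, 1.0, size=n)               # fixed feedback weights
    w = np.zeros(n)                                   # trained readout weights
    P = np.eye(n) / alpha                             # running inverse correlation
    x = 0.5 * rng.standard_normal(n)                  # network state
    t = np.arange(steps) * dt
    f = np.sin(0.3 * t)                               # illustrative target output
    errs = np.empty(steps)
    for i in range(steps):
        r = np.tanh(x)
        z = w @ r                                     # network output
        x += dt * (-x + J @ r + wf * z)               # feedback loop left intact
        Pr = P @ r                                    # RLS update of w and P
        k = Pr / (1.0 + r @ Pr)
        P -= np.outer(k, Pr)
        w -= (z - f[i]) * k                           # push readout error to zero
        errs[i] = abs(z - f[i])                       # pre-update tracking error
    return errs, w
```

Because the RLS step drives the instantaneous output error toward zero at every time step, the fed-back signal never strays far from the target, which is what lets the loop stay unclamped during training.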

    ABSTRACT: A spiking neural network model is described for learning to discriminate among spatial patterns in an unsupervised manner. The network anatomy consists of source neurons that are activated by external inputs, a reservoir that resembles a generic cortical layer with an excitatory-inhibitory (EI) network and a sink layer of neurons for readout. Synaptic plasticity in the form of STDP is imposed on all the excitatory and inhibitory synapses at all times. While long-term excitatory STDP enables sparse and efficient learning of the salient features in inputs, inhibitory STDP enables this learning to be stable by establishing a balance between excitatory and inhibitory currents at each neuron in the network. The synaptic weights between source and reservoir neurons form a basis set for the input patterns. The neural trajectories generated in the reservoir due to input stimulation and lateral connections between reservoir neurons can be readout by the sink layer neurons. This activity is used for adaptation of synapses between reservoir and sink layer neurons. A new measure called the discriminability index (DI) is introduced to compute if the network can discriminate between old patterns already presented in an initial training session. The DI is also used to compute if the network adapts to new patterns without losing its ability to discriminate among old patterns. The final outcome is that the network is able to correctly discriminate between all patterns-both old and new. This result holds as long as inhibitory synapses employ STDP to continuously enable current balance in the network. The results suggest a possible direction for future investigation into how spiking neural networks could address the stability-plasticity question despite having continuous synaptic plasticity.
    Frontiers in Computational Neuroscience 01/2014; 8:159. · 2.23 Impact Factor
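The pair-based STDP rule referred to above is commonly modeled with exponential windows: a pre-before-post spike pair potentiates the synapse, a post-before-pre pair depresses it. A generic textbook form is sketched below; the amplitudes and time constants are illustrative assumptions, and the paper's separate excitatory and inhibitory rules may differ in detail.

```python
import numpy as np

def stdp_dw(delta_t, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Pair-based STDP window: weight change as a function of
    delta_t = t_post - t_pre (ms). Pre-before-post (delta_t > 0)
    potentiates; post-before-pre (delta_t < 0) depresses."""
    delta_t = np.asarray(delta_t, dtype=float)
    return np.where(
        delta_t > 0,
        a_plus * np.exp(-delta_t / tau_plus),    # causal pairs: potentiation
        -a_minus * np.exp(delta_t / tau_minus),  # anti-causal pairs: depression
    )
```

Making a_minus slightly larger than a_plus, as here, is one common way of keeping net weight growth in check.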
    ABSTRACT: In this paper, we present a control framework for the control of a hydraulic excavator. An excavator can be viewed as a robotic manipulator that interacts with the environment. It follows that the control method employed to control the excavator must take into account the complex soil–tool interaction in order to achieve the desired trajectory of the manipulator. Impedance control has been proven to be effective in this aspect in that it provides a unified approach to constrained and unconstrained motion. Another important aspect when considering the automation of an excavator is the control of the hydraulic servo system. Obtaining a useful explicit model for the control of hydraulic servo systems is not a simple task due to their inherent complex nonlinearity. Therefore, control techniques that do not require an explicit representation of the plant are required. In this work, we integrate two controllers for the automation of an excavator. To control the rigid-body motion of the excavator, impedance control and sliding mode control are applied. The results are the desired cylinder forces required to achieve the desired trajectory. Given the desired cylinder forces, an online learning control method is employed to control the hydraulic servo system so that the desired forces are generated. Echo-state networks, which are a class of recurrent neural networks, are utilized within the online learning control framework in order to learn an inverse model of the hydraulic servo system. Thus, the online learning control framework does not require an explicit model of the plant and also adapts to the plant using only the input and output signals. We present results of the proposed control framework on an excavator simulation environment that has been verified based on operation data from a real hydraulic excavator.
    Mechatronics 11/2014; · 1.82 Impact Factor
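An echo-state network keeps a large random recurrent reservoir fixed and trains only a linear readout on the reservoir states. The sketch below fits that readout offline with ridge regression on a toy one-step-memory task; the controller described above instead adapts the readout online from the servo system's input and output signals, and every size and constant here is an illustrative assumption.

```python
import numpy as np

def train_esn(u, y, n_res=200, rho=0.9, ridge=1e-6, seed=0):
    """Minimal echo-state network: random fixed reservoir, ridge-fit
    linear readout mapping reservoir states to targets y."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((n_res, n_res))
    W *= rho / np.abs(np.linalg.eigvals(W)).max()  # set spectral radius < 1
    w_in = rng.uniform(-0.5, 0.5, size=n_res)      # fixed input weights
    x = np.zeros(n_res)
    states = np.empty((len(u), n_res))
    for t, u_t in enumerate(u):                    # collect reservoir states
        x = np.tanh(W @ x + w_in * u_t)
        states[t] = x
    # Ridge-regression readout: solve (X^T X + ridge*I) w_out = X^T y
    w_out = np.linalg.solve(states.T @ states + ridge * np.eye(n_res),
                            states.T @ y)
    return states @ w_out, w_out
```

The same readout could equally be adapted online (e.g. with recursive least squares), which is what makes the approach attractive for plants without an explicit model.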
    ABSTRACT: Large networks of sparsely coupled, excitatory and inhibitory cells occur throughout the brain. For many models of these networks, a striking feature is that their dynamics are chaotic and thus sensitive to small perturbations. How does this chaos manifest in the neural code? Specifically, how variable are the spike patterns that such a network produces in response to an input signal? To answer this, we derive a bound for a general measure of variability: spike-train entropy. This leads to important insights on the variability of multi-cell spike pattern distributions in large recurrent networks of spiking neurons responding to fluctuating inputs. The analysis is based on results from random dynamical systems theory and is complemented by detailed numerical simulations. We find that the spike pattern entropy is an order of magnitude lower than what would be extrapolated from single cells. This holds despite the fact that network coupling becomes vanishingly sparse as network size grows, a phenomenon that depends on "extensive chaos," as previously discovered for balanced networks without stimulus drive. Moreover, we show how spike pattern entropy is controlled by temporal features of the inputs. Our findings provide insight into how neural networks may encode stimuli in the presence of inherently chaotic dynamics.
    Frontiers in Computational Neuroscience 10/2014; 8:123. · 2.23 Impact Factor
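Spike-pattern entropy as used above can be illustrated with a simple plug-in estimator: bin the spike train into a binary sequence, slide a window of fixed length over it, and compute the Shannon entropy of the resulting word distribution. The paper derives analytic bounds rather than estimating from data, so the sketch below only illustrates the quantity being bounded.

```python
import numpy as np
from collections import Counter

def spike_word_entropy(spikes, word_len=5):
    """Plug-in estimate of spike-pattern (word) entropy in bits:
    Shannon entropy of the empirical distribution of length-word_len
    binary words in a binned spike train."""
    words = [tuple(spikes[i:i + word_len])
             for i in range(len(spikes) - word_len + 1)]
    counts = np.array(list(Counter(words).values()), dtype=float)
    p = counts / counts.sum()                 # empirical word probabilities
    return float(-(p * np.log2(p)).sum())
```

A perfectly regular train gives zero entropy, while an independent fair-coin train approaches word_len bits; the paper's result is that chaotic recurrent networks sit far below the naive extrapolation from single cells.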
