Article

Stimulus-dependent suppression of chaos in recurrent neural networks

Lewis-Sigler Institute for Integrative Genomics, Icahn 262, Princeton University, Princeton, New Jersey 08544, USA.
Physical Review E 82(1 Pt 1):011903, July 2010. DOI: 10.1103/PhysRevE.82.011903
Source: PubMed

ABSTRACT

Neuronal activity arises from an interaction between ongoing firing generated spontaneously by neural circuits and responses driven by external stimuli. Using mean-field analysis, we ask how a neural network that intrinsically generates chaotic patterns of activity can remain sensitive to extrinsic input. We find that inputs not only drive network responses, but they also actively suppress ongoing activity, ultimately leading to a phase transition in which chaos is completely eliminated. The critical input intensity at the phase transition is a nonmonotonic function of stimulus frequency, revealing a "resonant" frequency at which the input is most effective at suppressing chaos even though the power spectrum of the spontaneous activity peaks at zero and falls exponentially. A prediction of our analysis is that the variance of neural responses should be most strongly suppressed at frequencies matching the range over which many sensory systems operate.
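
A quick way to see this mechanism numerically is sketched below. It assumes the standard random firing-rate model dx/dt = -x + J tanh(x) + I cos(omega t), with couplings J_ij drawn from N(0, g^2/N) and, for simplicity, a sinusoidal drive common to all units (the paper also considers spatially structured inputs); all parameter values are illustrative, not taken from the paper. The largest Lyapunov exponent is estimated by tracking two nearby trajectories:

```python
import numpy as np

# Minimal sketch, not the authors' code: a random firing-rate network
#   dx/dt = -x + J @ tanh(x) + I * cos(omega * t)
# with couplings J_ij ~ N(0, g^2/N) and, for simplicity, a sinusoidal
# drive common to all units. For g > 1 the undriven network is chaotic;
# a strong enough drive should pull the largest Lyapunov exponent
# below zero. All parameter values here are illustrative.

rng = np.random.default_rng(0)
N, g, dt, T = 200, 1.5, 0.05, 400.0           # illustrative parameters
J = rng.normal(0.0, g / np.sqrt(N), (N, N))

def lyapunov_max(I, omega, eps=1e-8):
    """Benettin-style estimate of the largest Lyapunov exponent."""
    x = rng.normal(0.0, 1.0, N)
    y = x + eps * np.ones(N) / np.sqrt(N)     # nearby companion trajectory
    total, steps = 0.0, int(T / dt)
    for k in range(steps):
        drive = I * np.cos(omega * k * dt)
        x = x + dt * (-x + J @ np.tanh(x) + drive)
        y = y + dt * (-y + J @ np.tanh(y) + drive)
        d = np.linalg.norm(y - x)
        total += np.log(d / eps)
        y = x + (eps / d) * (y - x)           # renormalize the separation
    return total / (steps * dt)

for I in (0.0, 0.5, 2.0):
    print(f"I = {I:.1f}: lambda_max ~ {lyapunov_max(I, omega=1.0):+.3f}")
```

For g > 1 the undriven exponent is positive (chaos); increasing the drive amplitude I should pull it below zero, which is the phase transition described in the abstract.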

CITED BY
    • "Randomly connected, globally balanced networks of leaky integrate-and-fire (LIF) neurons exhibit stable background states (van Vreeswijk and Sompolinsky, 1996; Tsodyks et al., 1997; Brunel, 2000; Vogels et al., 2005; Renart et al., 2010) but cannot autonomously produce the substantial yet reliable, spatially patterned departure from background activity observed in the experiments. Networks with strong recurrent pathways can exhibit ongoing, complex rate fluctuations beyond the population mean (Sompolinsky et al., 1988; Sussillo and Abbott, 2009; Rajan et al., 2010; Litwin-Kumar and Doiron, 2012; Ostojic, 2014) but do not capture the transient nature of movementrelated activity. Moreover, such rate dynamics are chaotic, and sensitivity to noise seems improper in a situation in which the initial conditions dictate the subsequent evolution of the system. "
    ABSTRACT: Populations of neurons in motor cortex engage in complex transient dynamics of large amplitude during the execution of limb movements. Traditional network models with stochastically assigned synapses cannot reproduce this behavior. Here we introduce a class of cortical architectures with strong and random excitatory recurrence that is stabilized by intricate, fine-tuned inhibition, optimized from a control theory perspective. Such networks transiently amplify specific activity states and can be used to reliably execute multidimensional movement patterns. Similar to the experimental observations, these transients must be preceded by a steady-state initialization phase from which the network relaxes back into the background state by way of complex internal dynamics. In our networks, excitation and inhibition are as tightly balanced as recently reported in experiments across several brain areas, suggesting inhibitory control of complex excitatory recurrence as a generic organizational principle in cortex.
    Preview · Article · Jun 2014 · Neuron
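
The architecture in the entry above can be caricatured with a simple linear-stability check. The sketch below is an assumption-laden toy, not the paper's control-theoretic optimization: network size and weight scales are made up, and it only shows why strong random excitation is unstable on its own, so that inhibition must do the stabilizing.

```python
import numpy as np

# Illustrative stability check, not the paper's optimization procedure.
# For linearized rate dynamics dx/dt = -x + W @ x, stability requires
# the spectral abscissa max Re(eig(W)) < 1. Strong random excitation
# alone violates this; the cited work tunes inhibitory weights until the
# strongly excitatory circuit is stable ("stability-optimized" networks).

rng = np.random.default_rng(1)
N = 200                                       # illustrative network size
W = np.abs(rng.normal(0.0, 2.0 / np.sqrt(N), (N, N)))  # excitation only
print("excitation only   :", np.linalg.eigvals(W).real.max())

W[:, N // 2:] *= -1.0    # crude fix: flip half the columns to inhibitory
print("mean-balanced E/I :", np.linalg.eigvals(W).real.max())
# Mean balance removes the large unstable mode, but pushing the abscissa
# safely below 1 at strong coupling is what requires the fine-tuned,
# optimized inhibition described in the abstract.
```
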
    • "The network nevertheless shows roughly asynchronous spiking behavior and is a reasonable candidate model for rich spontaneous cortical dynamics. We implemented this spiking network in an effort to test the alternative theory for stimulus induced reduction in spike count variability proposed by Rajan et al. (2010). "
    ABSTRACT: A signature feature of cortical spike trains is their trial-to-trial variability. This variability is large in the spontaneous state and is reduced when cortex is driven by a stimulus or task. Models of recurrent cortical networks with unstructured, yet balanced, excitation and inhibition generate variability consistent with evoked conditions. However, these models produce spike trains which lack the long timescale fluctuations and large variability exhibited during spontaneous cortical dynamics. We propose that global network architectures which support a large number of stable states (attractor networks) allow balanced networks to capture key features of neural variability in both spontaneous and evoked conditions. We illustrate this using balanced spiking networks with clustered assembly, feedforward chain, and ring structures. By assuming that global network structure is related to stimulus preference, we show that signal correlations are related to the magnitude of correlations in the spontaneous state. Finally, we contrast the impact of stimulation on the trial-to-trial variability in attractor networks with that of strongly coupled spiking networks with chaotic firing rate instabilities, recently investigated by Ostojic (2014). We find that only attractor networks replicate an experimentally observed stimulus-induced quenching of trial-to-trial variability. In total, the comparison of the trial-variable dynamics of single neurons or neuron pairs during spontaneous and evoked activity can be a window into the global structure of balanced cortical networks.
    Full-text · Article · May 2014 · Frontiers in Computational Neuroscience
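
One concrete ingredient from the abstract above, the clustered ("assembly") architecture, is easy to sketch. The construction below is hypothetical; the sizes and connection probabilities p_in and p_out are illustrative, not the paper's values. Within-cluster connections are made denser than between-cluster ones, which is what gives a balanced network multiple semi-stable states.

```python
import numpy as np

# Hypothetical sketch of the "clustered assembly" ingredient: a binary
# connectivity matrix whose within-cluster connection probability p_in
# exceeds the between-cluster probability p_out. In a balanced spiking
# network, such clusters support multiple semi-stable states, producing
# the slow, large spontaneous variability that a stimulus can quench.

rng = np.random.default_rng(2)
N, n_clusters = 400, 8                        # illustrative sizes
p_in, p_out = 0.20, 0.05                      # illustrative probabilities
labels = np.repeat(np.arange(n_clusters), N // n_clusters)

same = labels[:, None] == labels[None, :]     # True within a cluster
C = rng.random((N, N)) < np.where(same, p_in, p_out)
np.fill_diagonal(C, False)                    # no self-connections

print("within-cluster density :", C[same].mean())
print("between-cluster density:", C[~same].mean())
```
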
    • "II. MEAN-FIELD THEORY FOR LEAKY ESN ON SPARSE REGULAR GRAPHS A. Mean-field equations From the seminal work [18], recently extended to the framework of stimulus driven RNN [20], [21], one can derive a self-constistent equation describing the statistical properties of the reservoir activity in the large n limit, which is known as the mean-field theory. In this section, we present a straightforward extension of [21] to leaky RNNs on regular graphs. "
    ABSTRACT: Echo State Networks are efficient time-series predictors whose performance depends strongly on the spectral radius of the reservoir connections. Based on recent results on the mean-field theory of driven random recurrent neural networks, which allow the computation of the largest Lyapunov exponent of an ESN, we develop a cheap algorithm to establish a local and operational version of the Echo State Property that guarantees good prediction performance. The spectral radius that tunes the network to the edge of chaos is specific to the input considered and is larger than 1.
    Preview · Article · Feb 2014 · Neural networks: the official journal of the International Neural Network Society
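
The "local and operational" Echo State Property mentioned above can be probed with a simple numerical test. The sketch below is an assumed setup, not the paper's mean-field algorithm: it drives two copies of a leaky reservoir with the same input from different initial states and checks whether their separation contracts, for several spectral radii rho.

```python
import numpy as np

# Sketch of an operational Echo State Property test under assumed
# parameters (not the paper's mean-field algorithm): run two copies of
# a leaky reservoir,
#   x <- (1 - a) * x + a * tanh(rho * W @ x + w_in * u_t),
# from different initial states with the SAME input and check that the
# state difference contracts by the end of the run.

rng = np.random.default_rng(3)
N, a, T = 300, 0.5, 2000                      # illustrative parameters
W = rng.normal(0.0, 1.0, (N, N))
W /= np.abs(np.linalg.eigvals(W)).max()       # rescale to unit spectral radius
w_in = rng.normal(0.0, 1.0, N)
u = np.sin(0.2 * np.arange(T))                # illustrative driving signal

def final_separation(rho):
    x, y = np.zeros(N), rng.normal(0.0, 1.0, N)
    for u_t in u:
        x = (1 - a) * x + a * np.tanh(rho * (W @ x) + w_in * u_t)
        y = (1 - a) * y + a * np.tanh(rho * (W @ y) + w_in * u_t)
    return np.linalg.norm(x - y)

for rho in (0.9, 1.1, 1.3):
    print(f"rho = {rho:.1f}: final separation = {final_separation(rho):.2e}")
```

In line with the abstract, whether contraction still holds at rho > 1 depends on the particular input used to drive the reservoir.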