Stimulus-Dependent Suppression of Chaos
in Recurrent Neural Networks
Lewis-Sigler Institute for Integrative Genomics
Icahn 262, Princeton University
Princeton NJ 08544 USA
Center for Neurobiology and Behavior
Department of Physiology and Cellular Biophysics
Columbia University College of Physicians and Surgeons
New York NY 10032-2695 USA
Racah Institute of Physics
Interdisciplinary Center for Neural Computation
Neuronal activity arises from an interaction between ongoing firing generated sponta-
neously by neural circuits and responses driven by external stimuli. Using mean-field
analysis, we ask how a neural network that intrinsically generates chaotic patterns of ac-
tivity can remain sensitive to extrinsic input. We find that inputs not only drive network
responses, they also actively suppress ongoing activity, ultimately leading to a phase tran-
sition in which chaos is completely eliminated. The critical input intensity at the phase
transition is a non-monotonic function of stimulus frequency, revealing a “resonant” fre-
quency at which the input is most effective at suppressing chaos even though the power
spectrum of the spontaneous activity peaks at zero and falls exponentially. A prediction of
our analysis is that the variance of neural responses should be most strongly suppressed at
frequencies matching the range over which many sensory systems operate.
arXiv:0912.3513v2 [q-bio.NC] 2 Aug 2010
Circuits of the central nervous system exhibit temporally irregular ongoing activity that is
not directly related to sensory or behavioral events. The fact that this spontaneous activity
is not suppressed by averaging over the large number of synaptic inputs to each neuron 
suggests that chaotic network dynamics may represent a substantial local source of fluctuat-
ing activity in cortical and subcortical circuits. Previous modeling studies have shown that
nonlinear random network models with strong recurrent excitatory and inhibitory connec-
tions generically exhibit chaotic dynamics [2,3,4]. In this work, we ask how intrinsically
generated fluctuating activity affects neuronal responses to external stimuli. The nonlin-
ear effects of oscillatory drive, including frequency dependence and phase locking, have
been well explored in low-dimensional chaotic dynamical systems (see e.g. [5,6,7,8,9]).
However, relatively few studies have explored entrainment of extended, high-dimensional
spatiotemporal chaotic systems by external forcing (see e.g. [10,11,12,13,14]). Here, we
explore the locking of large chaotic neuronal networks to external stimuli and study how it
depends on stimulus amplitude and frequency.
We study phenomenological firing-rate network models representing neurons in a localized
circuit that are coupled by relatively strong excitatory and inhibitory connections randomly
distributed in the network. Specifically, we consider a network of N interconnected neurons, each described by an activation variable x_i for i = 1, 2, ..., N, satisfying

$$\frac{dx_i}{dt} = -x_i + \sum_{j=1}^{N} J_{ij}\,\phi(x_j) + H_i, \qquad (1)$$

with φ(x_i), which is a saturating monotonic function of the total synaptic input x_i, representing a normalized firing rate relative to a fixed background rate, r_0. Here we choose

$$\phi(x) = \begin{cases} r_0 \tanh(x/r_0) & \text{for } x \le 0 \\ (2 - r_0)\tanh\!\big(x/(2 - r_0)\big) & \text{for } x > 0, \end{cases} \qquad (2)$$

so that the normalized firing rate varies from zero to 2. For r_0 = 1, we recover the often-used tanh function, but we use a smaller value of r_0 = 0.1, which is more biologically reasonable. The time variable in Eq. 1 is defined in units of the single-neuron time constant, τ_r = 10 ms. Each element of the network connectivity matrix J is chosen randomly and independently from a Gaussian distribution with zero mean and variance g²/N, where the gain g acts as the control parameter of the network. The external input term is set to H_i = I cos(ωt + θ_i), with the phase θ_i chosen randomly and independently for each neuron from a uniform distribution between 0 and 2π. This corresponds to situations in which the oscillatory input does not introduce global temporal phase coherence, which occurs, for example, for a population of neurons with a broad range of preferred spatiotemporal phases.
To characterize the activity of the network, we make extensive use of the autocorrelation function of each neuronal rate averaged across all the units of the network,

$$C(\tau) = \frac{1}{N}\sum_{i=1}^{N}\big\langle \phi(x_i(t))\,\phi(x_i(t+\tau))\big\rangle, \qquad (3)$$
Figure 1: Activity of typical network units (left column), average autocorrelation function (middle
column) and log-power spectrum (right column) for a network with N =1000, g=1.5 and r0=0.2.
a) With no input (I = 0), network activity is chaotic. b) In the presence of a weak input (I =
0.04,f = ω/2π = 4 Hz), an oscillatory response is superposed on chaotic fluctuations. c) For a
stronger input (I=0.2,f =4 Hz), the network response is periodic. d, e, f) Average autocorrelation
function and g, h, i) Logarithm of the power versus frequency for the network states corresponding
to panels a, b, and c.
where the angle brackets denote a time average. C(0) is related to the total variance in the
fluctuations of the firing rates of the network units, whereas C(τ) for non-zero τ provides
information about the temporal structure of network activity.
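In simulations, this average autocorrelation function can be estimated directly from the rates. A minimal sketch (our own helper; it assumes a time-by-neurons array of rates φ(x_i(t)) sampled at fixed intervals, with lags measured in time steps):

```python
import numpy as np

def average_autocorrelation(rates, max_lag):
    """Estimate C(tau) = (1/N) sum_i <phi_i(t) phi_i(t+tau)> from a
    (time x neurons) array of firing rates, for lags 0..max_lag."""
    T, N = rates.shape
    C = np.empty(max_lag + 1)
    for lag in range(max_lag + 1):
        # time average of the lagged product, then average over units
        C[lag] = np.mean(rates[:T - lag] * rates[lag:])
    return C
```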
Previous work has shown that, in the limit N → ∞ with no input (I = 0), this model displays only two types of activity: a trivial fixed point with all x_i = 0 when g < 1 and chaos
when g>1. The spontaneously chaotic state is characterized by highly irregular firing rates
(Fig. 1a), a decaying average autocorrelation function (Fig. 1d), and a continuous power
spectrum (Fig. 1g). Note that the fluctuations in Fig. 1a are considerably slower than the
10 ms time constant of the model. The associated average autocorrelation function decays
to zero as τ increases (Fig. 1d) implying that the temporal fluctuations of the spontaneous
activity are uncorrelated over large time intervals, a characteristic of the chaotic state. The
power spectrum decays from a peak at zero (Fig. 1g) and, although it is broad, the power
at high frequency is exponentially suppressed. Strong suppression of high-frequency fluc-
tuations is another characteristic of the chaotic state in these networks. By comparison, the
power spectrum of a non-chaotic network responding to a white-noise input falls off only
as a power law at high frequencies.
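Spectra like those in Fig. 1g-i can be obtained from simulated rate traces with a standard periodogram, averaged over units. This helper is our own sketch, not code from the paper:

```python
import numpy as np

def rate_power_spectrum(rates, dt):
    """Average periodogram of the unit firing rates.

    rates: (time x neurons) array; dt: sample interval in seconds.
    Returns frequencies (Hz) and the mean power across units."""
    T, N = rates.shape
    centered = rates - rates.mean(axis=0)              # remove each unit's mean rate
    spectra = np.abs(np.fft.rfft(centered, axis=0))**2 / T
    return np.fft.rfftfreq(T, dt), spectra.mean(axis=1)
```

Plotting the log of the returned power against frequency distinguishes the exponential high-frequency falloff of the chaotic state from a power-law tail.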
When this network is driven with a relatively weak sinusoidal input (Fig. 1b, e & h), the
single-neuron response consists of periodic activity induced by the input superposed on a
chaotic background (Fig. 1b). The average autocorrelation function for the network driven
by weak periodic input consequently reveals a mixture of periodic and chaotic activity (Fig.
1e). Periodic oscillations at the input frequency appear at large values of τ, but the variance
given by C(0) is larger than the height of the peaks in these oscillations. This indicates that
the total firing-rate variance is not completely accounted for by the oscillatory response
of the network to the external drive, the additional variance arising from residual chaotic
fluctuations. Similarly, the power spectrum shows a continuous component generated by
the residual chaos, a prominent peak at the frequency of the input, and peaks at harmonics
of the input frequency arising from network nonlinearities (Fig. 1h).
When the amplitude of the input is increased sufficiently, the single-neuron firing rates
oscillate at the input frequency in a perfectly periodic manner (Fig. 1c), yielding a periodic
autocorrelation function (Fig. 1f). C(0) now matches the height of the peaks in each of
its subsequent oscillations, meaning that the periodic component in C accounts for the
entire response variance quantified by C(0). All of the network power is focused at the
frequency of the input and its harmonics, also indicating a periodic response free of chaotic
interference (Fig. 1i).
To explore these results analytically and more systematically, we developed dynamic mean-field equations appropriate for large N. The mean-field theory is based on the observation that the total recurrent synaptic input onto each network neuron can be approximated as Gaussian noise. The temporal correlation of this noise is calculated self-consistently
from the average autocorrelation function of the network. We begin by writing x_i = x_i⁰ + x_i¹, where x_i⁰ is the steady-state solution to

$$\frac{dx_i^0}{dt} = -x_i^0 + I\cos(\omega t + \theta_i).$$

This implies that x_i⁰(t) = h cos(ωt + θ̃_i), where h = I/√(1 + ω²) and we have incorporated a frequency-dependent phase shift into the factor θ̃_i. Mean-field theory replaces the network interaction term in the equation for x_i¹ by a Gaussian random variable η_i, so that

$$\frac{dx_i^1}{dt} = -x_i^1 + \eta_i.$$

Averages over time and network units (denoted by square brackets), as in Eq. 3, are implemented by averaging over J, θ and η, an approximation valid for large N. Self-consistency requires that the moments of η match the moments of the network interaction that it represents. Thus, we set

$$[\eta_i(t)] = \Big[\sum_j J_{ij}\,\phi(x_j)\Big] = 0,$$

because [J_ij] = 0. Similarly, using the identity [J_il J_jk] = g² δ_ij δ_kl / N, we find that

$$[\eta_i(t)\,\eta_j(t+\tau)] = \Big[\sum_{l,k} J_{il} J_{jk}\,\phi(x_l(t))\,\phi(x_k(t+\tau))\Big] = \delta_{ij}\, g^2\, [\phi(x_k(t))\,\phi(x_k(t+\tau))] = \delta_{ij}\, g^2 C(\tau). \qquad (4)$$

Next, defining ∆(τ) = [x_i¹(t) x_i¹(t + τ)] and recalling that dx_i¹/dt = −x_i¹ + η_i, it follows that

$$\frac{d^2\Delta(\tau)}{d\tau^2} = \Delta(\tau) - g^2 C(\tau). \qquad (5)$$

The final step in the derivation of the mean-field equations is to note that because x¹(t) and x¹(t + τ) are driven by Gaussian noise, they are Gaussian random variables with moments [x¹(t)] = [x¹(t + τ)] = 0, [x¹(t)x¹(t)] = [x¹(t + τ)x¹(t + τ)] = ∆(0), and [x¹(t + τ)x¹(t)] = ∆(τ). To realize these constraints, we introduce three Gaussian random variables with zero mean and unit variance, z_i for i = 1, 2, 3, and write

$$x^1(t) = \sqrt{\Delta(0) - |\Delta(\tau)|}\; z_1 + \mathrm{sgn}(\Delta(\tau))\sqrt{|\Delta(\tau)|}\; z_3,$$
$$x^1(t+\tau) = \sqrt{\Delta(0) - |\Delta(\tau)|}\; z_2 + \sqrt{|\Delta(\tau)|}\; z_3.$$

C can then be computed by writing x = x⁰ + x¹ and integrating over these Gaussian variables,

$$C(\tau) = \int_0^{2\pi}\frac{d\theta}{2\pi}\int Dz_1\, Dz_2\, Dz_3\; \phi\big(h\cos\theta + x^1(t)\big)\,\phi\big(h\cos(\theta + \omega\tau) + x^1(t+\tau)\big), \qquad (6)$$

where Dz_i = dz_i exp(−z_i²/2)/√(2π) for i = 1, 2, 3 and θ = θ̃ + ωt. Eq. 6 determines C(τ) as a nonlinear function of ∆(τ). Substituting this expression into Eq. 5 provides a nonlinear differential equation for ∆(τ), with g, h, ω and ∆(0) as parameters.

Eq. 5 can be viewed as the equation of motion of a classical particle with coordinate ∆(τ) moving under the influence of a force that depends on C. This force is a function of the current position of the particle, ∆(τ) (as well as of its initial position ∆(0)), and it contains terms representing external forcing that are periodic in τ with period 2π/ω. For weak inputs and g greater than but close to 1, Eq. 5 reduces to an undamped forced Duffing oscillator, although we do not restrict our analysis to this limit.
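The averages over θ and z_1, z_2, z_3 in Eq. 6 can be evaluated by Monte Carlo sampling, which is convenient when iterating the mean-field equations numerically. A sketch under our own naming conventions (the nonlinearity φ and the values of ∆(0), ∆(τ), h, and ωτ are supplied by the caller):

```python
import numpy as np

def C_from_Delta(Delta_tau, Delta0, h, omega_tau, phi, n_samples=200_000, seed=1):
    """Monte Carlo estimate of Eq. 6: C(tau) as a function of Delta(tau),
    given Delta(0), the effective input amplitude h and the phase advance
    omega*tau.  The averages over theta and the three Gaussian variables
    z1, z2, z3 are done by sampling."""
    rng = np.random.default_rng(seed)
    z1, z2, z3 = rng.standard_normal((3, n_samples))
    theta = rng.uniform(0.0, 2.0 * np.pi, n_samples)

    a = np.sqrt(max(Delta0 - abs(Delta_tau), 0.0))   # independent component
    b = np.sqrt(abs(Delta_tau))                      # shared component
    x1_t = a * z1 + np.sign(Delta_tau) * b * z3
    x1_tau = a * z2 + b * z3
    return np.mean(phi(h * np.cos(theta) + x1_t) *
                   phi(h * np.cos(theta + omega_tau) + x1_tau))
```

As a sanity check, with φ the identity and h = 0 the construction gives C(τ) = ∆(τ), as required by the moment constraints above.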
The analogous mechanics problem has to be solved with the initial condition ∆′(0) = 0, which imposes a smoothness constraint on the correlation function. The initial value ∆(0) is fixed by requiring that ∆(0) ≥ ∆(τ). We solved Eq. 5 numerically using iterative methods to determine ∆(0), and found two types of solutions. The first is a solution in which
∆(τ) is a periodic function of τ with frequency ω, as in Fig. 1f. This solution, which rep-
resents a network state that is fully entrained by the oscillatory input, exists for all values
of I, ω and g. The second solution is characterized by ∆(τ) that decays for small τ and
oscillates for large τ, so that ∆(0) is larger than the peaks in the large-τ oscillations, as
in Fig. 1e. This solution, which corresponds to a non-periodic state only partially locked
to the oscillatory drive, only exists for I smaller than a critical value that depends on ω
and g. A linear perturbation analysis of the mean field theory shows that this non-periodic
Figure 2: Phase transition curves showing the critical input amplitude that divides regions of pe-
riodic and chaotic activity as a function of input frequency. a) Transition curves for r0= 0.2 and
g = 1.5 (dashed) or g = 1.8 (solid). The stars indicate parameter values used in Figs. 1b, e, h and
1 c, f, i. The inset traces show representative single-unit firing rates for the regions indicated. b)
A comparison of the transition curve computed by mean-field theory (open circles and line) and by
simulating a network (filled circles) for r0=1, g=2 and, for the simulation, N =10,000.
solution is stable throughout the regime where it exists. The periodic solution is unstable
in this regime and is stable outside it. The mean-field analysis also shows that the non-periodic solution corresponds to a state with “exponential” sensitivity to initial conditions (a positive Lyapunov exponent), i.e., a chaotic state.
The resulting phase diagram marks the transition between the periodic and non-periodic
states (Fig. 2). Surprisingly, the transition curves are non-monotonic functions of frequency
and reveal a “resonant” frequency at which it is easiest to entrain the chaotic network with
a periodic input (even though there is no peak in the power spectrum of the chaotic activity
at this frequency). This frequency is roughly twice the inverse time constant of the chaotic
fluctuations in the spontaneous state and for g not too much greater than 1, the correspond-
ing period can be an order of magnitude longer than the single-neuron time constant. Figs.
2 & 3b indicate that internally generated fluctuations are most easily suppressed by stimuli
oscillating in the few Hz range.
The phase transition curve shifts upward and to the right as g increases (Fig. 2a & b), indicating a higher resonant frequency as well as a larger critical input amplitude. This occurs because the chaotic activity for larger g has a higher amplitude, making it more difficult to suppress, and a smaller inverse correlation time, leading to a higher resonance frequency. The location of the phase transition computed by mean-field theory is in good agreement with simulation results for large networks (Fig. 2b).

Figure 3: Signal and noise amplitudes as a function of input amplitude and frequency. a) Definition of the signal and noise amplitudes, σ_osc and σ_chaos respectively, in terms of the mean-subtracted correlation function. b) Signal and noise amplitudes for f = 20 Hz, g = 1.5 and r_0 = 0.2 as a function of input amplitude. The transition from chaotic to non-chaotic regimes occurs at I = 0.44. c) Same as panel b, but with fixed input amplitude (I = 0.2) and varying input frequency. In the region between 3 and 7 Hz, responses of the network are free from chaotic noise. In b and c, open circles denote the signal amplitude and filled circles the noise amplitude.
To study the implications of the phase transition further, we divide network responses into signal and noise components by separating the full response variance into two terms, σ_osc² and σ_chaos². For this purpose, we subtract the square of the average value of φ from C(τ) and consider the mean-subtracted correlation function, C(τ) − [φ]². The signal amplitude, σ_osc, is the square root of the amplitude of the oscillatory part of this correlation function for large τ (Fig. 3a). The noise amplitude, σ_chaos, is the square root of the difference between the value of the mean-subtracted correlation function at τ = 0 and the peak of its oscillations (Fig. 3a). In the frequency domain, σ_osc² measures the total power in the network activity at the input frequency and its harmonics, whereas σ_chaos² measures the residual power.
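Given an estimate of C(τ) and the mean rate, σ_osc and σ_chaos as defined above can be read off numerically. The split below is our own sketch of that procedure (the tail fraction used to locate the large-τ oscillation peaks is an arbitrary choice):

```python
import numpy as np

def signal_noise_amplitudes(C, mean_rate, tail_fraction=0.5):
    """Split the variance C(0) - [phi]^2 into oscillatory ('signal') and
    chaotic ('noise') parts, following Fig. 3a.

    C: average autocorrelation at lags 0..L (lags should span several input
    periods); mean_rate: the time- and population-averaged rate [phi]."""
    Cm = C - mean_rate**2                 # mean-subtracted correlation function
    tail = Cm[int(len(Cm) * (1.0 - tail_fraction)):]
    peak = tail.max()                     # height of the large-tau oscillation peaks
    sigma_osc = np.sqrt(max(peak, 0.0))
    sigma_chaos = np.sqrt(max(Cm[0] - peak, 0.0))
    return sigma_osc, sigma_chaos
```

For a perfectly periodic response the tail peak equals Cm(0), so σ_chaos vanishes, reproducing the criterion used above to identify the fully entrained state.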
The signal amplitude increases linearly with the strength of the input (I) over the range
considered in Fig. 3b. The noise amplitude has a more complex nonlinear dependence,
reflecting the presence of the phase transition in Fig. 2 and duplicating the effect seen in
Fig. 1, in which a sufficiently strong input completely suppresses the chaotic component
of the response. An interesting feature to note is that there is no clear signature of this
chaotic-to-periodic transition in the signal amplitude. When plotted as a function of input
frequency for fixed I, the signal amplitude shows relatively weak frequency dependence
below about 4 Hz and then rolls off at higher frequencies (Fig. 3c). This is a result of
the low-pass filtering property of the network. The noise amplitude has a more interesting
dependence. Between 0 and 3 Hz, the noise amplitude drops steeply and vanishes for
frequencies between 3 and 7 Hz, rising again above 7 Hz. This double transition is a
consequence of the non-monotonicity of the phase transition curves in Fig. 2. As in Fig.
3b, there is no apparent indication of these transitions in the signal amplitude.
It has previously been noted that chaotic activity in neuronal networks can be suppressed by
either white-noise or constant input in discrete-time models. However, discrete-
time versions fail to capture the rich dynamics of the chaotic fluctuations and their effect
on responses to time-dependent inputs. Suppression of spatiotemporal chaos by periodic
forcing has also been reported [10,11,12], mostly through numerical simulations. In some
of these simulations, an optimal frequency for complete locking similar to Fig. 2 has been
observed. Our results show that such a resonance effect occurs even when the power
spectrum of the unforced chaotic fluctuations falls monotonically from zero frequency (Fig.
1). The networks we considered only describe the effects of fluctuations induced by local
interactions, whereas additional sources of variability carried by long-range connections or
by local sources of stochasticity are present in real neurons. Therefore, we predict that an
experimental plot of response variability versus stimulus frequency will follow a non-zero
U-shaped curve with a minimum in the several Hz range, rather than falling to zero as in Fig. 3c.
Variability in cortical responses is sometimes described by adding stochastic noise linearly
to a deterministic response [17,18]. Our results indicate that the interaction between intrin-
sically generated “noise” and responses to external drive is highly nonlinear. Near the onset
of chaos, complete noise suppression can be achieved with relatively low amplitude inputs,
weaker, for example, than the strength of the internal feedback. Thus, suppression of spontaneously generated “noise” in neural networks does not require stimuli so strong that they simply overwhelm fluctuations through saturation. A number of experiments indicate that
stimuli as well as attention can suppress firing-rate variability [19,20,21,22,23](but see
). Although other mechanisms for nonlinear suppression of neuronal variability have
been proposed [25,26,27,28,29,30], our analysis indicates that such suppression is a gen-
eral property of the interaction between internally generated dynamics and external drive
in a nonlinear network.
Spontaneous fluctuations in neural activity occur across a wide range of timescales, with
increasing variability over long time intervals and increasing power at low frequencies,
although resonances may appear [24,32]. In this work we have focused on firing-rate fluc-
tuations using smooth rate-based dynamics, not spiking dynamics. Spiking neuron models
with strong 'balanced' interactions can exhibit chaotic firing patterns [2,3], but the fluctu-
ations they produce have relatively flat power spectra associated with variability in short
interspike intervals. It will be interesting to study stimulus effects in spiking network mod-
els that exhibit slow irregular modulations of firing rates.
In our model, weak correlations (of the order of 1/√N) in activity fluctuations exist be-
tween all pairs of neurons. These correlations are distributed evenly between negative and
positive values across the population. Slow spontaneous rate fluctuations in the cortex are
often associated with long-range spatial correlations, especially in anesthetized animals
[33,34]. As in our model, the observed spatial correlations are weaker than the firing rate
autocorrelations. In some cases, both negative and positive rate fluctuations are also observed, such that the mean value of the pairwise correlations across a population is much
smaller than the width of the distribution of correlations [35,36,37]. However, the extent
of the contribution of local network dynamics to the observed low frequency correlations
is unclear [22,34].
Neuronal selectivity to stimulus features is typically studied by determining how the mean
response across experimental trials depends on various stimulus parameters. The presence
of nonlinear interactions between stimulus-evoked and spontaneous fluctuating activity in-
dicates that response components that are not locked to the temporal modulation of the
stimulus may also be sensitive to stimulus parameters. In general, our results suggest that
experiments studying the stimulus-dependence of the noise component of neural responses
could provide important insights into the nature and origin of activity fluctuations in neu-
ronal circuits, as well as their role in neuronal information processing.
KR and LA supported by National Science Foundation grant IBN-0235463 and an NIH
Director’s Pioneer Award (5-DP1-OD114-02), part of the NIH Roadmap for Medical Re-
search. HS supported by grants from the Israel Science Foundation and McDonnell Foun-
dation. This research was also supported by the Swartz Foundation through the Swartz
Centers at Columbia and Harvard. KR’s current address is Carl Icahn Laboratories, Lewis
Sigler Institute for Integrative Genomics, Princeton University, Princeton NJ.
1) W.R. Softky and C. Koch, Neural Comput. 4, 643-646 (1992).
2) H. Sompolinsky, A. Crisanti and H.J. Sommers, Phys. Rev. Lett. 61, 259-262 (1988).
3) C. van Vreeswijk C and H. Sompolinsky, Science 274, 1724-1726 (1996).
4) N. Brunel, J. Physiol. Paris 94, 445-463 (2000).
5) M. Franz and M. Zhang, Phys. Rev. E 52, 3558-3565 (1995).
6) I.Z. Kiss and J.L. Hudson, Phys. Rev. E 64, 046215 (2001).
7) A.S. Pikovsky, M.G. Rosenblum, G.V. Osipov, and J. Kurths, Physica D: Nonlinear Phe-
nomena 104, 219-238 (1997).
8) R. Brown and L. Kocarev, Chaos 10, 344-349 (2000).
9) E. Schöll and H.G. Schuster (Eds.), Handbook of Chaos Control, Wiley-VCH (2007).
10) H. Sakaguchi and T. Fujimoto, Phys. Rev. E 67, 067202-1:3 (2003).
11) A.T. Stamp, G.V. Osipov and J.J. Collins, Chaos 12, 931-940 (2002).
12) S. Wu, K. He and Z. Huang, Phys. Lett. A 260, 345-351 (1999).
13) L. Molgedey, J. Schuchhardt and H.G. Schuster, Phys. Rev. Lett. 69, 3717-3719 (1992).
14) N. Bertschinger and T. Natschläger, Neural Comput. 16, 1413-1436 (2004).
15) The tanh function has the disadvantage of having the “resting” rate φ(0) halfway be-
tween the minimum and maximum rates. This generalization allows us to adjust the
value of φ(0) to be closer to the minimum of this range, while retaining the desirable
feature that the maximum of the derivative of φ is at x=0.
16) The connectivity pattern in our model does not obey the restriction of cortical neurons
to excitatory and inhibitory subtypes (see K. Rajan and L.F. Abbott, Phys. Rev. Lett. 97,
188104 (2006) for a theoretical treatment of this problem in the linear regime). More
theoretical work is needed to establish a detailed account of the nonlinear interactions
between stimulus features and ongoing fluctuations in such networks.
17) A. Arieli, A. Sterkin, A. Grinvald and A. Aertsen, Science 273, 1868-1871 (1996).
18) J.S. Anderson, I. Lampl, D.C. Gillespie and D. Ferster, J. Neurosci. 21, 2104-2112
19) G. Werner and V.B. Mountcastle, J. Neurophysiol. 26, 958-977 (1963).
20) M.M. Churchland, B.M. Yu, S.I. Ryu, G. Santhanam and K.V. Shenoy, J. Neurosci. 26,
21) I.M. Finn, N.J. Priebe and D. Ferster, Neuron 54, 137-152 (2007).
22) J.F. Mitchell, K.A. Sundberg and J.J. Reynolds, Neuron 55, 131-41 (2007).
23) M.M. Churchland et al., Nature Neurosci. 13, 369-378 (2010).
24) J.A. Henrie and R. Shapley, J. Neurophysiol. 94, 479-490 (2005).
25) P. Kara, P. Reinagel and R.C. Reid, Neuron 27, 635-646 (2000).
26) M. Carandini, PLoS Biol. 2(9), e264 (2004).
27) P.E. Latham, B.J. Richmond, P.G. Nelson and S. Nirenberg, J. Neurophysiol. 83, 808-
28) J. Anderson, I. Lampl, I. Reichova, M. Carandini and D. Ferster, Nat. Neurosci. 3, 617-
29) C.C.H. Petersen, T.T.G Hahn, M. Mehta, A. Grinvald and B. Sakmann, Proc. Natl.
Acad. Sci. USA 100, 13638-13643 (2003).
30) B. Haider, A. Duque, A.R. Hasenstaub, Y. Yu and D.A. McCormick, J. Neurophysiol.
97, 4186-4202 (2007).
31) M.V. Teich, IEEE Trans. of BioMed. Eng. 36, 150-160 (1989).
32) W. Sun and Y. Dan, Proc. Natl. Acad. Sci. (USA) 106, 17986-17991 (2009).
33) M.A. Smith and A. Kohn, J. Neurosci. 28, 12591-12603 (2008).
34) I. Nauhaus, L. Busse, M. Carandini and D.L. Ringach, Nature Neurosci. 12, 70-76
35) E.M. Maynard, N.G. Hatsopoulos, C.L. Ojakangas, B.D. Acuna, J.N. Sanes, R.A.
Normann and J. P. Donoghue, J Neurosci. 19, 8083-8093 (1999).
36) A.S. Ecker, P. Berens, G.A. Keliris, M. Bethge, N.K. Logothetis and A.S. Tolias, Sci-
ence 327, 584-587 (2010).
37) A. Renart, J. de la Rocha, P. Bartho, L. Hollender, N. Parga, A. Reyes and K.D. Harris,
Science 327, 587-590 (2010).