State-dependent effects of electrical stimulation on
populations of excitatory and inhibitory neurons
Caglar Cakan1,2,* and Klaus Obermayer1,2
1Department of Software Engineering and Theoretical Computer Science, Technische Universität Berlin, Germany
2Bernstein Center for Computational Neuroscience Berlin, Germany
*cakan@ni.tu-berlin.de
ABSTRACT
Electrical stimulation of neural populations is a key tool for understanding neural dynamics and developing treatments. To
investigate the effects of external stimulation on a basic cortical motif, we analyse the dynamical properties of an efficient
mean-field neural mass model of excitatory and inhibitory adaptive exponential integrate-and-fire (AdEx) neurons and validate
the results using detailed network simulations. The state space of the mean-field model and of the detailed network are
closely related. They reveal asynchronous up and down-states, bistable regions, and oscillatory regions corresponding to fast
excitation-inhibition and slow excitation-adaptation feedback loops. Within this dynamical landscape, external stimuli can cause
state transitions, such as turning on and off oscillations. Oscillatory input can frequency-entrain and phase-lock endogenous
oscillations. The effects of external stimulation are well-predicted by the mean-field model, further underpinning the utility of
low-dimensional neural mass models.
Introduction
A paradigm which has proven successful in the physical sciences is to systematically perturb a system in order to uncover
its dynamical properties. This approach has also worked well across the different scales at which neural systems are studied. Mapping
input responses experimentally has been key to uncovering the dynamical repertoire of single neurons1,2 and of large neural
populations such as cortical slice preparations3. It has been repeatedly shown that non-invasive brain stimulation can modulate
oscillations of ongoing brain activity4,5 and brain function6,7, and has enabled new ways of treating clinical disorders
such as epilepsy8. Moreover, electrical input to neural populations can also originate from the active neural tissue itself, causing
endogenous (intrinsic) extracellular electric fields which modulate neural activity10,11.
However, a complete understanding of how electrical stimulation affects large networks of neurons remains elusive. For
this reason, we would like to provide a framework and tools for studying the interaction of time-varying electric inputs with the
dynamics of neural populations. Unlike in vivo and in vitro experimental setups, in silico models of electrical stimulation offer
the possibility to study a wide range of neuronal and stimulation parameters and might help to interpret experimental results.
For analytical tractability, theoretical research on the effects of stimulation on neural populations has relied on
mean-field methods to derive low-dimensional neural mass models16–19. Instead of simulating a large number of neurons individually, these
models aim to approximate the population dynamics of a large number of interconnected neurons by means of dimensionality
reduction. At the cost of disregarding the dynamics of individual neurons, such models make statistical assumptions about
large networks of neurons and can therefore approximate macroscopic behavior, such as the mean firing rate of a population.
Analysis of the state space of mean-field models has helped to characterize the dynamical states of coupled neuronal
populations20,21. Due to their computational efficiency, mean-field neural mass models are typically used in whole-brain
network models22,23, where they represent individual brain areas. This has made it possible to study the effects of external electrical
perturbation of the ongoing activity of the human brain on a system level24,25.
Naturally, neural population models have to strike a balance between analytical tractability, computational cost, and
biophysical realism. Thus, relating results from mean-field models to networks of biophysically more realistic spiking neurons
is challenging. In order to bridge this gap, we study a mean-field population model based on a linear-nonlinear cascade26,27 of a
large network of spiking adaptive exponential (AdEx) integrate-and-fire neurons28. The AdEx model neuron in particular quite
successfully reproduces the sub- and supra-threshold voltage traces of single pyramidal neurons found in cerebral cortex29,30
while offering the advantage of having interpretable biophysical parameters. In our mean-field approximation, the parameters
that define its state space are directly related to the biophysical parameters of a network of AdEx neurons.
In the following, we consider a classical motif of two delay-coupled populations of excitatory and inhibitory neurons
that represents a cortical mass (Fig. 1). We explore the rich dynamical landscape of this generic setup and investigate the
effects of slow somatic adaptation currents on the population dynamics. We then apply time-varying electrical input currents to
arXiv:1906.00676v1 [q-bio.NC] 3 Jun 2019
Figure 1. Schematic of the cortical motif. Coupled populations of excitatory (red) and inhibitory (blue) neurons. (a) Mean-field neural mass model with
axonal feedforward and feedback connections. Each node represents a population. (b) Schematic of the corresponding spiking AdEx neuron network with
connections between and within both populations. Both populations receive independent input currents with a mean µext_α and a standard deviation σext_α across
all neurons of population α ∈ {E, I}.
the excitatory population and observe frequency- and amplitude-dependent effects of the interactions between stimulus and
endogenous oscillations of the system. We estimate the equivalent extracellular electric field amplitudes corresponding to these
effects using previous results31 of a spatially extended neuron model with a passive dendritic cable.
Predictions from mean-field theory are validated using simulations of large spiking neural networks. A close relationship
of the mean-field model to the ground-truth model is established, proving its practical and theoretical utility. The mean-field
model retains all dynamical states of a large network of individual neurons and predicts the interaction of the system with
external electrical stimulation to a remarkable degree. We believe that our results may help to understand the rich and plentiful
observations in real neural systems subject to external stimulation and may provide a useful tool for predicting the effects of
external stimuli on populations of neurons.
Results
Network of excitatory and inhibitory neurons
Here we study a cortical mass which consists of two populations of excitatory (E) and inhibitory (I) adaptive exponential
integrate-and-fire (AdEx) neurons (Fig. 1). Both populations are delay-coupled, and the excitatory population has a somatic
adaptation feedback mechanism. We consider a low-dimensional, and therefore computationally efficient, mean-field neural mass
model (Fig. 1a) and a large network of spiking AdEx neurons (Fig. 1b) from which the mean-field model was derived.
For the construction of the mean-field model, a set of conditions needs to be fulfilled: we assume the number of neurons to
be very large, all neurons within a population to have equal properties, and the connectivity between neurons to be sparse and
random. Additional assumptions about the mathematical nature of the model and a detailed derivation of the mean-field equations are presented
in the Methods section.
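As a concrete reference for the network's building block, the sketch below integrates a single AdEx neuron with a forward-Euler scheme, following the standard formulation (exponential spike initiation plus an adaptation current w with subthreshold coupling a and spike-triggered increment b). The parameter values are generic illustrations, not the values from Table 1:

```python
import numpy as np

def simulate_adex(I_ext, dt=0.1, T=500.0,
                  C=200.0, gL=10.0, EL=-65.0, VT=-50.0, dT=2.0,
                  a=4.0, b=40.0, tau_w=200.0, Vr=-70.0, Vcut=-30.0):
    """Euler integration of one AdEx neuron (illustrative parameters).
    Units: pF, nS, mV, ms, pA. Returns spike times and the voltage trace."""
    n = int(T / dt)
    V, w = EL, 0.0
    spikes, trace = [], np.empty(n)
    for i in range(n):
        # membrane equation with exponential spike-initiation term
        dV = (-gL * (V - EL) + gL * dT * np.exp((V - VT) / dT) - w + I_ext) / C
        # adaptation current: subthreshold coupling a, slow decay tau_w
        dw = (a * (V - EL) - w) / tau_w
        V += dt * dV
        w += dt * dw
        if V >= Vcut:              # spike: reset voltage, increment adaptation
            spikes.append(i * dt)
            V = Vr
            w += b
        trace[i] = V
    return np.array(spikes), trace
```

A constant suprathreshold input (e.g. 500 pA with these values) produces tonic spiking with spike-frequency adaptation, while zero input leaves the neuron resting near EL.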
Bifurcation diagrams: attractors and the dynamical landscape
The E-I motif shown in Fig. 1 can occupy various network states, depending on the baseline inputs to both populations.
By gradually changing the inputs, we map out the state space of the system, depicted in the bifurcation diagrams in Fig. 2.
In nonlinear dynamics, small changes of a parameter can cause sudden and dramatic changes of a system's overall behavior,
called bifurcations. Bifurcations separate the state space into distinct regions of dynamical network states, between which the
system can transition. In our case, the dynamical state of the E-I system depends on the external inputs to both
subpopulations, which are directly affected by external electrical stimulation and other driving sources, e.g. inputs from other
neural populations such as connected brain regions.
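The sweep procedure described above can be sketched in a few lines. The snippet scans a grid of mean inputs (µE, µI) and labels the attractor of each run as a down-state, up-state, or oscillation from the tail of the excitatory trajectory. Note that it uses a generic Wilson-Cowan-type rate model with illustrative weights as a stand-in for the AdEx mean-field model, so the resulting diagram is only qualitative:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def simulate_ei(mu_E, mu_I, T=200.0, dt=0.05,
                w_ee=12.0, w_ei=10.0, w_ie=10.0, w_ii=1.0,
                tau_E=1.0, tau_I=2.0):
    """Toy E-I rate model; returns the tail of the E trajectory."""
    n = int(T / dt)
    E, I = 0.0, 0.0
    trace = np.empty(n)
    for k in range(n):
        dE = (-E + sigmoid(w_ee * E - w_ei * I + mu_E)) / tau_E
        dI = (-I + sigmoid(w_ie * E - w_ii * I + mu_I)) / tau_I
        E += dt * dE
        I += dt * dI
        trace[k] = E
    return trace[n // 2:]          # discard the transient

def classify(tail, osc_threshold=0.02):
    """Label the attractor from the steady-state E trace."""
    amp = tail.max() - tail.min()
    if amp > osc_threshold:
        return "oscillatory"
    return "up" if tail.mean() > 0.5 else "down"

# sweep the mean inputs to E and I, as in the bifurcation diagrams
mus = np.linspace(-6, 6, 7)
labels = [[classify(simulate_ei(mE, mI)) for mE in mus] for mI in mus]
```

Plotting the label (or the maximum rate) over the (µE, µI) grid yields a coarse analogue of the diagrams in Fig. 2; detecting bistability additionally requires repeating each run from a high-activity initial condition.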
Comparing the bifurcation diagrams of the mean-field model (Figs. 2a, c) to the ground truth spiking AdEx neuron
network on which it is based (Figs. 2b, d) demonstrates the similarity between the dynamical landscapes. Transitions between
dynamical states take place at comparable baseline input values and in a well-preserved order. Since the space of possible
neuronal parameter configurations is vast, we focus on two variants of the model: one without a somatic adaptation mechanism,
Figs. 2a and b, and one with finite sub-threshold and spike-triggered adaptation in Figs. 2c and d. Both variants feature distinct
states and dynamics.
No adaptation
Figures 2a and b show the bifurcation diagrams of the E-I system without somatic adaptation. There are two stable fixed-point
solutions of the system with a constant firing rate: a low-activity down-state and a high-activity up-state. These macroscopic
Figure 2. Bifurcation diagrams and time series. Bifurcation diagrams a-d depict the state space of the E-I system in terms of the mean external input
currents C·µext_α to both subpopulations α ∈ {E, I}. (a) Bifurcation diagram of the mean-field model without adaptation with up and down-states, a bistable region
bi (green dashed contour) and an oscillatory region LCEI (white solid contour). (b) Diagram of the corresponding AdEx network. (c) Mean-field model with
somatic adaptation. The bistable region is replaced by a slow oscillatory region LCaE. (d) Diagram of the corresponding AdEx network. The color in panels a-d
indicates the maximum population rate of the excitatory population (clipped at 80 Hz). (e) Example time series of the population rates of excitatory (red) and
inhibitory (blue) populations at point A2 (top row), which is located in the fast excitatory-inhibitory limit cycle LCEI, and at point B3 (bottom row), which is
located in the slow limit cycle LCaE. (f) Time series at corresponding points for the AdEx network. All parameters are listed in Table 1. The mean input
currents at the points of interest A1-A3 and B1-B4 are provided in Table 2.
Figure 3. Transition from multistability to slow oscillation is caused by somatic adaptation. Bifurcation diagrams depending on the external input
current C·µext_α to both populations α ∈ {E, I} for varying somatic adaptation parameters a and b. Color indicates the maximum rate of the excitatory population.
Oscillatory regions have a white contour, bistable regions have a green dashed contour. (a) Bifurcation diagrams of the mean-field model. On the diagonal
(bright-colored diagrams), adaptation parameters coincide with (b). (b) Bifurcation diagrams of the corresponding AdEx network. Parameters are listed in
Table 1.
states correspond to asynchronous firing activity on a microscopic level16. In accordance with previous studies32,33, at larger
mean background input currents, there is a bistable region in which the up-state and the down-state coexist. At smaller mean
input values, the recurrent coupling of excitatory and inhibitory neurons gives rise to an oscillatory limit cycle LCEI with an
alternating activity of the two populations. Example time series of the population rates of E and I inside the limit cycle are
shown in Figs. 2e and f (top row). The frequency inside the oscillatory region depends on the inputs to both populations and
ranges from 8 Hz to 29 Hz in the mean-field model and from 4 Hz to 44 Hz in the AdEx network for the parameters given (see
Supplementary Fig. 11).
All macroscopic network states of the AdEx network are represented in the mean-field model. The bifurcation line that
marks the transition from the down-state to LCEI appears at a similar location in the state space in both the mean-field and the
spiking network model, close to the diagonal at which the mean inputs to E and I are equal. However, the shape and width
of the oscillatory region, as well as the amplitudes and frequencies of the oscillations, differ. In Figs. 2e and f (top row), the
differences are due to the location of the selected points in the bifurcation diagrams, which were not chosen to precisely
match each other in amplitude or frequency but rather to lie at approximately the same location in the state space. Overall,
the AdEx network exhibits larger amplitudes across the oscillatory regime, and its excitatory amplitudes are larger than the
inhibitory amplitudes (Supplementary Fig. 9). Another notable difference is the small overlap of the bistable region with the
oscillatory region LCEI in the mean-field model (Fig. 2a), which is not observed in the AdEx network.
With adaptation
In Figs. 2c and d, bifurcation diagrams of the system with somatic adaptation are shown. Compared to Figs. 2a and b (without
adaptation), the state space, including the oscillatory region LCEI, is shifted to the right, meaning that larger excitatory input
currents are necessary to compensate for the inhibiting sub-threshold adaptation currents. The main effect that is caused by
Figure 4. Population response to time-varying input current is state-dependent. Population rates of the excitatory population (black) with an additional
external electrical stimulus (red) applied to the excitatory population. (a, b) A DC step input with amplitude 60 pA (equivalent E-field amplitude: 12 V/m)
pushes the system from the low-activity fixed point into the fast limit cycle LCEI. (c, d) A step input with amplitude 40 pA (8 V/m) pushes the system from
LCEI into the up-state. (e, f) In the multistable region bi, a step input with amplitude 100 pA (20 V/m) pushes the system from the down-state into the up-state
and back. (g, h) Inside the slow oscillatory region LCaE, an oscillating input current with amplitude 40 pA and a (frequency-matched) frequency of 3 Hz
phase-locks the ongoing oscillation. (i, j) A slow 4 Hz oscillatory input with amplitude 40 pA drives oscillations if the system is close to the oscillatory region
LCaE. All parameters are given in Table 1; the parameters of the points of interest are given in Table 2.
adaptation is the appearance of a slow oscillatory region labeled LCaE in Figs. 2c and d. The reason for the emergence of
this oscillation is the destabilizing effect that the inhibiting adaptation currents have on the up-state inside the bistable region. As
the adaptation currents build up due to high firing rates within the population, the up-state "decays" and transitions to the
down-state. The resulting low population activity causes a decrease of the adaptation currents, which in turn allows the activity
to increase back into the up-state, resulting in a slow oscillation. These low-frequency oscillations range from 0.5 Hz to 5 Hz
for the parameters given.
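The cycle described here (high activity builds up a slow negative-feedback variable, the feedback destroys the high-activity state, and its decay restores that state) is the classic relaxation-oscillation mechanism of slow-fast systems. A minimal illustration, using the textbook FitzHugh-Nagumo equations rather than the paper's adaptive mean-field model:

```python
import numpy as np

def fitzhugh_nagumo(I=0.5, eps=0.08, a=0.7, b=0.8, dt=0.05, T=500.0):
    """Fast activity v with slow negative feedback w (adaptation-like).
    Classic parameter values place the system on a relaxation limit cycle."""
    n = int(T / dt)
    v, w = -1.0, 0.0
    trace = np.empty(n)
    for k in range(n):
        dv = v - v**3 / 3.0 - w + I          # fast variable ("activity")
        dw = eps * (v + a - b * w)           # slow feedback ("adaptation")
        v += dt * dv
        w += dt * dw
        trace[k] = v
    return trace

# tail of the trajectory: large-amplitude alternation between high and low states
tail = fitzhugh_nagumo()[-4000:]
```

The slow variable w plays the role of the adaptation current: it grows while v is high, eventually collapses the high state, and its decay during the low phase lets the activity recover, exactly the loop that produces the delta-range oscillations in LCaE.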
The bifurcation diagrams in Fig. 3 show how the emergence of the slow oscillation depends on the adaptation mechanism.
Increasing the subthreshold adaptation parameter a primarily shifts the state space to the right, whereas a larger spike-triggered
adaptation parameter b enlarges the oscillatory regions. Both parameters cause the bistable region to shrink until it is eventually
replaced by a slow oscillatory region LCaE. Again, the state space of the mean-field model (Fig. 3a) reflects the AdEx network (Fig. 3b)
accurately.
Time-varying external stimulation and electric field effects
Here we describe the effects of time-varying external stimulation and the interactions with ongoing oscillatory states. External
stimulation is implemented by coupling an electric input current to the excitatory population. This additional input current
may be a result of an externally applied electric field or synaptic input from other neural populations. For the cases without
adaptation, we can calculate an equivalent extracellular electric field strength that corresponds to the effects of an input current
(see Methods).
Since excitatory neurons in the neocortex are most susceptible to electric fields due to the presence of apical dendrites34,
only time-varying input to the excitatory population is studied. This choice is also motivated in the context of inter-areal brain
network models, where connections between brain areas are usually considered between excitatory subpopulations.
Given the multitude of possible states of the system, its response to external input critically depends on the dynamical
landscape around its current state. It is important to keep in mind that the bifurcation diagrams (Figs. 2 and 3) are valid only
for constant external input currents. However, they provide helpful insights about the dynamics of the non-stationary system,
assuming that the bifurcation diagrams do not change too much as we vary the input parameter µext_E(t) over time.
Figs. 4a-f show how a step current input pushes the system in and out of specific states of the E-I system. A positive step
Figure 5. Frequency entrainment of the E-I system's population activity in response to oscillatory external input. The color represents the
log-normalized power of the excitatory population's rate frequency spectrum, with high power in bright yellow and low power in dark purple. (a) Spectrum of
the mean-field model parameterized at point A2 with an ongoing oscillation frequency of f0 = 22 Hz (horizontal green dashed line) in response to a stimulus with
increasing frequency and an amplitude of 40 pA. An external electric field with a resonant stimulation frequency of f0 has an equivalent strength of 3 V/m.
The stimulus entrains the oscillation from 18 Hz to 26 Hz, represented by a dashed green diagonal line. At 27 Hz, the oscillation falls back to its original
frequency f0. At a stimulation frequency of 2 f0, the ongoing oscillation at f0 locks again to the stimulus in a smaller range from 43 Hz to 47 Hz. (b) AdEx
network with f0 = 30 Hz. Entrainment with an input current of 80 pA is effective from 27 Hz to 33 Hz. An electric field amplitude with frequency f0 corresponds
to 5 V/m. (c) Mean-field model with a stimulus amplitude of 200 pA. Green dashed lines mark the driving frequency fext and its first and second harmonics
f1H and f2H and subharmonics f1SH and f2SH. Entrainment is now effective at the lowest stimulation frequencies until, at 36 Hz, the oscillation falls back to a
frequency of 20 Hz. New diagonal lines appear due to interactions of the endogenous oscillation with the entrained harmonics and subharmonics. (d) AdEx
network with stimulation amplitude of 240 pA. All parameters are given in Tables 1 and 2.
current represents a movement in the positive direction of the µext_E axis in Fig. 2. Figs. 4a and b show input-driven transitions from
the low-activity down-state into the fast oscillatory limit cycle LCEI. Similar behavior can be observed in Figs. 4c and d, where
we push the system's state from LCEI into the up-state, effectively turning oscillations on and off with a step current.
The time it takes to transition between states is longer for the AdEx network.
Inside the bistable region, we can use the hysteresis effect to transition between the down-state and the up-state and vice
versa. After application of an initial push in the desired direction, the system remains in that state, reflecting the system’s
bistable nature.
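This switching protocol can be reproduced with a minimal bistable rate unit (a single sigmoidal population with self-excitation; the parameters below are hypothetical and unrelated to Table 1): a brief positive pulse moves it to the up-state, where it remains after the pulse ends, and a negative pulse moves it back down:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def run_with_pulses(pulses, dt=0.01, w=10.0, theta=5.0, tau=1.0):
    """Bistable rate unit dE/dt = (-E + S(w*E - theta + I(t))) / tau.
    `pulses` is a list of (duration, amplitude) input segments applied in order."""
    E = 0.0
    history = []
    for duration, amp in pulses:
        for _ in range(int(duration / dt)):
            E += dt * (-E + sigmoid(w * E - theta + amp)) / tau
            history.append(E)
    return np.array(history)

# down-state -> +pulse -> up-state persists at zero input -> -pulse -> down-state
trace = run_with_pulses([(20, 0.0), (20, 3.0), (20, 0.0), (20, -3.0), (20, 0.0)])
```

With these values the unit has coexisting low (~0.007) and high (~0.99) fixed points at zero input; each pulse temporarily removes one of them, and the state reached during the pulse persists after the input returns to zero, which is exactly the hysteresis used in Figs. 4e and f.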
When adaptation is turned on, a slow oscillatory input current can entrain the ongoing oscillation. In Figs. 4g and h, the
oscillation is initially out of phase with the external input but is quickly phase-locked. In Figs. 4i and j, with the system placed
close to the boundary of the slow oscillatory region LCaE, we show how an oscillatory input with a frequency similar to that of
the limit cycle periodically forces the system from the down-state into an oscillation.
Frequency entrainment with oscillatory input
Here, we study the frequency response of the E-I system in the fast oscillatory regime to an oscillatory input to the excitatory
population, with respect to the input's frequency and amplitude (Fig. 5). The unperturbed system is in the fast limit cycle LCEI
with its endogenous frequency f0.
The external stimulus with frequency fext entrains the ongoing oscillation in a range around f0, the resonant frequency of
the system. Here, the ongoing oscillation essentially follows the external drive and adjusts its frequency to it (Fig. 5a). A
second (narrower) range of frequency entrainment appears as fext approaches 2 f0, representing the ability of the input to entrain
oscillations at half of its frequency. Due to interference of the frequencies of the ongoing and external oscillations, the spectrum
has peaks at the difference of both frequencies, which appear as X-shaped patterns in the frequency diagrams. The AdEx
neuron network shows similar behavior (Fig. 5b), albeit the range of entrainment is smaller than in the mean-field model,
despite the stimulation amplitude being twice as large. For a stronger mean input current, the range of frequency entrainment is
widened considerably. In Fig. 5c, the input dominates the spectrum at very low frequencies. The peak of the spectrum reverts
back to approximately f0 if the external frequency fext is close to the first harmonic 2 f0 of the endogenous frequency. We see
multiple lines emerging in the frequency spectra that correspond to the harmonics and subharmonics of the external frequency
Figure 6. Phase locking of ongoing oscillations of the population activity via weak oscillatory external inputs. The left panels show heatmaps of (a)
the mean-field model and (b) the AdEx network for different stimulus frequencies and amplitudes. The color indicates the level of phase locking, represented
by dark areas for effective phase locking and bright yellow areas for no phase locking. Phase locking is measured by the standard deviation of the Kuramoto
order parameter, which is a measure of phase synchrony. White dashed lines indicate the strength of an equivalent external electric field in V/m that would
correspond to the oscillating input current. (c) Time series of four points indicated in (a) with the excitatory population's rate in black and the external input in
red (upper panel). In the lower panel, the Kuramoto order parameter R is shown, which measures the synchrony between the population rate and the external
input. Constant R represents effective phase locking (the phase difference between rate and input is constant); fluctuating R indicates dephasing of both signals,
hence no phase locking. (d) Corresponding time series of points in (b). Both models are parameterized to be at point A2 inside the fast oscillatory region LCEI.
All parameters are given in Table 1.
and its interaction with the endogenous frequency f0, creating a complex pattern in the diagrams. In the case of the AdEx
network in Fig. 5d, the interaction patterns of the entrained harmonics with the endogenous oscillation are similar, but the
endogenous oscillation is able to sustain itself better, visible as horizontal lines in the diagram. This is due to the overall weaker
susceptibility of the AdEx network to external input compared to the mean-field model.
Overall, there is a good qualitative agreement of the frequency spectra of both models, reflecting that interactions of external
input and ongoing oscillations are well-captured by the mean-field model.
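The way these spectrograms are read can be reproduced with any self-sustained oscillator. The sketch below drives a Stuart-Landau oscillator (a generic normal-form limit cycle standing in for LCEI, with an assumed natural frequency of f0 = 1 Hz) with a sinusoidal input and reads off the dominant spectral peak of the response: for drive frequencies near f0 the peak follows the drive (entrainment), while for distant drive frequencies it falls back near f0:

```python
import numpy as np

def driven_oscillator(f_drive, F=0.5, f0=1.0, mu=1.0, dt=0.001, T=200.0):
    """Forced Stuart-Landau oscillator dz/dt = (mu + i*w0 - |z|^2) z + F*exp(i*wd*t).
    Returns the dominant frequency of Re(z) in the second half of the run."""
    w0, wd = 2 * np.pi * f0, 2 * np.pi * f_drive
    n = int(T / dt)
    z = 1.0 + 0.0j
    x = np.empty(n)
    for k in range(n):
        z += dt * ((mu + 1j * w0 - abs(z) ** 2) * z
                   + F * np.exp(1j * wd * k * dt))
        x[k] = z.real
    tail = x[n // 2:]                      # discard the transient
    spec = np.abs(np.fft.rfft(tail))
    spec[0] = 0.0                          # ignore the DC component
    freqs = np.fft.rfftfreq(tail.size, dt)
    return freqs[spec.argmax()]

peak_near = driven_oscillator(1.05)   # drive close to f0: entrained to 1.05 Hz
peak_far = driven_oscillator(1.5)     # drive far from f0: falls back near f0
```

Scanning `f_drive` over a fine grid and stacking the full spectra as columns reproduces the diagonal entrainment tongue and the fall-back to the horizontal f0 line seen in Fig. 5, up to the idealizations of this toy oscillator.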
Phase locking with oscillatory input
In this section, we quantify the ability of an oscillating external input current to the excitatory population to synchronize an
ongoing oscillation to itself if both frequencies, the driver and the endogenous frequency, are close to each other (frequency
matching). An example time series of a stimulus entraining an ongoing slow oscillation is shown in Fig. 4h.
In Fig. 6, we quantify the ability of the input to force the ongoing oscillation into synchrony by measuring the time course
of the phase difference between the stimulus and the population rate. If phase locking is successful, the phase difference remains
constant. In Fig. 6a, the region of phase locking for an external input of frequency fext is centered around the endogenous
frequency f0 of the unperturbed system. Increasing the stimulus amplitude widens the range around f0 at which phase locking
is effective. An example time series of successful phase locking inside this region is shown in Fig. 6c. If the input is not able to
phase-lock the ongoing activity, a small difference between the driver frequency fext and f0 can cause a slow beating of the
activity with a frequency of roughly the difference |fext − f0|. Thus, a small frequency mismatch can produce a very slowly
oscillating activity (Fig. 6c at points 2-4). Figure 6d at point 2 shows the same drifting effect in the AdEx network. Due to
finite-size noise in the AdEx neuron network, an irregular switching between synchrony and asynchrony can be observed at
the edges of the phase locking region in Fig. 6d at point 3. Similar to the mean-field model, beating activity with very low
frequencies can be observed in the AdEx network as well; however, the frequency is less regular (Fig. 6d at point 4).
In the phase locking diagrams Figs. 6a and b, the equivalent external electric field strengths are shown. Weak field
amplitudes (0.5 V/m for the mean-field model, 1 V/m to 2 V/m for the AdEx network) are able to phase-lock the ongoing
oscillations if the frequencies are close to being equal.
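The phase-locking measure itself is straightforward to reproduce. Following the description above (standard deviation of the Kuramoto order parameter between two signals), the sketch below extracts instantaneous phases with a Hilbert transform and applies the measure to synthetic sinusoids; the sampling rate and test frequencies are arbitrary choices for illustration:

```python
import numpy as np
from scipy.signal import hilbert

def kuramoto_R(x1, x2, trim=0.1):
    """Time-resolved Kuramoto order parameter of two signals' Hilbert phases.
    R(t) = |exp(i*phi1) + exp(i*phi2)| / 2; constant R means phase locked."""
    p1 = np.angle(hilbert(x1))
    p2 = np.angle(hilbert(x2))
    R = np.abs(np.exp(1j * p1) + np.exp(1j * p2)) / 2.0
    k = int(trim * R.size)               # drop edge artifacts of the transform
    return R[k:-k]

fs, T = 1000.0, 10.0
t = np.arange(0, T, 1 / fs)
rate = np.sin(2 * np.pi * 10.0 * t + 0.5)   # "population rate" at 10 Hz
stim_lock = np.sin(2 * np.pi * 10.0 * t)    # frequency-matched stimulus
stim_drift = np.sin(2 * np.pi * 12.0 * t)   # mismatched stimulus: dephasing

std_locked = kuramoto_R(rate, stim_lock).std()   # near zero: locked
std_drift = kuramoto_R(rate, stim_drift).std()   # large: beating at |f1 - f2|
```

For a constant phase offset R stays at a fixed value, so its standard deviation is near zero; for mismatched frequencies R oscillates at the beat frequency, producing the large standard deviations that appear as bright (unlocked) regions in Figs. 6a and b.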
Discussion
In this paper we explored the dynamical properties of a classical motif of coupled excitatory and inhibitory (E-I) adaptive
exponential integrate-and-fire (AdEx) neurons and studied their response to external electrical stimulation. A focus was put on
the comparison of a mean-field neural mass model with a network model of spiking AdEx neurons from which it was derived
(Fig. 1). The mean-field model is a computationally efficient low-dimensional description of the AdEx network if all neurons
are equal, the number of neurons is large, and the connectivity is sparse and random. Under these conditions, the mean-field
model approximates the macroscopic behavior of the network, such as the population firing rate and the mean membrane
potential. The biophysical parameters of the AdEx network model are preserved in the mean-field description, enabling us to
compute realistic electric current and equivalent extracellular field strengths31 in various electrical stimulation scenarios.
Bifurcation diagrams (Fig. 2) provide a map of the possible dynamical states as a function of the external inputs to both
the excitatory and the inhibitory population. A comparison of the diagrams of the mean-field model to those of the corresponding AdEx
network model reveals a high degree of equivalence between the dynamical landscapes of both. Every dynamical state of the AdEx
network is represented in the mean-field model. This one-to-one mapping of the dynamical states allows for accurate
predictions of the network state using the low-dimensional mean-field model.
Without a somatic adaptation feedback mechanism, the population rate can occupy four distinct dynamical states: a
down-state with very weak activity, an up-state with constant high activity representing asynchronous firing of the neurons,
a bistable regime where down-state and up-state coexist, and an oscillatory state where the activity alternates between the
excitatory and the inhibitory population at a low gamma frequency.
The AdEx neuron model allows for the incorporation of a slow potassium-mediated adaptation current, typically found in
cortical pyramidal neurons35. Due to somatic adaptation, the up-state inside the bistable region loses its stability32 and transforms
into a second oscillatory regime caused by the slow feedback mechanism of the adaptation currents (Fig. 3). In this state, the
population activity oscillates at delta frequencies. This oscillatory region coexists with the fast excitatory-inhibitory oscillation.
Using the bifurcation diagrams (Fig. 2), we mapped out several points of interest that represent different network states.
The type of reaction to external stimulation depends on the current state of the system, as seen in the population time series
during stimulation in Fig. 4. Close to the edges of attractors, direct currents can cause bifurcations and trigger sudden changes of
the dynamics, such as transitions from a low-activity down-state to a state with oscillatory activity. Switching due to external
fields has been observed in vitro3 at field strengths of 6 V/m, similar to those in our simulations.
It is worth mentioning that other parameters, such as coupling strengths and adaptation parameters, can cause bifurcations as
well. Overall, the specific shape of the dynamical landscape depends on numerous parameters. However, parameter explorations
using the mean-field model indicated that the overall structure of the bifurcation diagrams presented in this paper is fairly
robust to changes of the coupling strengths (see Supplementary Figure 12) and therefore representative of this E-I system.
Inside oscillatory regions, alternating input causes phase locking and frequency entrainment. To study how frequency
entrainment depends on the frequency and amplitude of the stimulus, we analysed frequency spectrograms of the population
activity when subject to external oscillating stimuli with increasing frequencies (Fig. 5). The necessary field amplitudes
for this effect are on the order of endogenously generated fields in the in vivo brain [10,11] and are consistent with observations
in in vitro experiments at comparable field strengths [3,36]. In accordance with similar computational stimulation studies [15,37], we found
ranges of frequency entrainment around the natural frequency of the endogenous oscillation, which widen as the stimulus
amplitude increases. We also observed frequency entrainment with a stimulus frequency close to the higher harmonics
of the endogenous oscillation. This observation could be valuable for experimental conditions where it is impractical to use
stimulation frequencies close to the endogenous frequency.
If the stimulus frequency is close to the endogenous frequency (frequency matching), an oscillatory stimulus can force the
ongoing oscillation to synchronize its phase with the stimulus, called phase entrainment or phase locking. Very weak input
currents are able to phase lock ongoing oscillations, corresponding to external electric field strengths with amplitudes on the
order of 1 V/m (Fig. 6). These field strengths are comparable to fields generated by stimulation techniques such as transcranial
alternating current stimulation (tACS) [38], for which phase entrainment has been observed [13,39,40], and are weaker than endogenous fields.
We confirmed all observed input-dependent effects to be present in the mean-field as well as the AdEx network model.
However, in the parameter range considered here, the AdEx network consistently requires larger input amplitudes in order to
cause the same effect size as in the mean-field model (Figs. 5 and 6).
This is related to the fact that in the bifurcation diagrams in Fig. 2, the shape of the oscillatory region as well as the amplitudes
and frequencies of the oscillations differ (see Supplementary Figure 11). We suspect that the oscillatory states are where the
steady-state approximations used to construct the mean-field model break down due to the fast temporal dynamics in
this state. Hence, both models differ in the frequencies and amplitudes of the excitatory population within the oscillatory
regions. This also affects the parameter values at which the described bifurcations take place, resulting in a narrower limit cycle
region in the mean-field approximation.
Overall, our observations confirm that a sophisticated mean-field model of a neural mass is appropriate for studying
the macroscopic dynamics of large populations of biophysical neurons consisting of excitatory and inhibitory units. To our
knowledge, such a remarkable equivalence of the dynamical states between a mean-field neural mass model and its ground-truth
spiking network model has not been demonstrated before. Our analysis shows that mean-field models are useful for quickly
exploring the parameter space in order to predict parameters of the detailed network on which it is based. Since the rich
dynamical landscapes of both models are topologically closely related, we believe that it should be possible to reproduce a
variety of dynamical properties that are observed in large-scale network simulations with simplified neural population models.
This may help to understand the rich and plentiful observations in real neural systems when subject to stimulation with electric
currents or external fields and may provide a useful tool for predicting the effects of external stimuli on populations of neurons,
such as switching between bistable up- and down-states or phase and frequency entrainment of the population activity.
Bifurcations, as studied in dynamical systems theory, offer a plausible mechanism for how networks of neurons, as well
as the brain as a whole [41,42], can change their mode of operation due to time-dependent parameters or noise. Understanding the
dynamical landscape of real neural systems will be beneficial for developing stimulation techniques and protocols, represented
as trajectories in the dynamical landscape, that can be used to reach desirable states or inhibit pathological dynamics.
Because of the variety of possible dynamical regimes that arise from the basic E-I architecture, it is critical to
consider the dynamical state of the system in order to understand its response to external stimuli. Therefore, we expect that in
order to account for the numerous seemingly inconclusive experimental results from noninvasive brain stimulation studies [12],
in addition to the stimulus parameters [5], the response of a system to external stimuli has to be understood in the context of the
dynamical state of the unperturbed system [3,14,15].
Methods
Neural population setting
In order to derive the mean-field description of an AdEx network, we consider a large number N of neurons in each of
the two populations E and I. We assume (1) random connectivity (within and between populations), (2) sparse connectivity [43,44],
with each neuron having a large number of inputs K [45] with 1 ≪ K ≪ N, and (3) that each neuron's input can be approximated by
a Poisson spike train [46,47], where each incoming spike causes a small (c ≪ 1) and quasi-continuous change of the postsynaptic
potential (PSP) [48] (diffusion approximation).
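As a small illustration of assumption (3), the following sketch compares the pooled input of many Poisson afferents with its Gaussian diffusion approximation. The values of K, the rate, and the per-spike increment c are hypothetical illustration values, not taken from the model tables:

```python
import numpy as np

# Hypothetical illustration of the diffusion approximation: the summed
# input from K Poisson afferents firing at rate r is replaced by a
# Gaussian process with matched mean K*r*c*dt and variance K*r*c^2*dt.
rng = np.random.default_rng(0)
K, r, c, dt, steps = 800, 5.0, 0.01, 1e-3, 50_000   # r in Hz, dt in s

counts = rng.poisson(K * r * dt, size=steps)        # pooled spike counts per bin
input_poisson = c * counts                          # summed PSP increments

mean, std = c * K * r * dt, c * np.sqrt(K * r * dt)
input_gauss = mean + std * rng.standard_normal(steps)
```

For large K·r·dt the two processes have matching first and second moments, which is what the mean-field derivation relies on.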
The spiking neuron model
The adaptive exponential (AdEx) integrate-and-fire neuron model forms the basis for the derivation of the mean-field equations
as well as the spiking network simulations. Each population α ∈ {E, I} has N_α neurons, indexed by i ∈ [0, N_α].
The membrane voltage of neuron i in population α is governed by

C \frac{dV_i}{dt} = I_{\rm ion}(V_i) + I_i(t) + I_{i,\rm ext}(t), (1)

I_{\rm ion}(V) = -g_L (V - E_L) + g_L \Delta_T \exp\!\left(\frac{V - V_T}{\Delta_T}\right) - I_A(t). (2)
The first term of I_{\rm ion} describes the voltage-dependent leak current, the second term the nonlinear spike initiation mechanism, and
the last term the somatic adaptation current. I_{i,\rm ext}(t) = \mu_{\rm ext}(t) + \sigma_{\rm ext}\,\xi_i(t) is a noisy external input. It consists of a mean current
\mu_{\rm ext}(t), which is equal across all neurons of a population, and independent Gaussian fluctuations \xi_i(t) with standard deviation
\sigma_{\rm ext} (equal for all neurons of a population). For a neuron in population α, synaptic activity induces a postsynaptic
current I_i, which is a sum of excitatory and inhibitory contributions:

I_i(t) = C\,(J_{\alpha E}\, s_{i,\alpha E}(t) + J_{\alpha I}\, s_{i,\alpha I}(t)), (3)
with C being the membrane capacitance and J_{\alpha\beta} the coupling strength from population β to α, representing the maximum
current when all synapses are active. Its dynamics is given by
\frac{ds_{i,\alpha\beta}}{dt} = -\frac{s_{i,\alpha\beta}}{\tau_s} + \frac{c_{\alpha\beta}}{J_{\alpha\beta}}\,(1 - s_{i,\alpha\beta}) \sum_j G_{ij} \sum_k \delta(t - t_j^k - d_{\alpha\beta}). (4)
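A minimal Euler sketch of Eq. 4 for a single synaptic variable, using the paper's τ_{s,E} = 2 ms and a hypothetical per-spike increment c/J, shows how the saturation factor (1 − s) keeps s bounded in [0, 1):

```python
import numpy as np

# Euler integration of a single synaptic activity variable s (Eq. 4),
# driven by a pooled Poisson spike train. The increment c/J and the
# input spike probability are hypothetical illustration values.
rng = np.random.default_rng(1)
tau_s, dt = 2.0, 0.05        # ms
c_over_J = 0.125             # per-spike increment (hypothetical)
p_spike = 0.02               # spike probability per time bin

s, trace = 0.0, []
for _ in range(20_000):
    spike = rng.random() < p_spike
    # a delta spike is approximated as a pulse of height 1/dt in one bin
    s += dt * (-s / tau_s + c_over_J * (1.0 - s) * (spike / dt))
    trace.append(s)
trace = np.asarray(trace)
```

No matter how high the input rate, s never exceeds 1 because each increment is scaled by (1 − s).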
Figure 7. Precomputed quantities of the linear-nonlinear cascade model. (a) Nonlinear transfer function Φ for the mean population rate (Eq. 15). (b) Transfer function for the mean membrane voltage (Eq. 10). (c) Time constant τ_α of the linear filter that approximates the linear rate response function of an AdEx neuron (Eq. 7). The color scale represents the level of the input current variance σ_α across the population. All neuronal parameters are given in Table 1.
s_{i,\alpha\beta}(t) represents the fraction of active synapses from population β to α and is bounded between 0 and 1. G_{ij} is a (random)
binary connectivity matrix with constant row sum K_α that connects neurons j of population β to neurons i of population α.
With the constraint of a constant in-degree K_α of each unit, all neurons of population β project to neurons of population α with
a probability of p_{\alpha\beta} = K_\alpha / N_\beta, with α, β ∈ {E, I}. G_{ij} is generated independently for every simulation. The first term in Eq. 4
is an exponential decay of the synaptic activity, whereas the second term integrates all incoming spikes as long as s_{i,\alpha\beta} < 1 (i.e.,
some synapses are still available). The first sum runs over all afferent neurons j, and the second sum runs over all
incoming spikes k from neuron j, emitted at time t_j^k and arriving after a delay d_{\alpha\beta}. If s_{i,\alpha\beta} = 0, the amplitude of the postsynaptic current is
exactly C · c_{\alpha\beta}, which we set to values from physiological in vitro measurements [49] (see Table 1).
For neurons i of the excitatory population, the adaptation current I_{A,i}(t) is given by

\tau_A \frac{dI_{A,i}}{dt} = a\,(V_i - E_A) - I_{A,i}, (5)

with a representing the subthreshold adaptation and b the spike-triggered adaptation parameter. The inhibitory population doesn't
have an adaptation mechanism, which is equivalent to setting these parameters to 0. When the membrane voltage crosses the
spiking threshold, V_i \geq V_s, the voltage is reset, V_i = V_r, clamped for a refractory time T_{\rm ref}, and the spike-triggered adaptation
increment is added to the adaptation current, I_{A,i} \leftarrow I_{A,i} + b. All parameters are given in Table 1.
Finally, we define the mean firing rate of the neurons in population α as

r_\alpha(t) = \frac{1}{N_\alpha}\,\frac{1}{dt} \sum_{i=0}^{N_\alpha} \sum_k \int_t^{t+dt} \delta(t' - t_i^k)\, dt', (6)

which measures the number of spikes in a time window dt, set to the integration step size in our numerical simulations.
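In code, the estimator of Eq. 6 amounts to a spike-count histogram normalized by N·dt; the sketch below uses synthetic Poisson spike times and an arbitrary 8 Hz target rate for illustration:

```python
import numpy as np

# Population rate estimate (Eq. 6): spikes of N neurons are counted in
# bins of width dt and normalized by N*dt. Spike times are synthetic.
rng = np.random.default_rng(2)
N, T, dt, true_rate = 1000, 5.0, 0.001, 8.0     # neurons, s, s, Hz

n_spikes = rng.poisson(true_rate * T * N)       # total spike count of the population
spike_times = rng.uniform(0.0, T, n_spikes)

bins = np.arange(0.0, T + dt, dt)
counts, _ = np.histogram(spike_times, bins)
r = counts / (N * dt)                           # population rate in Hz per bin
```

Averaged over the whole run, r recovers the underlying rate; on short windows it fluctuates, which is why dt is tied to the integration step.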
The neural mass model
For a sparsely connected random network of AdEx neurons as defined by Eqs. 1-5, the distribution of membrane potentials
p(V) and the mean population firing rate r can be calculated using the Fokker-Planck equation in the thermodynamic limit
N → ∞ [16,50]. Determining the distribution involves solving a partial differential equation, which is computationally demanding.
A low-dimensional linear-nonlinear cascade model [27,51] can be used to capture the steady-state and transient dynamics of a
population in the form of a set of simple ODEs. Briefly, for a given mean membrane current \mu_\alpha with standard deviation \sigma_\alpha, the
mean of the membrane potentials \bar{V}_\alpha as well as the population firing rate r_\alpha in the steady state can be calculated from the
Fokker-Planck equation [52] and captured by a set of simple nonlinear transfer functions \Phi(\mu_\alpha, \sigma_\alpha) (shown in Fig. 7a and b).
The reproduction accuracy of the linear-nonlinear cascade model in the form of a simple ODE system for a single population
has been systematically reviewed in Ref. [26] and has proven to reproduce the dynamics of an AdEx network in a range of
different input regimes quite successfully, while offering a significant increase in computational efficiency.
Rate equations
The full set of the mean-valued equations of the neural mass model reads:

\tau_\alpha \frac{d\mu_\alpha}{dt} = \mu_\alpha^{\rm syn}(t) + \mu_\alpha^{\rm ext} - \mu_\alpha(t), (7)

\mu_\alpha^{\rm syn}(t) = J_{\alpha E}\,\bar{s}_{\alpha E}(t) + J_{\alpha I}\,\bar{s}_{\alpha I}(t), (8)

\sigma_\alpha^2(t) = \frac{2 J_{\alpha E}^2\, \sigma_{s,\alpha E}^2(t)\, \tau_{s,E}\, \tau_m}{(1 + r_{\alpha E}(t))\,\tau_m + \tau_{s,E}} + \frac{2 J_{\alpha I}^2\, \sigma_{s,\alpha I}^2(t)\, \tau_{s,I}\, \tau_m}{(1 + r_{\alpha I}(t))\,\tau_m + \tau_{s,I}} + \sigma_{\rm ext,\alpha}^2, (9)

\frac{d\bar{I}_A}{dt} = \tau_A^{-1}\left( a\,(\bar{V}_E(t) - E_A) - \bar{I}_A \right) + b\, r_E(t), (10)

\frac{d\bar{s}_{\alpha\beta}}{dt} = -\tau_{s,\beta}^{-1}\, \bar{s}_{\alpha\beta}(t) + (1 - \bar{s}_{\alpha\beta}(t))\, r_{\alpha\beta}(t), (11)

\frac{d\sigma_{s,\alpha\beta}^2}{dt} = (1 - \bar{s}_{\alpha\beta}(t))^2\, \rho_{\alpha\beta}(t) + \tau_{s,\beta}^{-2}\left( \rho_{\alpha\beta}(t) - 2\,\tau_{s,\beta}\,(\rho_{\alpha\beta}(t) + 1) \right) \sigma_{s,\alpha\beta}^2(t), (12)

for α, β ∈ {E, I}. All parameters are listed in Table 1. The mean r_{\alpha\beta} and the variance \rho_{\alpha\beta} of the effective input rate from
population β to α for a spike transmission delay d_{\alpha\beta} are given by

r_{\alpha\beta}(t) = \frac{c_{\alpha\beta}}{J_{\alpha\beta}}\, K_\beta\, r_\beta(t - d_\alpha), (13)

\rho_{\alpha\beta}(t) = \frac{c_{\alpha\beta}}{J_{\alpha\beta}}\, r_{\alpha\beta}(t). (14)

c_{\alpha\beta} describes the amplitude of the postsynaptic current caused by a single spike (at rest, s_{\alpha\beta} = 0) and J_{\alpha\beta} the maximum
membrane current generated when all synapses are active (at s_{\alpha\beta} = 1).
To account for the transient dynamics of the population in response to a change of the currents, \mu_\alpha can be integrated by convolving
the input with a linear response function. This function is well approximated by a decaying exponential [26,27,51] with a time
constant \tau_\alpha (shown in Fig. 7c). Thus, the convolution can simply be expressed as an ODE (Eq. 7).
Here, \mu_\alpha^{\rm syn}(t), as defined by Eq. 8, represents the mean current caused by synaptic activity and \mu_\alpha^{\rm ext} the currents caused
by external stimulation. The instantaneous population spike rate r_\alpha is determined using the precomputed nonlinear transfer
function

r_\alpha = \Phi(\mu_\alpha, \sigma_\alpha). (15)

The function \Phi is shown in Fig. 7a. It translates the mean \mu_\alpha as well as the standard deviation \sigma_\alpha (Eq. 9) of the membrane
currents into a population firing rate. Using an efficient numerical scheme [27,52], this function was previously computed [26,53]
from the steady-state firing rates of a population of AdEx neurons given a particular input mean and standard deviation. The
transfer function depends on the parameters and dynamics of the AdEx neuron. Equation 10 governs the evolution of the mean
adaptation current of the excitatory population. Equations 11 and 12 describe the mean and standard deviation of the fraction of
active synapses caused by incoming spikes from population β to population α.
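In practice, a precomputed transfer function like Φ(μ, σ) can be stored as a two-dimensional table and evaluated by interpolation at runtime. The sketch below uses a smooth sigmoid as a placeholder table; it is not the actual Fokker-Planck solution used in the paper:

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Sketch of a table-lookup transfer function r = Phi(mu, sigma).
# The table values are a synthetic placeholder (a smooth sigmoid),
# NOT the Fokker-Planck steady-state rates computed in the paper.
mu_grid = np.linspace(-5.0, 5.0, 101)      # mean input axis
sigma_grid = np.linspace(0.5, 5.0, 51)     # input standard deviation axis
MU, SIG = np.meshgrid(mu_grid, sigma_grid, indexing="ij")
phi_table = 100.0 / (1.0 + np.exp(-(MU - 1.0) / SIG))   # placeholder rates in Hz

phi = RegularGridInterpolator((mu_grid, sigma_grid), phi_table)

r = phi([[0.5, 1.5]])[0]   # evaluate Phi(mu=0.5, sigma=1.5)
```

A lookup with interpolation keeps each right-hand-side evaluation of the mean-field ODEs cheap, which is where the model's speed advantage over the spiking network comes from.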
Synaptic model
We derive ODE expressions for the population mean \bar{s}_{\alpha\beta} and variance \sigma_{s,\alpha\beta}^2 of the synaptic activity as presented in Ref. [53].
We rewrite the synaptic activity given by Eq. 4 of a neuron i from population α caused by inputs from population β with
α, β ∈ {E, I} in terms of a continuous input rate r_\beta (diffusion approximation), such that

\tau_{s,\beta} \frac{ds_{i,\alpha\beta}}{dt} = -s_i + \frac{c_{\alpha\beta}}{J_{\alpha\beta}}\,(1 - s_i) \left( K_\alpha\, r_\beta(t - d_\alpha) + \sqrt{K_\alpha\, r_\beta(t - d_\alpha)}\; \xi_i(t) \right), (16)

with K_\alpha = \sum_j G_{ij} being the constant in-degree of each neuron, r_\beta(t - d_\alpha) the incoming delayed mean spike rate from all
afferents of population β, and \xi_i(t) standardized Gaussian white noise. Given the diffusion approximation, the current I_i(t) of a neuron in population α due to
synaptic activity is

I_i(t) = \sum_{\beta \in \{E,I\}} C\, J_{\alpha\beta}\, s_{i,\alpha\beta}(t). (17)

We split the mean from the variance of Eq. 16 by first taking the mean over neurons. The mean synaptic activity
\bar{s}_{\alpha\beta} := \langle s_{i,\alpha\beta} \rangle_i of population α caused by input from population β is then given by Eq. 11. We obtain the differential equation for
the variance \sigma_{s,\alpha\beta}^2 of s_{i,\alpha\beta} in Eq. 12 by applying Itô's product rule to d(s_{i,\alpha\beta}^2) and taking its time derivative.
Input currents
In addition to the mean of the membrane currents (Eq. 7) in the population, we also keep track of their variance. Fig. 7 shows the population
firing rate and mean membrane potential for different levels of variance of the membrane currents. The adaptive
time constant in particular, which affects the temporal dynamics of the population's response, strongly depends on the variance of the input.
Without loss of generality, we derive the variance of the membrane currents caused by a single afferent population α and later
add up the contributions of the two coupled populations (excitatory and inhibitory). Assuming that every neuron receives a large
number of uncorrelated inputs (white noise approximation), we write the synaptic current I_{i,\alpha}(t) in terms of contributions to the
population mean and the variance [16,54-57]:

I_{i,\alpha}(t) = C\left( \mu_\alpha(t) + \sigma_\alpha(t)\, \xi_i(t) \right). (18)
In order to obtain the contribution of synaptic input to the mean and the variance of membrane currents, we (1) neglect the
exponential term of I_{\rm ion} in Eq. 2 and (2) assume that the membrane voltages are mostly subthreshold, such that we can neglect
the nonlinear reset condition. Numerical simulations have shown that these assumptions are justifiable in the parameter ranges
that we are concerned with [53]. We apply these simplifications only in this step of the derivation. The exponential term, the
neuronal parameters within it, and the reset condition still affect the precomputed functions (shown in Fig. 7) and thus the
overall population dynamics.
We substitute both approximations, Eq. 17 and Eq. 18, separately into the membrane voltage equation (Eq. 1) and apply the expectation
operator on both sides, which leads to two equations describing the evolution of the mean membrane potential. If we require
that both approximations yield the same mean potential \langle V_{i,\alpha} \rangle, we can easily see that \mu_\alpha^{\rm syn}(t) = J_{\alpha\alpha} \langle s_{i,\alpha\alpha}(t) \rangle. Using
Itô's product rule [58] on dV^2 and requiring that both approximations also result in the same evolution of the second
moment \langle V_\alpha^2 \rangle, we get

\sigma_\alpha^2(t) = 2 J_{\alpha\alpha} \left( \langle V_{i,\alpha}\, s_{i,\alpha\alpha} \rangle - \langle V_{i,\alpha} \rangle \langle s_{i,\alpha\alpha} \rangle \right). (19)
Taking the time derivative of Eq. 19 and substituting the time derivative of \langle V_\alpha s_{\alpha\alpha} \rangle by applying Itô's product rule to
d(V_\alpha s_{\alpha\alpha}), we obtain

\frac{d\sigma_\alpha^2}{dt} = 2 J_{\alpha\alpha}^2\, \sigma_{s,\alpha\alpha}^2(t) - \left( \frac{c\,\tau_\alpha K\, r_{\alpha\alpha}(t) + 1}{\tau_{s,\alpha}} + \frac{1}{\tau_m} \right) \sigma_\alpha^2(t). (20)

Here, \sigma_{s,\alpha\alpha}^2 := \langle s_{i,\alpha\alpha}^2 \rangle - \langle s_{i,\alpha\alpha} \rangle^2. The timescale of Eq. 20 is much smaller than \tau_\alpha of Eq. 7. We can therefore approximate
\sigma_\alpha^2(t) well with its steady-state value:

\sigma_\alpha^2(t) = \frac{2 J_{\alpha\alpha}^2\, \tau_m\, \tau_{s,\alpha}\, \sigma_{s,\alpha\alpha}^2(t)}{(c\,\tau_\alpha K\, r_{\alpha\alpha}(t) + 1)\,\tau_m + \tau_{s,\alpha}}, (21)
with \tau_m = C/g_L being the membrane time constant. Adding up the variances in Eq. 21 of both the E and I subpopulations and the
variance of the external input \sigma_{\rm ext,\alpha}^2, the total variance of the input currents is then given by Eq. 9. The two moments of the
membrane currents, \mu_\alpha and \sigma_\alpha, fully determine the instantaneous firing rate r_\alpha = \langle r_i \rangle_i (Eq. 15), the mean membrane potential
\bar{V}_\alpha := \langle V_i \rangle_i, and the adaptive timescale \tau_\alpha (Fig. 7).
Adaptation mechanism
The large difference in timescales between the slow adaptation mechanism, mediated through K+ channel dynamics, and the
faster membrane voltage dynamics [59,60] and synaptic dynamics allows for a separation of timescales [61] (adiabatic approximation).
Therefore, each neuron's adaptation current can be approximated by its population average \bar{I}_A, which evolves according to Eq.
10, where a is the subthreshold adaptation and b the spike-triggered adaptation parameter. \bar{V}_\alpha(t) = \bar{V}_\alpha(\mu_\alpha, \sigma_\alpha) is the mean
of the membrane potentials of the population; it was precomputed and is read from a table (Fig. 7b) at every timestep. In the
case of a, b > 0, when adaptation is active, we subtract the current \bar{I}_A caused by the adaptation mechanism from the current
C \cdot \mu_\alpha caused by the synapses in order to obtain the net input current. The resulting firing rate of the excitatory population is then
determined by evaluating r_E = \Phi(\mu_E - \bar{I}_A/C, \sigma_E). For fast-spiking inhibitory neurons, adaptation was neglected (a = b = 0),
since the adaptation mechanism was found to be much weaker than in pyramidal cells [62].
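One update step of this adaptation loop can be sketched as follows. The stand-ins for the precomputed Φ and V̄ tables below are invented placeholder functions, chosen only to make the step executable:

```python
# Sketch of one adaptation update step (Eq. 10) together with the rate
# readout r_E = Phi(mu_E - I_A/C, sigma_E). phi() and v_bar() are
# hypothetical placeholders for the lookup tables shown in Fig. 7.
C, tau_A, a, b, E_A = 200.0, 200.0, 15.0, 40.0, -80.0   # pF, ms, nS, pA, mV

def phi(mu, sigma):            # placeholder transfer function (Hz)
    return max(0.0, 50.0 * (mu + 0.5 * sigma))

def v_bar(mu, sigma):          # placeholder mean membrane potential (mV)
    return -65.0 + 10.0 * mu

def step(mu_E, sigma_E, I_A, dt=0.1):
    r_E = phi(mu_E - I_A / C, sigma_E)      # adaptation reduces the net drive
    dI_A = (a * (v_bar(mu_E, sigma_E) - E_A) - I_A) / tau_A + b * r_E * 1e-3
    return r_E, I_A + dt * dI_A
```

The structure matters more than the placeholder numbers: adaptation enters the rate readout only as a subtracted current, and the rate feeds back into the adaptation via the spike-triggered term b·r_E.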
Obtaining bifurcation diagrams and determining bistability
Each point in the bifurcation diagrams in Figs. 2 and 3 was obtained by simulating the system for a pair of external inputs
\mu_E^{\rm ext} and \mu_I^{\rm ext}; the resulting time series of the excitatory population rate of the mean-field model and of the AdEx network were
analysed and the dynamical state was classified.
To classify a point in the state space as bistable, in both the mean-field model and the AdEx network,
we apply a negative and a subsequent positive stimulus to the excitatory population and measure the difference in activity after
both stimuli are turned off again. In the AdEx network, a simple step input can cause over- and undershoot as a reaction,
which is a problem when assessing the stability of a basin of attraction around a fixed point. To overcome this problem, we
constructed a slowly decaying stimulus (in contrast to previous work [63], where bistability was identified using a step current).
An inverted example of this stimulus is shown in Figs. 4e and f. Using this stimulus, we first made sure that the population
rate is in the down-state (the initial state) with an initial negative external input current that slowly decays back to zero. We
then kicked the activity into the up-state (the target state) with a positive input and let the current slowly decay to zero
again. A slowly decaying stimulus (in contrast to a step stimulus) minimizes transient effects such as over- and undershooting
that would otherwise disturb the target state. As a result, the stability of the target state can be observed. We
determined whether the up-state is stable or the activity has decayed back into the down-state by comparing the 1 s mean of
the population rates after both stimuli have decayed. We classified a state as bistable if the rate difference after both kicks
and subsequent relaxation phases was greater than 10 Hz. This threshold value was chosen to be smaller than every observed
difference between the up- and the down-state. We confirmed the validity of this method for the mean-field model by using a
continuation method to determine the stability of the fixed-point states, which yielded the same bifurcation diagrams.
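The slowly decaying "kick" stimulus described above can be sketched as two exponentially decaying pulses; the amplitudes and decay time constants below are illustrative choices, not the paper's exact stimulation values:

```python
import numpy as np

# Sketch of the slowly-decaying bistability probe: a negative kick that
# settles the system in the down-state, followed by a positive kick
# toward the up-state, each relaxing exponentially back to zero.
dt, T = 0.1e-3, 6.0                       # s
t = np.arange(0.0, T, dt)
stim = np.zeros_like(t)

def add_kick(stim, t, t_on, amp, tau):
    mask = t >= t_on
    stim[mask] += amp * np.exp(-(t[mask] - t_on) / tau)
    return stim

stim = add_kick(stim, t, t_on=0.5, amp=-0.4, tau=0.5)   # push to down-state
stim = add_kick(stim, t, t_on=3.0, amp=+0.4, tau=0.5)   # kick to up-state
```

Because the current decays back to zero smoothly, the system's state after each kick reflects the stability of the attractor rather than a step-response transient.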
Determining frequency spectra of the population activity
In the bifurcation diagrams in Figs. 2 and 3, regions were classified as oscillating if the time series showed oscillations during
the last 1 s after the first (negative) stimulus pushed the system into the down-state. The power spectrum of this oscillation was
computed using the implementation of Welch's method [64] (scipy.signal.welch) in the Python package SciPy (1.2.1) [65]. A
rolling Hanning window of length 0.5 s was used to compute the spectrum. If the dominant frequency was above 0.1 Hz and its
power density was above 1 Hz, we classified the state as oscillating. Visual inspection of the time series confirmed that these
thresholds classified the oscillating regions well. In cases where the transient of 1 s was too short, such that the activity state of
the population jumped from the down-state to the up-state within this period, misclassification of these points as oscillatory
states caused artifacts at the right-hand border of the bistable region towards the up-state.
In Fig. 5, we determined frequency entrainment by observing changes in the frequency spectrum of the population activity
r_E. Each run was simulated for 6 s. We waited 1 s for transient effects to vanish before turning on the oscillating stimulus
and measured the power spectrum of the remaining 5 s. A rolling Hanning window of length 1 s was used. For better visibility,
the power was normalized between 0 and 1 on a logarithmic scale and plotted with a linear colormap.
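The spectral analysis step can be sketched with scipy.signal.welch as named above; the 25 Hz test signal and sampling rate below are illustrative values, not the simulated population rate:

```python
import numpy as np
from scipy.signal import welch

# Sketch of the oscillation detection: Welch power spectrum of a
# (synthetic) population rate trace with a 0.5 s Hanning window,
# from which the dominant frequency is read off.
fs = 10_000                                  # samples per second (0.1 ms steps)
t = np.arange(0, 1.0, 1 / fs)
rng = np.random.default_rng(3)
rate = 20 + 10 * np.sin(2 * np.pi * 25 * t) + 0.5 * rng.standard_normal(t.size)

freqs, psd = welch(rate, fs=fs, window="hann", nperseg=int(0.5 * fs))
f_dom = freqs[np.argmax(psd)]                # dominant oscillation frequency
```

welch removes the constant offset per segment by default, so the dominant spectral peak reflects the oscillation rather than the mean rate.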
Measuring phase locking using the Kuramoto order parameter
In Fig. 6 we quantified the degree of phase locking of an oscillatory input current with the E-I system's ongoing oscillation. We
calculated the Kuramoto order parameter [66] to measure phase synchrony. The Kuramoto order parameter R is given by:

R(t) = \left| \frac{1}{N_{\rm osc}} \sum_{j=1}^{N_{\rm osc}} e^{i\Phi_j(t)} \right|. (22)

In our case, the number of oscillators is N_{\rm osc} = 2, and \Phi_j \in [0, 2\pi) is the instantaneous phase of the stimulus (j = 1) and of the
population activity r_E (j = 2). We define the phase \Phi_j as

\Phi_j(t) = 2\pi\, \frac{t - t_n}{t_n - t_{n-1}}, (23)

where t_n is the time of the last maximum (before time t) of the time series and t_{n-1} the penultimate one. To robustly detect the
oscillation maxima of the noisy AdEx network population rate, the time series was first smoothed using the Gaussian filter
scipy.ndimage.filters.gaussian_filter implemented in SciPy. The Gaussian kernel had a standard deviation
of 5. Then, the maxima were detected using the peak finding algorithm scipy.signal.find_peaks_cwt with a
peak width between 0.1 and 0.2.
For R = 1, perfect (zero-lag) phase synchronization is reached; if R ≈ 0, the oscillations are maximally desynchronized. To
measure phase locking in Figs. 6a and c, we calculated the standard deviation of R(t) over time after transient effects vanish,
for t > 1.5 s. A low standard deviation indicates that the phase difference between the input and the ongoing oscillation stays
constant.
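Eq. 22 for the two-oscillator case can be sketched directly; the 10 Hz phases below are synthetic, with a constant lag standing in for a phase-locked state:

```python
import numpy as np

# Kuramoto order parameter (Eq. 22) for two oscillators: the stimulus
# phase and the population-rate phase. With a constant phase lag,
# R(t) is constant in time, i.e. std(R) ~ 0 (phase locking).
t = np.linspace(0, 2, 2001)
phi_stim = (2 * np.pi * 10 * t) % (2 * np.pi)          # 10 Hz stimulus phase
phi_rate = (2 * np.pi * 10 * t + 0.8) % (2 * np.pi)    # locked with constant lag

def kuramoto_R(phases):
    """R(t) for an array of phases with shape (n_osc, n_times)."""
    return np.abs(np.mean(np.exp(1j * phases), axis=0))

R = kuramoto_R(np.vstack([phi_stim, phi_rate]))
```

For two oscillators with a fixed lag Δφ, R = |1 + e^{iΔφ}|/2 = cos(Δφ/2), so R itself measures the lag while its standard deviation over time measures locking.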
Calculating equivalent electric field strengths
Our results can be used to estimate the necessary amplitude of an external electric field to reproduce the effects of the electrical
input currents which we have investigated here. An external field at the location of a neural population might be produced by
endogenous electric fields due to the activity of a neural population, or by external stimulation techniques such as transcranial
electrical stimulation (tES) with direct (tDCS) or alternating (tACS) currents. The lack of a spatial extension of point neuron
models such as the AdEx neuron makes it impossible to directly couple an external electric field that could affect the internal
membrane voltages.

Figure 8. Conversion between electric field amplitudes and equivalent input currents. Curves show frequency-dependent amplitudes of an input current in pA to an exponential integrate-and-fire neuron with parameters as defined in Table 1 (in the main manuscript) when the electric field amplitude acting on an equivalent ball-and-stick neuron is held constant. Electric field amplitudes in V/m are annotated for each curve.

Following Ref. [31], we obtain an equivalent electrical input current C \cdot \mu_E^{\rm ext}(t) to a point neuron by matching
it to reproduce the effects of an oscillating extracellular electric field on a spatially extended ball-and-stick (BS) model neuron
of a given morphology. The point neuron model for which we calculated the equivalent current amplitudes is the exponential
integrate-and-fire (EIF) neuron, which is the same as the AdEx neuron without somatic adaptation (a = b = 0). In the case with
adaptation, the translation from current to field works for high-frequency inputs only, and the approximation breaks down for
slowly oscillating inputs. We have therefore limited our estimations to the case without adaptation.
The amplitude of the equivalent input current that causes the same subthreshold depolarization of a (linearized) EIF neuron
as the somatic depolarization caused by an oscillatory electric field acting on the BS neuron's dendrite is then calculated
using

I_{\rm ext} = A\, \frac{U_{\rm BS}(f)}{z_{\rm EIF}(f)}, (24)

where A is the amplitude of the electric field in V/m, U_{\rm BS}(f) is the frequency-dependent polarization transfer function of the BS
neuron, and z_{\rm EIF}(f) is the impedance of the EIF neuron. They are given by

U_{\rm BS}(f) = g_a\, (2 e^{z l_d} - \gamma)/\delta, (25)

z_{\rm EIF}(f) = \frac{1}{g_L \left(1 - e^{(V_r - V_T)/\Delta_T}\right) + 2\pi i\, C f}, (26)

with the following substitutions:

w = 2\pi f, \quad g_m = (\pi d_d)/\rho_m, \quad g_a = (\pi (d_d/2)^2)/\rho_a, (27)

c_m = C_m\, d_d\, \pi, \quad g_s = (\pi d_s^2)/\rho_s, \quad c_s = C_m\, \pi d_s^2, (28)

\alpha = \sqrt{\frac{g_m + \sqrt{g_m^2 + w^2 c_m^2}}{2 g_a}}, \quad \beta = \sqrt{\frac{-g_m + \sqrt{g_m^2 + w^2 c_m^2}}{2 g_a}}, (29)

z = \alpha + i\beta, \quad \gamma = 1 + \exp(-2 l_d z), \quad \delta = \gamma\, (c_s w i + g_s) + z g_a (2 - \gamma). (30)
The BS neuron we used to estimate electric field strengths has the following parameters: the soma has a diameter of d_s = 10 μm,
a specific membrane capacitance of C_m = 10 mF/m², and a membrane resistivity of \rho_s = 2.8 Ωm². The dendritic cable has
a length of l_d = 1200 μm, a diameter of d_d = 2 μm, a membrane resistivity of \rho_m = 2.8 Ωm², and an axial resistivity of
\rho_a = 1.5 Ωm.
Using these parameters, a step input using an electric field with an amplitude of 1 V/m changes the somatic membrane
potential by about 0.1 mV from its resting potential of -65 mV. The curves in Fig. 8 translate an electric field of a given
amplitude and frequency into a corresponding input current and vice versa. An increase of the mean membrane current by
0.1 nA corresponds to an increase of the static electric field strength by 11 V/m.
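A direct transcription of Eqs. 24-30 with the BS and EIF parameters above can be sketched as follows. Treat any numerical output as illustrative only, since the absolute scale depends on the transfer-function conventions of Ref. 31:

```python
import math, cmath

# Sketch of the field-to-current conversion (Eqs. 24-30), SI units.
# EIF parameters (Table 1) and ball-and-stick morphology as in the text.
C, gL = 200e-12, 10e-9                    # membrane capacitance [F], leak [S]
Vr, VT, DT = -70e-3, -50e-3, 1.5e-3       # reset, threshold, slope factor [V]
ds, ld, dd = 10e-6, 1200e-6, 2e-6         # soma diameter, dendrite length/diameter [m]
Cm = 10e-3                                # specific membrane capacitance [F/m^2]
rho_s, rho_m, rho_a = 2.8, 2.8, 1.5       # membrane [Ohm m^2] and axial [Ohm m] resistivities

def equivalent_current(A, f):
    """Amplitude of the equivalent input current for a field A [V/m] at frequency f [Hz]."""
    w = 2 * math.pi * f
    gm = math.pi * dd / rho_m                 # dendritic leak per length (Eq. 27)
    ga = math.pi * (dd / 2) ** 2 / rho_a      # axial conductance (Eq. 27)
    cm = Cm * dd * math.pi                    # dendritic capacitance per length (Eq. 28)
    gs = math.pi * ds ** 2 / rho_s            # somatic conductance (Eq. 28)
    cs = Cm * math.pi * ds ** 2               # somatic capacitance (Eq. 28)
    mod = math.hypot(gm, w * cm)              # |g_m + i w c_m|
    alpha = math.sqrt((mod + gm) / (2 * ga))  # Eq. 29
    beta = math.sqrt((mod - gm) / (2 * ga))
    z = alpha + 1j * beta                     # Eq. 30
    gamma = 1 + cmath.exp(-2 * ld * z)
    delta = gamma * (1j * cs * w + gs) + z * ga * (2 - gamma)
    U_BS = ga * (2 * cmath.exp(z * ld) - gamma) / delta                       # Eq. 25
    z_EIF = 1 / (gL * (1 - math.exp((Vr - VT) / DT)) + 2j * math.pi * C * f)  # Eq. 26
    return abs(A * U_BS / z_EIF)
```

By construction the conversion is linear in the field amplitude A, which is what allows reading Fig. 8 in either direction.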
Numerical simulations
The mean-field equations were integrated using the forward Euler method. In Fig. 2, each time series for a set of external
inputs \mu_E^{\rm ext} and \mu_I^{\rm ext} in the bifurcation diagrams was obtained after t = 5 s of simulation with an integration timestep of dt = 0.05
ms. In Fig. 3, we simulated each point for t = 10 s with dt = 0.01 ms. For Fig. 5, we simulated for t = 30 s with dt = 0.05 ms.
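The forward-Euler scheme can be sketched on a reduced two-population rate model that only mimics the structure of Eqs. 7 and 15; the gain function and coupling matrix below are hypothetical, not the paper's Φ or Table 1 values:

```python
import numpy as np

# Forward-Euler sketch of a toy two-population (E-I) rate model,
# illustrating the integration scheme used for the mean-field equations.
# gain() and the couplings J are placeholders, not the paper's model.
dt, T = 0.05, 5000.0            # ms
tau = 20.0                      # ms, relaxation timescale

def gain(mu):                   # placeholder for the transfer function Phi
    return np.maximum(0.0, np.tanh(mu))

mu = np.zeros(2)                # state vector [mu_E, mu_I]
J = np.array([[1.2, -2.0],      # E<-E, E<-I couplings (hypothetical)
              [1.5, -0.5]])     # I<-E, I<-I couplings (hypothetical)
mu_ext = np.array([0.5, 0.2])   # external drive

for _ in range(int(T / dt)):
    r = gain(mu)
    mu = mu + dt / tau * (J @ r + mu_ext - mu)   # Euler step, analogue of Eq. 7
```

With dt much smaller than tau, the explicit Euler step is stable here; the same scheme applied to the full model requires the small timesteps quoted above.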
The spiking network model was implemented using BRIAN2 (2.1.3.1) [67] in Python. The equations were integrated using
the implemented Heun's integration method. An integration step size of 1 ms was used. For the bifurcation diagrams in Fig. 2,
we used N = 2 x 25000 (i.e., 25000 per population) and a total simulation time of t = 6 s; in Fig. 3, N = 2 x 10000 and t = 6 s. The
stimulation experiments in Fig. 4 used N = 2 x 50000, t = 3 s. The spectra in Fig. 5 used N = 2 x 10000, t = 6 s. The phase locking
plots in Fig. 6 used N = 2 x 10000, t = 20 s.
Benchmarking the AdEx network with N = 2 x 50000 on a single core took around 10^4 times longer to run than the
corresponding mean-field simulation. This does not include the time required for initialization of the simulation, such as setting
up all synapses, which can also require a comparable amount of time. The computation time scales nearly linearly with the
number of neurons.
Code availability
The Python code, including the implementation of the models, the simulation pipeline, the stimulation experiments, as well
as the data analysis and the ability to reproduce Figs. 2-6, can be found in our GitHub repository at
https://github.com/caglarcakan/stimulus_neural_populations. The code is released under the BSD 2 license.
Acknowledgements
We would like to thank Dr. Josef Ladenbauer for his previous work on reduced population models, mathematical guidance,
and many insightful discussions. We would like to thank Dr. Florian Aspart for his work on the effects of extracellular electric
fields, for helping to incorporate the results in this article, and for a helpful exchange of ideas. This work was supported by the
Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) with the project number 327654276 (SFB 1315) and
the Research Training Group GRK1589/2.
Tables
Parameter     Value         Description
\sigma_{\rm ext}     1.5 mV/ms     Standard deviation of external input
K_E           800           Number of excitatory inputs per neuron
K_I           200           Number of inhibitory inputs per neuron
c_{EE}, c_{IE}   0.3 mV/ms     Maximum AMPA PSC amplitude [49]
c_{EI}, c_{II}   0.5 mV/ms     Maximum GABA PSC amplitude [49]
J_{EE}          2.4 mV/ms     Maximum synaptic current from E to E
J_{IE}          2.6 mV/ms     Maximum synaptic current from E to I
J_{EI}          -3.3 mV/ms    Maximum synaptic current from I to E
J_{II}          -1.6 mV/ms    Maximum synaptic current from I to I
\tau_{s,E}        2 ms          Excitatory synaptic time constant
\tau_{s,I}        5 ms          Inhibitory synaptic time constant
d_E           4 ms          Synaptic delay to excitatory neurons
d_I           2 ms          Synaptic delay to inhibitory neurons
C             200 pF        Membrane capacitance
g_L           10 nS         Leak conductance
\tau_m          C/g_L         Membrane time constant
E_L           -65 mV        Leak reversal potential
\Delta_T        1.5 mV        Threshold slope factor
V_T           -50 mV        Threshold voltage
V_s           -40 mV        Spike voltage threshold
V_r           -70 mV        Reset voltage
T_{\rm ref}        1.5 ms        Refractory time
a             15 nS         Subthreshold adaptation conductance
b             40 pA         Spike-triggered adaptation increment
E_A           -80 mV        Adaptation reversal potential
\tau_A          200 ms        Adaptation time constant

Table 1. Model parameters. Parameters apply to both the mean-field model and the spiking AdEx network. Parameters a and b are always zero for the inhibitory population.
Point   Mean-field model                AdEx network                    Dynamical state
        C·\mu_E^{\rm ext}   C·\mu_I^{\rm ext}       C·\mu_E^{\rm ext}   C·\mu_I^{\rm ext}
A1      0.24         0.24               0.22         0.12               down
A2      0.26         0.1                0.32         0.3                LC_{EI}
A3      0.41         0.34               0.4          0.24               bi
B3      0.8          0.36               0.76         0.24               LC_{aE}
B4      0.76         0.4                0.68         0.24               down

Table 2. Input values for points of interest. Values of the mean external inputs to the excitatory (\mu_E^{\rm ext}) and the inhibitory (\mu_I^{\rm ext}) population, in units of nA, for the points of interest in the bifurcation diagrams in Fig. 2.
References
1. Doron, G. & Brecht, M. What single-cell stimulation has told us about neural coding. Philosophical Transactions of the Royal Society B: Biological Sciences 370 (2015).
2. Lynch, E. P. & Houghton, C. J. Parameter estimation of neuron models using in-vitro and in-vivo electrophysiological data. Frontiers in Neuroinformatics 9, 1-15 (2015).
3. Reato, D., Rahman, A., Bikson, M. & Parra, L. C. Low-Intensity Electrical Stimulation Affects Network Dynamics by Modulating Population Rate and Spike Timing. Journal of Neuroscience 30, 15067-15079 (2010).
4. Reato, D., Rahman, A., Bikson, M. & Parra, L. C. Effects of weak transcranial alternating current stimulation on brain activity—a review of known mechanisms from animal studies. Frontiers in Human Neuroscience 7, 1-8 (2013).
5. Thut, G. et al. Guiding transcranial brain stimulation by EEG/MEG to interact with ongoing brain activity and associated functions: A position paper. Clinical Neurophysiology 128, 843-857 (2017).
6. Neuling, T., Rach, S., Wagner, S., Wolters, C. H. & Herrmann, C. S. Good vibrations: oscillatory phase shapes perception. Neuroimage 63, 771-778 (2012).
7. Herrmann, C., Rach, S., Neuling, T. & Strüber, D. Transcranial alternating current stimulation: a review of the underlying mechanisms and modulation of cognitive processes. Frontiers in Human Neuroscience 7, 279 (2013).
8. Marshall, L., Helgadóttir, H., Mölle, M. & Born, J. Boosting slow oscillations during sleep potentiates memory. Nature 444, 610-613 (2006).
9. Berenyi, A., Belluscio, M., Mao, D. & Buzsáki, G. Closed-Loop Control of Epilepsy by Transcranial Electrical Stimulation. Science 337, 735-737 (2012).
10. Fröhlich, F. & McCormick, D. A. Endogenous Electric Fields May Guide Neocortical Network Activity. Neuron 67, 129-143 (2010).
11. Anastassiou, C. A., Perin, R., Markram, H. & Koch, C. Ephaptic coupling of cortical neurons. Nature Neuroscience 14, 217-224 (2011).
12. Fröhlich, F. Experiments and models of cortical oscillations as a target for noninvasive brain stimulation. Progress in Brain Research 222, 41-73 (2015).
13. Ozen, S. et al. Transcranial Electric Stimulation Entrains Cortical Neuronal Populations in Rats. Journal of Neuroscience 30, 11476-11485 (2010).
14. Alagapan, S. et al. Modulation of Cortical Oscillations by Low-Frequency Direct Cortical Stimulation Is State-Dependent. PLoS Biology 14, 1-21 (2016).
15. Herrmann, C. S., Murray, M. M., Ionta, S., Hutt, A. & Lefebvre, J. Shaping Intrinsic Neural Oscillations with Periodic Stimulation. The Journal of Neuroscience 36, 5328-5337 (2016).
16. Brunel, N. Dynamics of sparsely connected networks of excitatory and inhibitory spiking neurons. Journal of Computational Neuroscience 8, 183-208 (2000).
17. Spiegler, A., Kiebel, S. J., Atay, F. M. & Knösche, T. R. Bifurcation analysis of neural mass models: Impact of extrinsic inputs and dendritic time constants. Neuroimage 52, 1041-1058 (2010).
18. Molaee-Ardekani, B. et al. Effects of transcranial Direct Current Stimulation (tDCS) on cortical activity: A computational modeling study. Brain Stimulation 6, 25-39 (2013).
19. D'Andola, M., Weinert, J. F., Mattia, M. & Sanchez-Vives, M. V. Modulation of slow and fast oscillations by direct current stimulation in the cerebral cortex in vitro. bioRxiv (2018).
20. Brunel, N. Phase diagrams of sparsely connected networks of excitatory and inhibitory spiking neurons. Neurocomputing 32-33, 307-312 (2000).
21. Grimbert, F. & Olivier, F. Bifurcation analysis of neural mass equations. Neural Computation 18, 3052-3068 (2006).
22. Deco, G., Jirsa, V. K., Robinson, P. A., Breakspear, M. & Friston, K. The dynamic brain: From spiking neurons to neural masses and cortical fields. PLoS Computational Biology 4, e1000092 (2008).
23. Breakspear, M. Dynamic models of large-scale brain activity. Nature Neuroscience 20, 340-352 (2017).
24. Gu, S. et al. Controllability of structural brain networks. Nature Communications 6, 8414 (2015).
25. Muldoon, S. F. et al. Stimulation-Based Control of Dynamic Brain Networks. PLoS Computational Biology 12 (2016).
17/22
26.
Augustin, M., Ladenbauer, J., Baumann, F. & Obermayer, K. Low-dimensional spike rate models derived from networks of
adaptive integrate-and-fire neurons: comparison and implementation. PLOS Computational Biology
13
, e1005545 (2017).
27.
Ostojic, S. & Brunel, N. From spiking neuron models to linear-nonlinear models. PLoS Computational Biology
7
(2011).
28.
Brette, R. & Gerstner, W. Adaptive exponential integrate-and-fire model as an effective description of neuronal activity. J.
Neurophysiol. 94, 3637–3642 (2005).
29.
Jolivet, R. et al. A benchmark test for a quantitative assessment of simple neuron models. Journal of Neuroscience Methods
169, 417–424 (2008).
30.
Naud, R., Marcille, N., Clopath, C. & Gerstner, W. Firing patterns in the adaptive exponential integrate-and-fire model.
Biological Cybernetics 99, 335–347 (2008).
31.
Aspart, F., Ladenbauer, J. & Obermayer, K. Extending Integrate-and-Fire Model Neurons to Account for the Effects of
Weak Electric Fields and Input Filtering Mediated by the Dendrite. PLoS Computational Biology 12, 1–29 (2016).
32. Jercog, D. et al. UP-DOWN cortical dynamics reflect state transitions in a bistable network. eLife 6, 1–33 (2017).
33. Roxin, A., & Compte, A. Oscillations in the bistable regime of neuronal networks. Physical Review E 94, 1–17 (2016).
34.
Radman, T., Ramos, R. L., Brumberg, J. C. & Bikson, M. Role of cortical cell type and morphology in subthreshold and
suprathreshold uniform electric field stimulation in vitro. Brain Stimulation 2, 215–228.e3 (2009).
35.
Madison, B. Y. D. V. & Nicoll, R. A. Control of the repetitive discharge of rat CA 1 pyramidal neurones in vitro. J. Physiol.
354, 319–331 (1984).
36.
Deans, J. K., Powell, A. D. & Jefferys, J. G. Sensitivity of coherent oscillations in rat hippocampus to AC electric fields.
Journal of Physiology 583, 555–565 (2007).
37.
Ali, M. M., Sellers, K. K. & Frohlich, F. Transcranial Alternating Current Stimulation Modulates Large-Scale Cortical
Network Activity by Network Resonance. Journal of Neuroscience 33, 11262–11275 (2013).
38.
Huang, Y. et al. Measurements and models of electric fields in the in vivo human brain during transcranial electric
stimulation. eLife 6, 1–26 (2017).
39.
Helfrich, R. et al. Entrainment of Brain Oscillations by Transcranial Alternating Current Stimulation. Current Biology
24
,
333–339 (2014).
40.
Witkowski, M. et al. Mapping entrained brain oscillations during transcranial alternating current stimulation (tACS).
Neuroimage 140, 89–98 (2016).
41.
Hansen, E. C. A., Battaglia, D., Spiegler, A., Deco, G. & Jirsa, V. K. Functional connectivity dynamics: Modeling the
switching behavior of the resting state. Neuroimage 105, 525–535 (2015).
42.
Betzel, R. F., Gu, S., Medaglia, J. D., Pasqualetti, F. & Bassett, D. S. Optimally controlling the human connectome: the
role of network topology. Scientific Reports 6(2016).
43.
Holmgren, C., Harkany, T., Svennenfors, B. & Zilberter, Y. Pyramidal cell communication within local networks in layer
2/3 of rat neocortex. Journal of Physiology 551, 139–153 (2003).
44. Laughlin, S. B. & Sejnowski, T. J. Communication in neuronal networks. Science 301, 1870–1874 (2003).
45.
Destexhe, A., Rudolph, M. & Paré, D. The high-conductance state of neocortical neurons in vivo. Nature Reviews
Neuroscience 4, 739–751 (2003).
46.
Fries, P. et al. Modulation of Oscillatory Neuronal Synchronization by Selective Visual Attention Modulation of Oscillatory
Neuronal Synchronization by Selective Visual Attention. Science (New York, N.Y.) 291, 1560–3 (2001).
47. Wang, X.-j. Neurophysiological and Computational Principles of Cortical Rhythms in Cognition. Physiological Reviews
1195–1268 (2010).
48.
Williams, S. R. & Stuart, G. J. Dependence of EPSP efficacy on synapse location in neocortical pyramidal neurons. Science
295, 1907–1910 (2002).
49.
Brunel, N. What Determines the Frequency of Fast Network Oscillations With Irregular Neural Discharges? I. Synaptic
Dynamics and Excitation-Inhibition Balance. Journal of Neurophysiology 90, 415–430 (2003).
50. Hertäg, L., Durstewitz, D. & Brunel, N. Analytical approximations of the firing rate of an adaptive exponential integrate-
and-fire neuron in the presence of synaptic noise. Frontiers in Computational Neuroscience 8, 116 (2014).
51.
Fourcaud-Trocmé, N., Hansel, D., van Vreeswijk, C. & Brunel, N. How spike generation mechanisms determine the
neuronal response to fluctuating inputs. J. Neurosci. 23, 11628–11640 (2003).
18/22
52.
Richardson, M. J. E. Firing-rate response of linear and nonlinear integrate-and-fire neurons to modulated current-based
and conductance-based synaptic drive. Physical Review E 76, 021919 (2007).
53.
Ladenbauer, J. The collective dynamics of adaptive neurons: insights from single cell and network models. Ph.D. Thesis,
Technische Universität Berlin (2015).
54.
Nykamp, D. Q. & A, D. T. A population density approach that facilitates large-scale modeling of neural networks: Analysis
and an application to orientation tuning. Journal of Computational Neuroscience 8, 19–50 (2000).
55.
Renart, A., Brunel, N. & Wang, X.-J. Mean field theory of irregularly spiking neuronal populations and working memory
in recurrent cortical networks. In Computational Neuroscience A Comprehensive Approach, 431–490 (2004).
56.
Richardson, M. J. Effects of synaptic conductance on the voltage distribution and firing rate of spiking neurons. Physical
Review E - Statistical Physics, Plasmas, Fluids, and Related Interdisciplinary Topics 69, 8 (2004).
57.
Gigante, G., Mattia, M. & Giudice, P. Diverse population-bursting modes of adapting spiking neurons. Phys. Rev. Lett.
98
,
1–4 (2007).
58. Movellan, J. R. Tutorial on Stochastic Differential Equations. MPLab Tutorials 6(2011).
59.
Womble, M. D. & Moises, H. C. Muscarinic inhibition of M-current and a potassium leak conductance in neurones of the
rat basolateral amygdala. The Journal of Physiology 457, 93–114 (1992).
60.
Stocker, M. Ca(2+)-activated K+ channels: molecular determinants and function of the SK family. Nat. Rev. Neurosci.
5
,
758–770 (2004).
61.
Augustin, M., Ladenbauer, J. & Obermayer, K. How adaptation shapes spike rate oscillations in recurrent neuronal
networks. Front. Comput. Neurosci. 7, 1–11 (2013).
62.
La Camera, G. et al. Multiple time scales of temporal response in pyramidal and fast spiking cortical neurons. J.
Neurophysiol. 96, 3448–3464 (2006).
63.
Lundqvist, M., Compte, A. & Lansner, A. Bistable, irregular firing and population oscillations in a modular attractor
memory network. PLoS Computational Biology 6, 1–12 (2010).
64.
Welch, P. The use of fast Fourier transform for the estimation of power spectra: a method based on time averaging over
short, modified periodograms. IEEE Transactions On Audio and Electroacoustics 15, 70–73 (1967).
65. Jones, E., Oliphant, T., Peterson, P. & Others. SciPy: Open source scientific tools for Python (2001).
66. Kuramoto, Y. Chemical Oscillations, Waves, and Turbulence (Courier Corporation, 2003).
67.
Stimberg, M., Goodman, D. F. M., Benichoux, V. & Brette, R. Equation-oriented specification of neural models for
simulations. Front. in Neuroinform. 8, 1–14 (2014).
19/22
Supplementary Figures
Figure 9. Bifurcation diagrams with maximum rate of inhibitory population. (a) Bifurcation diagram of the mean-field model without adaptation with up- and down-states, a bistable region bi (green dashed contour) and an oscillatory region LCEI (white solid contour). (b) Diagram of the corresponding AdEx network. (c) Mean-field model with somatic adaptation has a slow oscillatory region LCaE. (d) Diagram of the corresponding AdEx network. The color indicates the maximum population rate of the inhibitory population (clipped at 80 Hz). All parameters are given in Table 1 in the main manuscript.
Figure 10. Bifurcation diagrams depicting the difference of excitatory and inhibitory amplitudes. (a) Bifurcation diagram of the mean-field model without adaptation with up- and down-states, a bistable region bi (green dashed contour) and an oscillatory region LCEI (white solid contour). (b) Diagram of the corresponding AdEx network. (c) Mean-field model with somatic adaptation has a slow oscillatory region LCaE. (d) Diagram of the corresponding AdEx network. The color indicates the difference of excitatory and inhibitory amplitudes (clipped from -100 Hz to 100 Hz). All parameters are given in Table 1 in the main manuscript.
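The amplitude-difference color map of Fig. 10 can be sketched as a peak-to-peak range per population. The function below is a hypothetical reconstruction, not the manuscript's code: the rate traces, the peak-to-peak definition of amplitude, and the symmetric clipping bound are assumptions chosen to match the caption.

```python
import numpy as np

def amplitude_difference(rate_exc, rate_inh, clip=100.0):
    """Difference of excitatory and inhibitory oscillation amplitudes (Hz).

    Amplitude is taken here as the peak-to-peak range of each
    population's rate time series (an assumption); the result is
    clipped to [-clip, clip] for plotting, as in Fig. 10.
    """
    rate_exc = np.asarray(rate_exc, dtype=float)
    rate_inh = np.asarray(rate_inh, dtype=float)
    amp_e = rate_exc.max() - rate_exc.min()
    amp_i = rate_inh.max() - rate_inh.min()
    return float(np.clip(amp_e - amp_i, -clip, clip))
```

In a non-oscillatory (up or down) state both amplitudes are near zero, so the map is flat there; inside LCEI the sign shows which population oscillates with the larger excursion.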
Figure 11. Bifurcation diagrams with dominant oscillation frequency of the excitatory population. (a) Bifurcation diagram of the mean-field model without adaptation. (b) Diagram of the corresponding AdEx network. (c) Mean-field model with somatic adaptation. (d) Diagram of the corresponding AdEx network. The color indicates the dominant oscillation frequency of the excitatory population (clipped at 35 Hz). All parameters are given in Table 1.
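The dominant frequency shown in Fig. 11 is obtained from the power spectrum of the excitatory rate; a minimal, dependency-free sketch might look as follows. Note the manuscript computes spectra with Welch's method via SciPy, whereas this sketch uses a plain periodogram; the sampling step `dt` and the 35 Hz cutoff are taken from the caption, everything else is an assumption.

```python
import numpy as np

def dominant_frequency(rate, dt=1e-3, fmax=35.0):
    """Dominant oscillation frequency (Hz) of a population rate trace.

    A simple mean-subtracted periodogram stands in for Welch's method
    here; frequencies above `fmax` are discarded, mirroring the
    clipping in Fig. 11. `dt` is the sampling step in seconds.
    """
    rate = np.asarray(rate, dtype=float)
    power = np.abs(np.fft.rfft(rate - rate.mean())) ** 2
    freqs = np.fft.rfftfreq(len(rate), d=dt)
    mask = (freqs > 0) & (freqs <= fmax)
    return float(freqs[mask][np.argmax(power[mask])])
```

Applying this per grid point of the bifurcation diagram yields the frequency map: near zero outside the oscillatory regions, fast E-I frequencies inside LCEI, and slow adaptation-driven frequencies inside LCaE.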
Figure 12. Bifurcation diagrams of the mean-field model for changing coupling strengths. Stacked bifurcation diagrams depending on the mean input currents to populations E and I, showing dynamical states for increasing JEE and JII (outer axes) and JIE and JEI (inner axes) in steps of 0.5 mV/ms. The middle rows and columns correspond to the default value of the corresponding parameter. White contours mark oscillatory regions LCEI, green dashed contours mark bistable regions. The system has no somatic adaptation. The diagram in the middle (blue box) corresponds to the bifurcation diagram in Fig. 2a of the main manuscript. All parameters are given in Table 1 in the main manuscript.
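The nested scan behind Fig. 12 can be sketched as a four-level parameter sweep. The default coupling values below are placeholders (the real defaults are in Table 1), and `classify` is a hypothetical stand-in for integrating the mean-field model and labeling the resulting state; only the 0.5 mV/ms step and the outer/inner axis layout come from the caption.

```python
import itertools

# Placeholder defaults (mV/ms) -- the actual values are in Table 1.
DEFAULTS = {"J_EE": 2.5, "J_II": 2.5, "J_IE": 3.5, "J_EI": 3.5}
STEP = 0.5  # offset between adjacent panels, as in Fig. 12

def stacked_bifurcation_grid(classify, mu_values=(0.0, 1.0, 2.0, 3.0, 4.0)):
    """Build the stacked grid of bifurcation diagrams of Fig. 12.

    `classify(mu_e, mu_i, **J)` is assumed to return a state label
    (e.g. 'down', 'up', 'bistable', 'LC_EI') for one parameter point.
    Outer axes vary J_EE and J_II, inner axes J_IE and J_EI; the
    all-zero offset key is the default diagram (blue box, Fig. 2a).
    """
    grid = {}
    offsets = (-STEP, 0.0, STEP)
    for d_ee, d_ii in itertools.product(offsets, offsets):      # outer axes
        for d_ie, d_ei in itertools.product(offsets, offsets):  # inner axes
            J = {"J_EE": DEFAULTS["J_EE"] + d_ee,
                 "J_II": DEFAULTS["J_II"] + d_ii,
                 "J_IE": DEFAULTS["J_IE"] + d_ie,
                 "J_EI": DEFAULTS["J_EI"] + d_ei}
            diagram = [[classify(mu_e, mu_i, **J) for mu_i in mu_values]
                       for mu_e in mu_values]
            grid[(d_ee, d_ii, d_ie, d_ei)] = diagram
    return grid
```

Each of the 3 x 3 x 3 x 3 = 81 entries is one bifurcation diagram over the (µext_E, µext_I) plane; the middle rows and columns of the stack recover the default panel.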