GHOSTS OF BUMP ATTRACTORS IN STOCHASTIC NEURAL
FIELDS: BOTTLENECKS AND EXTINCTION
ZACHARY P. KILPATRICK
Abstract. We study the effects of additive noise on stationary bump solutions to spatially
extended neural fields near a saddle-node bifurcation. The integral terms of these evolution equations
have a weighted kernel describing synaptic interactions between neurons at different locations of the
network. Excited regions of the neural field correspond to parts of the domain whose fraction of
active neurons exceeds a sharp threshold of a firing rate nonlinearity. As the threshold is increased,
a stable and an unstable branch of bump solutions annihilate in a saddle-node bifurcation. Near
this criticality, we derive a quadratic amplitude equation that describes the slow evolution of the even
mode as it depends on the distance from the bifurcation. Beyond the bifurcation, bumps eventually
become extinct, and the time it takes for this to occur increases for systems nearer the bifurcation.
When noise is incorporated, a stochastic amplitude equation for the even mode can be derived, which
can be analyzed to reveal bump extinction time both below and above the saddle-node.
Key words. Stochastic partial differential equations, Langevin equation, Perturbation theory,
Amplitude equations, Saddle-node bifurcation
1. Introduction. Continuum neural fields are a well-accepted model of spa-
tiotemporal neuronal activity evolving within in vitro and in vivo brain tissue [6,10].
Wilson and Cowan initially introduced these nonlocal integrodifferential equations to
model activity of neuronal populations in terms of mean firing rates [41]. While they
discount the intricate dynamics of neuronal spiking, these models are capable of qual-
itatively capturing a wide range of phenomena such as propagating activity waves
observed in disinhibited slice preparations [23,33-35]. Neural field models exhibit a
wide variety of spatiotemporal dynamics including traveling waves, Turing patterns,
stationary pulses, breathers, and spiral waves [14,16]. A distinct advantage of uti-
lizing these continuum equations to model large-scale neural activity is that many
analytical methods for studying their behavior can be adapted from nonlinear partial
differential equations (PDEs) [6]. Recently, several authors have explored the impact
of stochasticity on spatiotemporal patterns in neural fields [8,25,27] by employing
techniques originally used to study stochastic front propagation in reaction-diffusion
systems [36]. Typically, the approach is to perturb about a linearly stable solution
of the deterministic system, under the assumption of weak noise. However, some re-
cent efforts have been aimed at understanding the impact of noise on patterns near
bifurcations [25,28].
In this work, we are particularly interested in how noise interacts with stationary
pulse (bump) solutions near a saddle-node bifurcation at which a branch of stable
bumps and a branch of unstable bumps annihilate [1]. Bumps are commonly utilized
as a model of persistent and tuned neural activity underlying spatial working memory
[18,42]. This activity tends to last for a few seconds, after which it is extinguished,
to allow for subsequent memories to be formed [20]. One proposed mechanism for
terminating persistent activity is a strong and brief global inhibitory signal, which
would drive the system from the stable bump state to a stable uniform quiescent
state [9]. In terms of neural field and spiking models, this can be thought of as
momentarily raising the firing threshold of the system, essentially driving it beyond
the saddle-node bifurcation from which the stable bump emerges.
Department of Mathematics, University of Houston, Houston, Texas 77204, USA
(zpkilpat@math.uh.edu). This author is supported by an NSF grant (DMS-1311755).
arXiv:1505.06257v1 [nlin.PS] 23 May 2015
We focus on a scalar neural field model that supports stationary bump solutions
for appropriate choices of parameters and constituent functions [1,10]:
\[
\frac{\partial u(x,t)}{\partial t} = -u(x,t) + \int_{\Omega} w(x-y)\, f(u(y,t))\, dy, \tag{1.1}
\]
where u(x, t) is the total synaptic input arriving at location x at time t, and w(x − y)
describes the strength (amplitude) and polarity (sign) of synaptic connections from
neurons at location y to neurons at location x. We assume w(x) is an even-symmetric
function, w(x) = w(−x), with a bounded integral ∫_Ω w(x) dx over the spatial domain
x ∈ Ω = (−x_∞, x_∞). The nonlinearity f(u) is a firing rate function, which we take to
be the sigmoid [41]
\[
f(u) = \frac{1}{1 + \mathrm{e}^{-\eta (u - \theta)}}, \tag{1.2}
\]
and we also find it useful to take the high gain limit η → ∞, in which case
\[
f(u) = H(u - \theta) = \begin{cases} 1 &: u > \theta, \\ 0 &: u < \theta, \end{cases} \tag{1.3}
\]
allowing for analytical tractability in several of our calculations.
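For readers who wish to experiment with (1.1) directly, the following minimal sketch (in Python; not the code used to produce the figures in this paper) integrates the deterministic model with a forward-Euler step, the Heaviside firing rate (1.3), and a Riemann-sum approximation of the convolution on a truncated domain; all numerical parameter values are illustrative.

import numpy as np

# Minimal sketch: forward-Euler integration of the deterministic neural field (1.1)
# with a Heaviside firing rate (1.3) and a difference-of-Gaussians (Mexican hat) kernel.
# All parameter values are illustrative.

L_dom, N = 10.0, 401                      # half-width of the truncated domain, grid points
x = np.linspace(-L_dom, L_dom, N)
dx = x[1] - x[0]
A_mex, sigma = 0.4, 2.0                   # kernel parameters (A < 1, sigma > 1)
theta = 0.3                               # firing threshold, below theta_c for these parameters
dt, T_end = 0.01, 20.0

def w(z):
    return np.exp(-z**2) - A_mex * np.exp(-z**2 / sigma**2)

W_mat = w(x[:, None] - x[None, :])        # weight matrix w(x_i - x_j)

u = 0.6 * np.exp(-x**2)                   # localized initial condition

for _ in range(int(T_end / dt)):
    f_u = (u > theta).astype(float)       # Heaviside firing rate H(u - theta)
    conv = W_mat @ f_u * dx               # Riemann sum for int w(x - y) f(u(y)) dy
    u = u + dt * (-u + conv)              # forward-Euler step of (1.1)

print("peak synaptic input u(0, T):", u[N // 2])

Raising θ slightly above the critical value discussed below causes the same loop to exhibit the slow collapse of the bump analyzed in section 2.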
Amari was the first to analyze Eq. (1.1) in detail, showing that when f(u) is de-
fined to be a Heaviside function (1.3), the network supports stable stationary bump
solutions when the weight function w(x) is a lateral inhibitory (Mexican hat) distri-
bution satisfying: (i) w(x) > 0 for x ∈ [0, x_0) with w(x_0) = 0; (ii) w(x) < 0 for
x ∈ (x_0, x_∞); (iii) w(x) is decreasing on [0, x_0]; and (iv) w(x) has a unique minimum
on [0, x_∞) at x = x_1 with x_1 > x_0 and w(x) strictly increasing on (x_1, x_∞) [1]. Based
on restrictions (i)-(iv), Amari made use of the integral of the weight function
\[
W(x) \equiv \int_0^x w(y)\, dy \tag{1.4}
\]
to prove some of the main results of his seminal work. For instance, it is clear that
W(0) = 0 and W(−x) = −W(x) based on the above assumptions. Moreover, there
will be a single maximum of the function W(x) on the interval (0, x_∞) given at
x = x_0, i.e. W_max = max_x W(x) = W(x_0), due to conditions (i) and (ii), and
w(x_0) = 0. When θ < W(x_0) there are two bump solutions: one stable and one
unstable (up to translation symmetry), and when θ > W(x_0) there are no bump
solutions to (1.1). When θ = θ_c ≡ W(x_0), there is a single marginally stable bump
solution. It is at this point that the two branches (stable and unstable) of bump
solutions meet and annihilate in a saddle-node bifurcation (Fig. 2.1). Dynamics of
(1.1) for values of θ beyond this saddle-node bifurcation evolve to quasi-stationary
solutions resembling the ghost of the bump at θ_c, lasting for a period of time inversely
related to √|θ − θ_c| [39]. A principled exploration of these dynamics (section 2) is
one of the primary goals of this paper.
As mentioned, the neural field equation (1.1) in the absence of noise has been
analyzed extensively [1,10,14]. We expand upon these previous studies by also ex-
ploring the impact of noise on stationary bump solutions to (1.1) near a saddle-node
bifurcation (section 3). Additive noise is incorporated, so that the evolution of the
neural field is now described by the spatially extended Langevin equation [4,6,25,30]:
\[
du(x,t) = \left[ -u(x,t) + \int_{\Omega} w(x-y)\, f(u(y,t))\, dy \right] dt + \epsilon^{1/2}\, dW(x,t), \tag{1.5}
\]
[Fig. 2.1 graphic: (A) w(|x − y|) versus |x − y|, with the zero crossing w(2a_c) = 0 marked; (B) W(x) versus x, with θ_c at the saddle-node, the two equilibria 2a_s and 2a_u for θ < θ_c, and the absence of equilibria for θ > θ_c marked.]
Fig. 2.1. Saddle-node bifurcation of bumps in (1.1) with a Heaviside firing rate function (1.3).
(A) The difference of Gaussians weight function w(x) = e^{−x²} − A e^{−x²/σ²} has a Mexican hat profile,
since A = 0.4 < 1 and σ = 2 > 1. The critical bump half-width a_c at the saddle-node satisfies
the relation w(2a_c) = 0. (B) The weight function integral (1.4) determines the bump half-widths
a. When θ is below the critical threshold θ_c at the saddle-node, there are two stationary bump
solutions to (1.1): one stable, a_s, and one unstable, a_u. When θ > θ_c, there are zero equilibria, but
the dynamics of (1.1) are slow in the bottleneck near U_c(x).
where the term dW(x, t) is the increment of a spatially varying Wiener process with
spatial correlations defined by ⟨dW(x, t)⟩ = 0 and ⟨dW(x, t) dW(y, s)⟩ = C(x − y)δ(t −
s) dt ds, and ϵ^{1/2} describes the amplitude of the noise, assumed to be weak (ϵ ≪ 1). The
function C(x − y) describes the spatial correlation in each noise increment between
two points x, y ∈ Ω.
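As a concrete illustration of (1.5), the sketch below (Python; illustrative, not the scheme used for the figures) applies an Euler-Maruyama step on the ring Ω = [−π, π] with the cosine kernel and cosine spatial correlations C(x − y) = cos(x − y) used in later sections; the noise amplitude and all other numerical values are assumptions. A Wiener increment with this correlation can be generated from two independent scalar Gaussians, since ⟨dW(x, t) dW(y, t)⟩ = (cos x cos y + sin x sin y) dt = cos(x − y) dt.

import numpy as np

# Minimal sketch: Euler-Maruyama integration of the Langevin neural field (1.5) on the
# ring [-pi, pi], with w(x) = cos(x), Heaviside firing rate, and spatially correlated
# noise C(x - y) = cos(x - y) built from two scalar Gaussian increments per time step.
# The noise amplitude eps_noise and other values are illustrative.

rng = np.random.default_rng(0)
N = 256
x = np.linspace(-np.pi, np.pi, N, endpoint=False)
dx = x[1] - x[0]
theta = 0.95                     # just below theta_c = 1 for the cosine kernel
eps_noise = 0.01                 # weak noise variance scale
dt, T_end = 0.01, 50.0

W_mat = np.cos(x[:, None] - x[None, :])
u = np.sqrt(2.0) * np.cos(x)     # start near the critical bump U_c(x) = sqrt(2) cos(x)

for _ in range(int(T_end / dt)):
    f_u = (u > theta).astype(float)
    drift = -u + W_mat @ f_u * dx
    xi1, xi2 = rng.standard_normal(2)
    dW = np.sqrt(dt) * (xi1 * np.cos(x) + xi2 * np.sin(x))   # correlation cos(x - y)
    u = u + dt * drift + np.sqrt(eps_noise) * dW

print("peak activity max_x u(x, T):", u.max())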
2. Slow bump extinction in the deterministic system. We begin by ex-
amining the dynamics of stationary bump solutions near a saddle-node bifurcation,
where a stable and unstable branch of solutions annihilate. Our initial analysis fo-
cuses on the noise-free case W(x, t) ≡ 0, allowing us to derive an amplitude equation
that approximates the evolution of the bump height. Linearization of bumps in (1.1)
typically reveals that they are marginally stable to translating perturbations, so the
overall stability is typically characterized by the stability to even perturbations that
expand/contract the bump [14]. Our analysis will emphasize the region of parameter
space near where bumps are marginally stable to even perturbations.
2.1. Existence and stability of bumps. We now briefly review existence and
stability results for stationary bump solutions to the neural field equation (1.1). These
results are analogous to those presented in [1,27,40]. For transparency, we focus on
the case of a Heaviside firing rate function (1.3). This allows us to cast bump stability
in terms of a finite dimensional set of equations, focusing on the evolution of the two
edge interfaces of the bump [1,13]. Assuming a stationary solution u(x, t) = U(x),
we find (1.1) requires
\[
U(x) = \int_{\Omega} w(x-y)\, H(U(y) - \theta)\, dy. \tag{2.1}
\]
Given a unimodal bump solution U(x), without loss of generality, we can fix the center
and peak of the bump to be at the origin x = 0. In the case of even-symmetric bumps
U(x) = U(−x) [1], we will have the conditions for the bump half-width a: U(x) > θ
for x ∈ (−a, a), U(x) < θ for x ∈ Ω \ [−a, a], and U(±a) = θ. In this case, (2.1)
becomes
\[
U(x) = \int_{-a}^{a} w(x-y)\, dy = \int_{x-a}^{x+a} w(y)\, dy = \int_{0}^{x+a} w(y)\, dy - \int_{0}^{x-a} w(y)\, dy.
\]
By utilizing the integral function (1.4), we can write the even-symmetric solution
\[
U(x) = W(x+a) - W(x-a). \tag{2.2}
\]
To determine the half-width a, we require the threshold conditions U(±a) = θ of the
solution (2.2) to yield
\[
U(a) = W(2a) = \int_{0}^{2a} w(y)\, dy = \theta.
\]
Note that when θ < W_max = max_x W(x), there will be a stable and an unstable bump
solution to (1.1). When θ = θ_c ≡ W_max, there is a single marginally stable bump
solution U_c(x) to (1.1), as illustrated in Fig. 2.1B. Differentiating W(2a) by its argument
yields W′(2a_c) = w(2a_c) ≡ 0 as an implicit equation for the half-width a_c at
this criticality. Utilizing the notation of Amari condition (i), we have that a_c = x_0/2.
Note, the relation w(2a_c) = 0 is explicitly solvable for a_c for several typical lateral inhibitory
type weight functions. For instance, in the case of the difference of Gaussians
w(x) = e^{−x²} − A e^{−x²/σ²} on x ∈ (−∞, ∞) [1], we have a_c = σ√(ln(1/A))/(2√(σ² − 1))
and θ_c = (√π/2)[erf(2a_c) − Aσ erf(2a_c/σ)]. For the “wizard hat” w(x) = (1 − |x|)e^{−|x|} on
x ∈ (−∞, ∞) [11], we have a_c = 1/2 and θ_c = e^{−1}. For a cosine weight w(x) = cos(x)
on the periodic domain x ∈ [−π, π] [27], we have a_c = π/4 and θ_c = 1.
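These critical values are easy to reproduce numerically. The sketch below (Python, using SciPy root finding and quadrature; a sketch, not the paper's code) solves w(2a_c) = 0 and evaluates θ_c = W(2a_c) for the difference-of-Gaussians kernel with the illustrative parameters of Fig. 2.1, and compares with the closed-form expressions quoted above.

import numpy as np
from scipy.special import erf
from scipy.optimize import brentq
from scipy.integrate import quad

# Minimal sketch: critical half-width and threshold at the saddle-node for the
# difference-of-Gaussians kernel, w(2 a_c) = 0 and theta_c = W(2 a_c).
# A = 0.4 and sigma = 2 are the illustrative values of Fig. 2.1.

A_mex, sigma = 0.4, 2.0

def w(z):
    return np.exp(-z**2) - A_mex * np.exp(-z**2 / sigma**2)

a_c = brentq(lambda a: w(2 * a), 0.1, 3.0)      # w(2a) changes sign on this bracket
theta_c, _ = quad(w, 0.0, 2 * a_c)              # theta_c = W(2 a_c)

# Closed forms quoted in the text, for comparison.
a_c_exact = sigma * np.sqrt(np.log(1 / A_mex)) / (2 * np.sqrt(sigma**2 - 1))
theta_c_exact = 0.5 * np.sqrt(np.pi) * (erf(2 * a_c_exact)
                                        - A_mex * sigma * erf(2 * a_c_exact / sigma))

print("a_c:", a_c, a_c_exact)
print("theta_c:", theta_c, theta_c_exact)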
To characterize the stability of bump solutions to (1.1), we will study the evolution
of small smooth perturbations εψ̄(x, t) (ε ≪ 1) to stationary bumps U(x) by utilizing
the Taylor expansion u(x, t) = U(x) + εψ̄(x, t) + O(ε²). By plugging this expansion
into (1.1) and truncating to O(ε), we can derive an equation whose solutions constitute
the family of eigenfunctions associated with the linearization of (1.1) about the bump
solution U(x). We begin by truncating (1.1) to O(ε), assuming u is given by the above
expansion and that the nonlinearity f(u) is given by the Heaviside function (1.3), so
\[
\frac{\partial \bar{\psi}(x,t)}{\partial t} = -\bar{\psi}(x,t) + \int_{\Omega} w(x-y)\, H'(U(y)-\theta)\, \bar{\psi}(y,t)\, dy, \tag{2.3}
\]
and we can differentiate the Heaviside function, in the sense of distributions, by noting
H(U(x) − θ) = H(x + a) − H(x − a), so
\[
\delta(x+a) - \delta(x-a) = \frac{d H(U(x)-\theta)}{dx} = H'(U(x)-\theta)\, U'(x),
\]
which we can rearrange to find
\[
H'(U(x)-\theta) = \frac{\delta(x+a) - \delta(x-a)}{U'(x)} = \frac{1}{|U'(a)|}\left( \delta(x+a) + \delta(x-a) \right). \tag{2.4}
\]
Upon applying the identity (2.4) to (2.3), we have
\[
\frac{\partial \bar{\psi}(x,t)}{\partial t} = -\bar{\psi}(x,t) + \gamma\left[ w(x+a)\, \bar{\psi}(-a,t) + w(x-a)\, \bar{\psi}(a,t) \right], \tag{2.5}
\]
where γ^{−1} = |U′(a)| = w(0) − w(2a). One class of solutions, such that ψ̄(±a, t) =
ψ̄(±a, 0) = 0, lies in the essential spectrum of the linear operator that defines (2.5).
In this case, ψ̄(x, t) = ψ̄(x, 0)e^{−t}, so perturbations of this type do not contribute
to any instabilities of the stationary bump U(x) [21]. Assuming separable solutions
ψ̄(x, t) = b(t)ψ(x), we can characterize the remaining solutions to (2.5). In this case,
b′(t) = λb(t), so b(t) = e^{λt} and
\[
(\lambda + 1)\psi(x) = \gamma\left[ w(x+a)\psi(-a) + w(x-a)\psi(a) \right]. \tag{2.6}
\]
Solutions to (2.6) that do not satisfy the condition ψ(±a) ≡ 0 can be separated into
two classes: (i) odd, ψ(−a) = −ψ(a), and (ii) even, ψ(−a) = ψ(a). This is due to the
fact that the equation (2.6) implies the function ψ(x) is fully specified by its values
at x = ±a. Thus, we need only concern ourselves with these two points, yielding the
two-dimensional linear system
\[
(\lambda + 1)\psi(a) = \gamma\left[ w(0)\psi(a) + w(2a)\psi(-a) \right], \tag{2.7a}
\]
\[
(\lambda + 1)\psi(-a) = \gamma\left[ w(2a)\psi(a) + w(0)\psi(-a) \right]. \tag{2.7b}
\]
For odd solutions, ψ(−a) = −ψ(a), the eigenvalue is
\[
\lambda_o = -1 + \gamma\left[ w(0) - w(2a) \right] = -1 + \frac{w(0) - w(2a)}{w(0) - w(2a)} = 0,
\]
reflecting the fact that (1.1) is translationally symmetric, so bumps are marginally
stable to perturbations that translate their position. Even solutions, ψ(−a) = ψ(a),
have the associated eigenvalue
\[
\lambda_e = -1 + \gamma\left[ w(0) + w(2a) \right] = -1 + \frac{w(0) + w(2a)}{w(0) - w(2a)} = \frac{2\, w(2a)}{w(0) - w(2a)}.
\]
Thus, when θ < θ_c, the wide bump a_s > a_c will be linearly stable to expanding/contracting
perturbations, since w(2a_s) < 0 due to Amari's condition (ii) [1]. The
narrow bump a_u < a_c is linearly unstable to such perturbations, since w(2a_u) > 0
due to condition (i). When θ = θ_c, we have w(2a_c) = 0, so that λ_e = 0 and
|U′(±a_c)| = w(0).
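The sketch below (Python; illustrative, not from the paper) makes the two branches and their stability explicit: for a threshold below θ_c it locates the narrow and wide half-widths from W(2a) = θ and evaluates λ_e on each.

import numpy as np
from scipy.optimize import brentq
from scipy.integrate import quad

# Minimal sketch: stability of the two bump branches for theta < theta_c, using
# lambda_e = 2 w(2a) / (w(0) - w(2a)). Difference-of-Gaussians kernel with the
# illustrative parameters A = 0.4, sigma = 2; theta = 0.3 is also illustrative.

A_mex, sigma = 0.4, 2.0
w = lambda z: np.exp(-z**2) - A_mex * np.exp(-z**2 / sigma**2)
W = lambda z: quad(w, 0.0, z)[0]

theta = 0.3
a_c = brentq(lambda a: w(2 * a), 0.1, 3.0)             # critical half-width, w(2 a_c) = 0
a_u = brentq(lambda a: W(2 * a) - theta, 1e-3, a_c)    # narrow (unstable) branch
a_s = brentq(lambda a: W(2 * a) - theta, a_c, 5.0)     # wide (stable) branch

lam_e = lambda a: 2 * w(2 * a) / (w(0.0) - w(2 * a))
print("lambda_e on wide branch:  ", lam_e(a_s))        # < 0, linearly stable
print("lambda_e on narrow branch:", lam_e(a_u))        # > 0, linearly unstable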
In anticipation of our derivations of amplitude equations, we define the eigenfunctions
at the criticality θ = θ_c. Utilizing the fact that |U′(±a_c)| = w(0) and the linear
system (2.7a), we have that the odd eigenfunction at the bifurcation is
\[
\psi_o(x) = \frac{1}{w(0)}\left[ w(x - a_c) - w(x + a_c) \right], \tag{2.8}
\]
and the even eigenfunction is
\[
\psi_e(x) = \frac{1}{w(0)}\left[ w(x - a_c) + w(x + a_c) \right]. \tag{2.9}
\]
Note, this specifies that ψ_e(±a_c) = ψ_o(a_c) = −ψ_o(−a_c) = 1. Furthermore, we will find
it useful to compute the derivatives
\[
\psi_o'(x) = \frac{1}{w(0)}\left[ w'(x - a_c) - w'(x + a_c) \right],
\]
which is even (ψ_o′(−a_c) = ψ_o′(a_c)), and
\[
\psi_e'(x) = \frac{1}{w(0)}\left[ w'(x - a_c) + w'(x + a_c) \right],
\]
which is odd (ψ_e′(−a_c) = −ψ_e′(a_c)). Lastly, we note that we will utilize the fact
that, for even symmetric functions, w′(0) = 0, so ψ_o′(±a_c) = ψ_e′(−a_c) = −ψ_e′(a_c) =
−w′(2a_c)/w(0).
2.2. Saddle-node bifurcation of bumps. Motivated by the above linear sta-
bility analysis, we now carry out a nonlinear analysis in the vicinity of the saddle-node
bifurcation from which the stable and unstable branches of stationary bumps emanate.
Specifically, we will perform a perturbation expansion about the bump solution U_c(x)
at the critical threshold value θ_c. We therefore define θ = θ_c + µε², ε ≪ 1, so that µ is
a bifurcation parameter determining the distance of θ from the saddle-node bifurcation
point. As demonstrated above, the linear stability problem for U_c(x) reveals two
zero eigenvalues λ_o = λ_e = 0 associated with the odd ψ_o and even ψ_e eigenfunctions
(2.8) and (2.9), respectively. Our analysis employs the ansatz
\[
u(x,t) = U_c(x) + \varepsilon A_e(\tau)\psi_e(x) + \varepsilon^2 A_o(t)\psi_o(x) + \varepsilon^2 u_2(x,\tau) + \mathcal{O}(\varepsilon^3), \tag{2.10}
\]
where τ = εt is a temporal rescaling that reflects the vicinity of the system to a
saddle-node bifurcation associated with the expanding/contracting eigenmode ψ_e [39].
Similar expansions have been utilized in the analysis of front bifurcations in reaction-
diffusion systems [3,37] and neural field models [5]. Upon plugging (2.10) into (1.1)
and expanding in orders of ε, we find that at O(1), we simply have the stationary
bump equation (2.1) at θ = θ_c. Proceeding to O(ε), we find
\[
0 = A_e(\tau)\left[ \int_{\Omega} w(x-y)\, H'(U_c(y) - \theta_c)\, \psi_e(y)\, dy - \psi_e(x) \right],
\]
so we can use (2.4) to write
\[
0 = A_e(\tau)\left[ \frac{1}{w(0)}\big( w(x+a)\psi_e(-a) + w(x-a)\psi_e(a) \big) - \psi_e(x) \right]. \tag{2.11}
\]
The right hand side of (2.11) vanishes due to the formula (2.9) for the even eigenfunction
associated with the stability of the bump U_c(x). At O(ε²), we obtain an
equation for the higher order term u_2:
\[
\mathcal{L}[A_o \psi_o + u_2] = A_e' \psi_e + A_o' \psi_o + \mu \int_{\Omega} w(x-y)\, H'(U_c(y) - \theta_c)\, dy - \frac{A_e^2}{2} \int_{\Omega} w(x-y)\, H''(U_c(y) - \theta_c)\, \psi_e(y)^2\, dy, \tag{2.12}
\]
where L is the non-self-adjoint linear operator
\[
\mathcal{L} u(x) = -u(x) + \int_{\Omega} w(x-y)\, H'(U_c(y) - \theta_c)\, u(y)\, dy. \tag{2.13}
\]
Both ψ_o(x) and ψ_e(x) lie in the nullspace N(L), as demonstrated in the previous
section by identifying solutions to (2.3). Thus, the ψ_o terms on the left hand side of
(2.12) vanish. We can ensure a bounded solution to (2.12) exists by requiring that the
right hand side be orthogonal to all elements of the nullspace of the adjoint operator
L^∗. The adjoint is defined with respect to the L² inner product
\[
\langle \mathcal{L} u, v \rangle = \int_{\Omega} \left[ \mathcal{L} u(x) \right] v(x)\, dx = \int_{\Omega} u(x) \left[ \mathcal{L}^* v(x) \right] dx = \langle u, \mathcal{L}^* v \rangle. \tag{2.14}
\]
Thus, we find
\[
\mathcal{L}^* v(x) = -v(x) + H'(U_c(x) - \theta_c) \int_{\Omega} w(x-y)\, v(y)\, dy, \tag{2.15}
\]
defined in the sense of distributions under the L² inner product given in (2.14). It is
straightforward to show that ϕ_o := H'(U_c − θ_c)ψ_o and ϕ_e := H'(U_c − θ_c)ψ_e lie in the
nullspace of L^∗. Components of N(L^∗) are defined by the equation
\[
v(x) = H'(U_c(x) - \theta_c) \int_{\Omega} w(x-y)\, v(y)\, dy. \tag{2.16}
\]
To show ϕ_o, ϕ_e ∈ N(L^∗), we simply plug these formulas into (2.16) to find
\[
H'(U_c(x) - \theta_c)\, \psi_j(x) = H'(U_c(x) - \theta_c) \int_{\Omega} w(x-y)\, H'(U_c(y) - \theta_c)\, \psi_j(y)\, dy,
\]
for j = o, e, which is true due to the fact that ψ_o and ψ_e lie in N(L). Thus, we will
impose solvability of (2.12) by taking the inner product of both sides of the equation
with ϕ_o := H'(U_c − θ_c)ψ_o and ϕ_e := H'(U_c − θ_c)ψ_e, yielding
\[
0 = \left\langle \varphi_j,\; A_e' \psi_e + A_o' \psi_o + \mu\, w * H'(U_c - \theta_c) - \frac{A_e^2}{2}\, w * \left[ H''(U_c - \theta_c)\, \psi_e^2 \right] \right\rangle, \tag{2.17}
\]
for j = o, e, where we have defined the convolution w ∗ F = ∫_Ω w(x − y)F(y) dy. Due to
odd symmetry, terms of the form ⟨H′(U_c − θ_c)ψ_j, ψ_k⟩, j ≠ k, vanish. In a similar way,
the term ⟨H′(U_c − θ_c)ψ_o, w ∗ H′(U_c − θ_c)⟩ vanishes due to odd symmetry. Isolating the
temporal derivatives A_j′ in (2.17), we find that the amplitudes A_j (j = o, e) satisfy
the following fast-slow system of nonlinear differential equations:
\[
\frac{dA_o}{dt} = \frac{\left\langle \varphi_o, w * \left[ H''(U_c - \theta_c)\, \psi_e^2 \right] \right\rangle}{2 \left\langle \varphi_o, \psi_o \right\rangle} A_e(\tau)^2, \tag{2.18a}
\]
\[
\frac{dA_e}{d\tau} = -\mu \frac{\left\langle \varphi_e, w * H'(U_c - \theta_c) \right\rangle}{\left\langle \varphi_e, \psi_e \right\rangle} + \frac{\left\langle \varphi_e, w * \left[ H''(U_c - \theta_c)\, \psi_e^2 \right] \right\rangle}{2 \left\langle \varphi_e, \psi_e \right\rangle} A_e(\tau)^2. \tag{2.18b}
\]
With the system (2.18) in hand, we can determine the long term dynamics of
the amplitudes as the bifurcation parameter µ is varied. We begin by computing the
constituent components of the right hand sides, using properties of the eigenfunctions
ψ_o and ψ_e. To start, we will compute the second derivative H″(U_c − θ_c), which appears
in the coefficient of the quadratic term A_e². Differentiating the function H(U_c(x) − θ_c)
twice with respect to x, using the chain and product rules, we find the following formula:
\[
\frac{d^2 H(U_c(x) - \theta_c)}{dx^2} = (U_c'(x))^2 H''(U_c(x) - \theta_c) + U_c''(x)\, H'(U_c(x) - \theta_c)
= (U_c'(x))^2 H''(U_c(x) - \theta_c) + \frac{U_c''(x)}{U_c'(x)} \frac{d H(U_c(x) - \theta_c)}{dx},
\]
where we have applied the identity (2.4) for the first derivative H′(U − θ). Rearranging
terms, we find that
\[
H''(U_c - \theta_c) = \frac{1}{U_c'(x)^2} \frac{d^2 H(U_c(x) - \theta_c)}{dx^2} - \frac{U_c''(x)}{|U_c'(a_c)|^3} \left[ \delta(x + a_c) + \delta(x - a_c) \right]. \tag{2.19}
\]
We can further specify the formula (2.19) by differentiating dH(U_c − θ_c)/dx = δ(x + a_c) −
δ(x − a_c) with respect to x to yield
\[
\frac{d^2 H(U_c - \theta_c)}{dx^2} = \delta'(x + a_c) - \delta'(x - a_c),
\]
where δ′(x − x_0) is defined, in the sense of distributions, for any smooth function F(x)
by using integration by parts [26]:
\[
\int_{\Omega} \delta'(x - x_0)\, F(x)\, dx = -\int_{\Omega} \delta(x - x_0)\, F'(x)\, dx = -F'(x_0).
\]
Furthermore, we note that the spatial derivative |U_c′(±a_c)| = w(0) and U_c″(x) =
w′(x + a_c) − w′(x − a_c). Even symmetry of w(x) mandates that w′(−x) = −w′(x)
and w′(0) = 0, so U_c″(±a_c) = w′(2a_c). Thus, we can at last write
\[
H''(U_c - \theta_c) = \frac{\delta'(x + a_c) - \delta'(x - a_c)}{w(0)^2} - \frac{w'(2a_c)\left[ \delta(x + a_c) + \delta(x - a_c) \right]}{w(0)^3}. \tag{2.20}
\]
Computing the inner products in (2.18) then simply amounts to evaluating the integrals
in the sense of distributions. First, we use (2.4) to note
\[
\langle \varphi_j, \psi_j \rangle = \int_{\Omega} \psi_j(x)^2 H'(U_c(x) - \theta_c)\, dx = \gamma \left[ \psi_j(a_c)^2 + \psi_j(-a_c)^2 \right] = \frac{2}{w(0)},
\]
for j = o, e, since ψ_e(±a_c) = ψ_o(a_c) = −ψ_o(−a_c) = 1. Furthermore,
\[
\langle \varphi_e, w * H'(U_c - \theta_c) \rangle = \int_{\Omega} \int_{\Omega} w(x-y)\, \psi_e(x)\, H'(U_c(x) - \theta_c)\, H'(U_c(y) - \theta_c)\, dy\, dx
\]
\[
= \gamma^2 \int_{\Omega} \Bigg[ \sum_{a = \pm a_c} w(x+a) \Bigg] \psi_e(x) \Bigg[ \sum_{a = \pm a_c} \delta(x+a) \Bigg] dx
= \gamma^2 \left[ \psi_e(a_c) + \psi_e(-a_c) \right] \cdot \left[ w(0) + w(2a_c) \right] = \frac{2}{w(0)}, \tag{2.21}
\]
where we have utilized ψ_e(±a_c) = 1 and w(2a_c) ≡ 0. Finally, we compute the
quadratic terms using the identity (2.20), starting with
\[
\langle \varphi_o, w * [H''(U_c - \theta_c)\, \psi_e^2] \rangle = \int_{\Omega} \int_{\Omega} w(x-y)\, \varphi_o(x)\, H''(U_c(y) - \theta_c)\, \psi_e(y)^2\, dy\, dx
= \gamma \sum_{a = \pm a_c} \psi_o(a) \int_{\Omega} w(a - y)\, H''(U_c(y) - \theta_c)\, \psi_e(y)^2\, dy, \tag{2.22}
\]
and we note that the individual terms under the integral, arising from the sum defining (2.20), are
\[
\int_{\Omega} w(-a_c - y)\, \delta'(y + a_c)\, \psi_e(y)^2\, dy = w'(0)\psi_e(-a_c)^2 - 2 w(0)\psi_e'(-a_c)\psi_e(-a_c) = 2 w'(2a_c),
\]
\[
\int_{\Omega} w(a_c - y)\, \delta'(y + a_c)\, \psi_e(y)^2\, dy = w'(2a_c)\psi_e(-a_c)^2 - 2 w(2a_c)\psi_e'(-a_c)\psi_e(-a_c) = w'(2a_c),
\]
\[
\int_{\Omega} w(-a_c - y)\, \delta'(y - a_c)\, \psi_e(y)^2\, dy = -w'(2a_c)\psi_e(a_c)^2 - 2 w(2a_c)\psi_e'(a_c)\psi_e(a_c) = -w'(2a_c),
\]
\[
\int_{\Omega} w(a_c - y)\, \delta'(y - a_c)\, \psi_e(y)^2\, dy = w'(0)\psi_e(a_c)^2 - 2 w(0)\psi_e'(a_c)\psi_e(a_c) = -2 w'(2a_c),
\]
for the terms involving the distributional derivative δ′(x − x_0), whereas the terms
involving δ(x − x_0) are
\[
\int_{\Omega} w(-a_c - y)\, \delta(y + a_c)\, \psi_e(y)^2\, dy = w(0)\psi_e(-a_c)^2 = w(0),
\]
\[
\int_{\Omega} w(a_c - y)\, \delta(y + a_c)\, \psi_e(y)^2\, dy = w(2a_c)\psi_e(-a_c)^2 = 0,
\]
\[
\int_{\Omega} w(-a_c - y)\, \delta(y - a_c)\, \psi_e(y)^2\, dy = w(2a_c)\psi_e(a_c)^2 = 0,
\]
\[
\int_{\Omega} w(a_c - y)\, \delta(y - a_c)\, \psi_e(y)^2\, dy = w(0)\psi_e(a_c)^2 = w(0).
\]
Thus, each integral term satisfies
\[
\int_{\Omega} w(-a_c - y)\, H''(U_c(y) - \theta_c)\, \psi_e(y)^2\, dy = \frac{2 w'(2a_c)}{w(0)^2}, \tag{2.23}
\]
\[
\int_{\Omega} w(a_c - y)\, H''(U_c(y) - \theta_c)\, \psi_e(y)^2\, dy = \frac{2 w'(2a_c)}{w(0)^2}. \tag{2.24}
\]
Finally, using the fact that ψ_o(a_c) = −ψ_o(−a_c) = 1, we find that the two terms in the
sum of (2.22) cancel and the integral vanishes. Thus, ⟨ϕ_o, w ∗ [H″(U_c − θ_c)ψ_e²]⟩ = 0,
so A_o(t) ≡ Ā_o is constant. On the other hand, computing the quadratic coefficient in
the equation for A_e, we have
\[
\langle \varphi_e, w * [H''(U_c - \theta_c)\, \psi_e^2] \rangle = \int_{\Omega} \int_{\Omega} w(x-y)\, \varphi_e(x)\, H''(U_c(y) - \theta_c)\, \psi_e(y)^2\, dy\, dx
= \gamma \sum_{a = \pm a_c} \psi_e(a) \int_{\Omega} w(a - y)\, H''(U_c(y) - \theta_c)\, \psi_e(y)^2\, dy. \tag{2.25}
\]
The integrals in (2.25) are identical to those in (2.22), so it is straightforward to
compute, using (2.23) and (2.24), that
\[
\langle \varphi_e, w * [H''(U_c - \theta_c)\, \psi_e^2] \rangle = \gamma \left[ \frac{2 w'(2a_c)}{w(0)^2} + \frac{2 w'(2a_c)}{w(0)^2} \right] = \frac{4 w'(2a_c)}{w(0)^3}.
\]
Thus, we can at last compute all the terms in (2.18), specifying that
\[
\frac{dA_o}{dt} = 0, \tag{2.26a}
\]
\[
\frac{dA_e}{d\tau} = -\mu - \frac{|w'(2a_c)|}{w(0)^2} A_e(\tau)^2, \tag{2.26b}
\]
where we have noted the fact that w′(2a_c) < 0 due to Amari's conditions (iii) and
(iv) on the weight function w(x) [1].
Equation (2.26a) reflects the translational symmetry of the original neural field
equation (1.1), so bumps are neutrally stable to translating perturbations ψ_o regardless
of the bifurcation parameter µ. On the other hand, as the bifurcation parameter
µ is changed, the dynamics of the even eigenmode ψ_e reflect the relative distance to
the saddle-node bifurcation, at which point bumps are marginally stable to expanding/contracting
perturbations. When µ < 0, there are two fixed points of equation
(2.26b) at A_e = ±w(0)√|µ/w′(2a_c)|, corresponding to the pair of emerging stationary
bump solutions which are wider (+) and narrower (−) than the critical bump U_c.
As expected, the wide bump is linearly stable, since a linearization of (2.26b) yields
λ_+ = −√|µ · w′(2a_c)|/w(0) < 0, and the narrow bump is linearly unstable, since
λ_− = +√|µ · w′(2a_c)|/w(0) > 0 [1,14]. Crossing through the subcritical saddle-node
bifurcation, we find that for µ ≡ 0, there is a single fixed point A_e ≡ 0, which is
marginally stable, since λ_0 = 0.
Lastly, note when µ > 0, there are no fixed points of the differential equation
(2.26b). However, starting at the initial condition A_e(0) = 0 (correspondingly
u(x, 0) = U_c(x)), we find that the dynamics of the amplitude A_e(τ) are strongly determined
by the ghost of the fixed point at A_e = 0 [39]. Note in Fig. 2.2A that the
transient bump retains a shape much like that of the critical bump for an appreciable
period of time before extinguishing. Trajectories of the full system (1.1) evolve more
slowly when the distance to the bifurcation |θ − θ_c| = |µ|ε² is smaller. Solving for
A_e(τ) in this specific case and reverting to the original time coordinate t = τ/ε, we
find
\[
A_e(t) = -\frac{w(0)\sqrt{\mu}}{\sqrt{|w'(2a_c)|}} \tan\!\left( \varepsilon \sqrt{\mu\, |w'(2a_c)|}\; t / w(0) \right). \tag{2.27}
\]
Thus, the residence time t_b in the bottleneck, or neighborhood of the ghost of the fixed
point A_e = 0, is given by the amount of time it takes for A_e(t) to traverse to some set
value. Of course, this is dependent on the bifurcation parameter µ. For illustration,
we examine how long it takes until A_e(t_b) = −1. Using the formula (2.27), it is
straightforward to find that
\[
t_b = \frac{w(0)}{\varepsilon \sqrt{\mu\, |w'(2a_c)|}} \tan^{-1}\!\left( \frac{\sqrt{|w'(2a_c)|}}{w(0)\sqrt{\mu}} \right). \tag{2.28}
\]
We compare this formula to the results of numerical simulations in Fig. 2.2B, utilizing
the difference of Gaussians weight function w(x) = e^{−x²} − A e^{−x²/σ²} on x ∈ (−∞, ∞).
Comparisons are made by noting that when A_e(t_b) = −1, then u(x, t) ≈ U_c(x) −
εψ_e(x), so that the peak of the activity profile will be
\[
u(0, t_b) \approx U_c(0) - \varepsilon \psi_e(0) = W(a_c) - W(-a_c) - \frac{2 w(a_c)\varepsilon}{w(0)}.
\]
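A quick numerical check of (2.28) is straightforward. The sketch below (Python; illustrative) integrates the amplitude equation (2.26b) in the original time coordinate and records when A_e crosses −1, using the cosine-kernel constants w(0) = 1 and |w′(2a_c)| = 1 from the example treated next, with µ = 1.

import numpy as np

# Minimal sketch: compare the bottleneck time formula (2.28) with direct forward-Euler
# integration of (2.26b), dA_e/dtau = -mu - |w'(2a_c)|/w(0)^2 * A_e^2, written in the
# original time t (dtau = eps dt). Constants correspond to the cosine kernel example.

w0, wp, mu = 1.0, 1.0, 1.0        # w(0), |w'(2 a_c)|, mu

def tb_theory(eps):
    return w0 / (eps * np.sqrt(mu * wp)) * np.arctan(np.sqrt(wp) / (w0 * np.sqrt(mu)))

def tb_numeric(eps, dt=1e-4):
    Ae, t = 0.0, 0.0
    while Ae > -1.0:                                   # exit of the bottleneck at A_e = -1
        Ae += dt * eps * (-mu - wp / w0**2 * Ae**2)    # dA_e/dt = eps * (dA_e/dtau)
        t += dt
    return t

for eps in (0.2, 0.1, 0.05):
    print(eps, tb_theory(eps), tb_numeric(eps))        # for these constants, both give pi/(4 eps)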
[Fig. 2.2 graphic: (B)-(D) u(0, t) versus t, with the levels U_c(0) and U_c(0) − εψ_e(0) and the bottleneck duration t_b marked, trajectories for ε = 0.2, 0.1, 0.07, and t_b versus ε.]
Fig. 2.2. Slow passage of bumps on x ∈ (−∞, ∞) when w(x) = e^{−x²} − A e^{−x²/σ²}. (A) Slow
passage of a transient bump by the ghost of the critical solution U_c(x) when θ = θ_c + ε² for ε = 0.1
(µ = 1), A = 0.4, and σ = 2. (B) The peak of the bump u(0, t) slowly decreases in amplitude until
breaking down quickly in the vicinity of A_e(t) = −1. Note the theoretical formula for the amplitude
(solid line) given by (2.27) matches the numerical simulation (dashed line) in the slow passage
region. (C) The amplitude of the even mode A_e(t) slowly decreases with time. The duration of the
bottleneck increases as the distance to the bifurcation is decreased by reducing ε. (D) Comparison
of the theory (solid) given by (2.28) to the numerically computed (dots) duration in the bottleneck
(the crossing A_e(t_b) = −1).
[Fig. 2.3 graphic: (A) u(0, t) versus t for ε = 0.2, 0.1, 0.05; (B) t_b versus ε.]
Fig. 2.3. Slow passage of a bump on x ∈ [−π, π] for a cosine weight function w(x) = cos(x).
(A) The amplitude of the even mode A_e(t) slowly decreases with time. Numerical simulations (dashed
lines) of (1.1) are compared to the trajectory √2(1 − ε tan(εt)). (B) The duration of the bottleneck
increases as the distance to the bifurcation is decreased. Simulations (dots) are well fit by the theory
t_b = π/[4ε].
Notice in Fig. 2.2C,D that, as predicted, the time spent in the bottleneck increases
as the amplitude of the small parameter ε is decreased. The attracting impact of
the ghost is stronger when the parameters of the system lie closer to the bifurcation.
For further comparison, we consider the case w(x) = cos(x) in Fig. 2.3. In this case
the constituent functions are a_c = π/4, w(0) = 1, and w′(2a_c) = −1. Furthermore, by
setting µ = 1, the formulas for the amplitude (2.27) and residence time (2.28) simplify
considerably to A_e(t) = −tan(εt) and t_b = π/[4ε].
2.3. Amplitude equations for smooth nonlinearities. Our nonlinear analysis
in the case of Heaviside nonlinearities f(u) ≡ H(u − θ) made extensive use of the
specific form of the distributional derivatives. Inner products with these functions
lead to dynamical equations focused on a finite number of discrete points in space,
rather than over the spatial continuum x ∈ Ω. Here, we show it is straightforward
to extend this analysis to the case of arbitrary smooth nonlinearities f(u). There
are several detailed analyses of stationary bumps in neural fields with smooth firing
rates, showing a similar bifurcation structure to that presented in Fig. 2.1: a stable
and an unstable branch of bump solutions annihilate in a saddle-node bifurcation as
the threshold of the firing rate function is increased. We refrain from such a detailed
analysis here and refer the reader to these works [12,15,27,29,31,40]. Again, we define
θ = θ_c + µε², ε ≪ 1, so µ determines the distance of θ from the bifurcation and on
which side of θ_c it lies. Following our previous analysis, we utilize the ansatz (2.10)
and rescale time τ = εt. In this case, ψ_o(x) and ψ_e(x) will still be the odd and even
eigenmodes associated with the linear stability of stationary bump solutions to (1.1).
At the criticality θ ≡ θ_c, their associated eigenvalues will be λ_o = λ_e ≡ 0, as in the
case of Heaviside firing rates [40]. Expanding (1.1) in orders of ε using the ansatz
(2.10) yields at O(ε):
\[
0 = A_e(\tau)\left[ \int_{\Omega} w(x-y)\, f'(U_c(y))\, \psi_e(y)\, dy - \psi_e(x) \right]. \tag{2.29}
\]
The right hand side of (2.29) will vanish as long as ψ_e(x) lies in the null space of the
non-self-adjoint linear operator
\[
\mathcal{L} u(x) = -u(x) + \int_{\Omega} w(x-y)\, f'(U_c(y))\, u(y)\, dy, \tag{2.30}
\]
defining the linear stability of the critical bump solution to the stationary equation
U_c(x) = ∫_Ω w(x − y)f(U_c(y)) dy. Of course, this condition must be satisfied for the
system (1.1) to lie at a saddle-node bifurcation at θ ≡ θ_c. At O(ε²), the equation for
the higher order term u_2 is
\[
\mathcal{L}[A_o \psi_o + u_2] = A_e' \psi_e + A_o' \psi_o + \mu \int_{\Omega} w(x-y)\, f'(U_c(y))\, dy - \frac{A_e^2}{2} \int_{\Omega} w(x-y)\, f''(U_c(y))\, \psi_e(y)^2\, dy, \tag{2.31}
\]
where L is the linear operator (2.30). The ψ_o terms on the left of (2.31) vanish since
ψ_o ∈ N(L). We can show that ϕ_o := f'(U_c)ψ_o and ϕ_e := f'(U_c)ψ_e lie in the nullspace
of the adjoint operator, N(L^∗), where
\[
\mathcal{L}^* v(x) = -v(x) + f'(U_c(x)) \int_{\Omega} w(x-y)\, v(y)\, dy
\]
under the L² inner product (2.14). Elements of N(L^∗) satisfy the equation
\[
v(x) = f'(U_c(x)) \int_{\Omega} w(x-y)\, v(y)\, dy. \tag{2.32}
\]
Plugging the formulas for ϕ_o and ϕ_e into (2.32), we find
\[
f'(U_c(x))\, \psi_j(x) = f'(U_c(x)) \int_{\Omega} w(x-y)\, f'(U_c(y))\, \psi_j(y)\, dy,
\]
for j = o, e, which must be satisfied since ψ_o, ψ_e ∈ N(L). Imposing solvability of
(2.31), we find that
\[
0 = \left\langle f'(U_c)\psi_j,\; A_e'(\tau)\psi_e + A_o'(t)\psi_o + \mu\, w * f'(U_c) - \frac{A_e^2}{2}\, w * \left[ f''(U_c)\psi_e^2 \right] \right\rangle,
\]
for j = o, e. After canceling odd terms and isolating the derivatives A_j', we find the
amplitudes A_j satisfy the system:
\[
\frac{dA_o}{dt} = \frac{\langle \varphi_o, w * [f''(U_c)\psi_e^2] \rangle}{2 \langle \varphi_o, \psi_o \rangle} A_e(\tau)^2, \tag{2.33a}
\]
\[
\frac{dA_e}{d\tau} = -\mu \frac{\langle \varphi_e, w * f'(U_c) \rangle}{\langle \varphi_e, \psi_e \rangle} + \frac{\langle \varphi_e, w * [f''(U_c)\psi_e^2] \rangle}{2 \langle \varphi_e, \psi_e \rangle} A_e(\tau)^2. \tag{2.33b}
\]
We can derive the coefficients in the system (2.33) by computing the inner products
therein. To do so, we must choose a specific nonlinearity, such as the sigmoid
(1.2), and a weight kernel. For illustration, we consider the cosine kernel w(x) on
the ring x ∈ Ω = [−π, π] with periodic boundaries. As shown in previous studies,
the bump solution is U_c(x) = A_c cos x, while the eigenmodes are ψ_o(x) = sin(x) and
ψ_e(x) = cos(x) [22,27,40]. Since Lψ_j ≡ 0 for j = o, e, this means
\[
\sin(x) = \int_{-\pi}^{\pi} \cos(x-y)\, f'(A_c \cos(y))\, \sin(y)\, dy = \sin x \int_{-\pi}^{\pi} \sin^2(y)\, f'(A_c \cos y)\, dy,
\]
where we have used cos(x − y) = cos x cos y + sin x sin y, and
\[
\cos(x) = \int_{-\pi}^{\pi} \cos(x-y)\, f'(A_c \cos(y))\, \cos(y)\, dy = \cos x \int_{-\pi}^{\pi} \cos^2(y)\, f'(A_c \cos y)\, dy,
\]
so that we can write
\[
\int_{-\pi}^{\pi} \sin^2(y)\, f'(A_c \cos y)\, dy \equiv 1, \qquad \int_{-\pi}^{\pi} \cos^2(y)\, f'(A_c \cos y)\, dy \equiv 1. \tag{2.34}
\]
The identities (2.34) allow us to compute
\[
\langle \varphi_o, \psi_o \rangle = \int_{-\pi}^{\pi} f'(A_c \cos(y))\, \sin(y)^2\, dy = 1,
\]
and
\[
\langle \varphi_e, \psi_e \rangle = \int_{-\pi}^{\pi} f'(A_c \cos(y))\, \cos(y)^2\, dy = 1.
\]
Furthermore,
\[
\langle \varphi_o, w * [f''(U_c)\psi_e^2] \rangle = \int_{-\pi}^{\pi} f''(U_c(y))\, \psi_e(y)^2 \int_{-\pi}^{\pi} \cos(x-y)\, f'(A_c \cos(x))\, \sin(x)\, dx\, dy
= \int_{-\pi}^{\pi} f''(U_c(y))\, \cos(y)^2 \sin(y)\, dy = 0, \tag{2.35}
\]
where the last equality holds due to the integrand being odd. Thus, the equation
(2.33a) reduces to A_o'(t) = 0, so A_o(t) ≡ Ā_o. Now, we can calculate the coefficients
of the A_e amplitude equation. First, by utilizing the fact that ∫_{−π}^{π} w(x − y)ϕ_e(y) dy =
ψ_e(x), we can compute
\[
\langle \varphi_e, w * f'(U_c) \rangle = \int_{-\pi}^{\pi} f'(A_c \cos(x))\, \cos(x)\, dx = \langle \varphi_e, 1 \rangle. \tag{2.36}
\]
Lastly, we can simplify the integrals in the quadratic term by again making use of the
identity ∫_{−π}^{π} w(x − y)ϕ_e(y) dy = ψ_e(x), so
\[
\langle \varphi_e, w * [f''(U_c)\psi_e^2] \rangle = \int_{-\pi}^{\pi} f''(A_c \cos(x))\, \cos^3(x)\, dx = \langle f''(U_c), \psi_e^3 \rangle, \tag{2.37}
\]
so we can simplify (2.33b) to
\[
\frac{dA_e}{d\tau} = -\mu \langle \varphi_e, 1 \rangle + \frac{1}{2} \langle f''(U_c), \psi_e^3 \rangle\, A_e(\tau)^2. \tag{2.38}
\]
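The coefficients in (2.38) can be computed numerically once A_c and θ_c are known. The sketch below (Python; a sketch under stated assumptions, not the paper's code) locates the saddle-node for the sigmoid (1.2) on the ring by solving the existence condition A_c = ∫ cos(y) f(A_c cos y) dy together with the marginal-stability condition (2.34), ∫ cos²(y) f′(A_c cos y) dy = 1, and then evaluates ⟨ϕ_e, 1⟩ and (1/2)⟨f″(U_c), ψ_e³⟩ by quadrature. The gain η = 20 and the initial guess (near the high-gain values A_c = √2, θ_c = 1) are illustrative choices.

import numpy as np
from scipy.integrate import quad
from scipy.optimize import fsolve

# Minimal sketch: coefficients of the amplitude equation (2.38) for the ring model with
# w(x) = cos(x) and the sigmoid firing rate (1.2). The gain eta = 20 is illustrative.

eta = 20.0

def f(u, th):          # sigmoid (1.2)
    return 1.0 / (1.0 + np.exp(-eta * (u - th)))

def fp(u, th):         # f'(u) = eta f (1 - f)
    F = f(u, th)
    return eta * F * (1.0 - F)

def fpp(u, th):        # f''(u) = eta^2 f (1 - f)(1 - 2f)
    F = f(u, th)
    return eta**2 * F * (1.0 - F) * (1.0 - 2.0 * F)

def saddle_node(z):
    Ac, th = z
    g1 = Ac - quad(lambda y: np.cos(y) * f(Ac * np.cos(y), th), -np.pi, np.pi)[0]
    g2 = 1.0 - quad(lambda y: np.cos(y)**2 * fp(Ac * np.cos(y), th), -np.pi, np.pi)[0]
    return [g1, g2]

Ac, theta_c = fsolve(saddle_node, x0=[1.4, 0.95])      # guess near the high-gain limit

lin_coef = quad(lambda y: np.cos(y) * fp(Ac * np.cos(y), theta_c), -np.pi, np.pi)[0]
quad_coef = 0.5 * quad(lambda y: np.cos(y)**3 * fpp(Ac * np.cos(y), theta_c),
                       -np.pi, np.pi)[0]

print("A_c, theta_c:", Ac, theta_c)
print("dA_e/dtau = -mu *", lin_coef, "+", quad_coef, "* A_e^2")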
3. Stochastic neural fields near the saddle-node. We now study the impact
of stochastic forcing near the saddle-node bifurcation of bumps. Our analysis utilizes
the spatially extended Langevin equation with additive noise (1.5). Guided by our
analysis of the deterministic system (1.1), we will utilize an expansion in the small
parameter ε, which determines the distance of the system from the saddle-node. To
formally derive stochastic amplitude equations, we must specify the scaling of the
noise amplitude as it relates to the small parameter ε, as this will determine the
level of the perturbation hierarchy wherein the noise term dW will appear. We opt for
the scaling ϵ^{1/2} = ε^{5/2}, as this introduces a nontrivial interaction between the nonlinear
amplitude equation for A_e and the noise, but random perturbations do not shift the
location of the bifurcation as in [2,25].
3.1. Stochastic amplitude equation for bumps. Motivated by our quanti-
tative analysis in the noise-free case, we rescale time in the stochastic term of (1.5)
using τ = εt, so
\[
du(x,t) = \left[ -u(x,t) + \int_{\Omega} w(x-y)\, f(u(y,t))\, dy \right] dt + \varepsilon^{2}\, d\hat{W}(x,\tau), \tag{3.1}
\]
where dŴ(x, τ) := ε^{1/2} dW(x, ε^{−1}τ) is a rescaled version of the Wiener process dW
that is independent of ε [19]. We then apply the ansatz (2.10) once again and take
Heaviside firing rate functions (1.3), thus finding (2.11) at O(ε). The O(ε) equation
is satisfied due to the fact that ψ_e ∈ N(L), where L is the linear operator given by
(2.30). Finally, proceeding to O(ε²), we find
\[
\mathcal{L}[A_o \psi_o + u_2]\, dt = dA_e\, \psi_e + dA_o\, \psi_o + \mu \left[ \int_{\Omega} w(x-y)\, H'(U_c(y) - \theta_c)\, dy \right] dt - \frac{A_e^2}{2} \left[ \int_{\Omega} w(x-y)\, H''(U_c(y) - \theta_c)\, \psi_e(y)^2\, dy \right] dt + d\hat{W}. \tag{3.2}
\]
As before, the ψ_o terms on the left vanish since Lψ_o ≡ 0, and we ensure a bounded
solution to (3.2) exists by requiring the inhomogeneous part is orthogonal to ϕ_o, ϕ_e ∈
N(L^∗), where L^∗ is the adjoint linear operator given by (2.15). Taking inner products
yields
\[
0 = \left\langle \varphi_j,\; dA_e(\tau)\psi_e(x) + dA_o(t)\psi_o(x) + \mu\, w * H'(U_c - \theta_c)\, dt - \frac{A_e(\tau)^2}{2}\, w * \left[ H''(U_c - \theta_c)\psi_e^2 \right] dt + d\hat{W} \right\rangle, \tag{3.3}
\]
for j = o, e. Isolating temporal derivatives, we find the amplitudes A_o(t) and A_e(τ)
obey the following pair of nonlinear stochastic differential equations:
\[
dA_o(t) = \frac{\langle \varphi_o, w * [H''(U_c - \theta_c)\psi_e^2] \rangle}{2 \langle \varphi_o, \psi_o \rangle} A_e(\tau)^2\, dt - \frac{\langle \varphi_o, d\hat{W} \rangle}{\langle \varphi_o, \psi_o \rangle}, \tag{3.4a}
\]
\[
dA_e(\tau) = \left[ -\mu \frac{\langle \varphi_e, w * H'(U_c - \theta_c) \rangle}{\langle \varphi_e, \psi_e \rangle} + \frac{\langle \varphi_e, w * [H''(U_c - \theta_c)\psi_e^2] \rangle}{2 \langle \varphi_e, \psi_e \rangle} A_e(\tau)^2 \right] d\tau - \frac{\langle \varphi_e, d\hat{W} \rangle}{\langle \varphi_e, \psi_e \rangle}. \tag{3.4b}
\]
Utilizing the formulas for H′(U_c − θ_c) (2.4) and H″(U_c − θ_c) (2.19) we derived in the
previous section, we can simplify the expressions in (3.4). Additionally, we make use
of the fact that
\[
d\hat{W}_o(\tau) := \frac{\langle \varphi_o, d\hat{W} \rangle}{\langle \varphi_o, \psi_o \rangle} = \frac{1}{2}\left[ \psi_o(-a_c)\, d\hat{W}(-a_c,\tau) + \psi_o(a_c)\, d\hat{W}(a_c,\tau) \right] = \frac{d\hat{W}(a_c,\tau) - d\hat{W}(-a_c,\tau)}{2},
\]
\[
d\hat{W}_e(\tau) := \frac{\langle \varphi_e, d\hat{W} \rangle}{\langle \varphi_e, \psi_e \rangle} = \frac{1}{2}\left[ \psi_e(-a_c)\, d\hat{W}(-a_c,\tau) + \psi_e(a_c)\, d\hat{W}(a_c,\tau) \right] = \frac{d\hat{W}(a_c,\tau) + d\hat{W}(-a_c,\tau)}{2}.
\]
Utilizing the fact that ⟨dŴ(x, τ) dŴ(y, τ′)⟩ = C(x − y)δ(τ − τ′) dτ dτ′, it is straightforward
to compute the variances ⟨Ŵ_o(τ)²⟩ = D_o τ = (C(0) − C(2a_c))τ/2 and ⟨Ŵ_e(τ)²⟩ =
D_e τ = (C(0) + C(2a_c))τ/2. Clearly, for spatially flat correlation functions C(x) ≡ C̄,
noise will have no impact on the odd amplitude A_o(t), since D_o ≡ 0. Thus, (3.4)
becomes
\[
dA_o(t) = \varepsilon^{1/2}\, dW_o(t), \tag{3.5a}
\]
\[
dA_e(\tau) = -\mu\, d\tau - \frac{|w'(2a_c)|}{w(0)^2} A_e(\tau)^2\, d\tau + d\hat{W}_e(\tau), \tag{3.5b}
\]
where we have converted the noise term in (3.5a) back to the original time coordinate:
dW_o(t) = dŴ_o(εt)/ε^{1/2} [19]. Note that in equation (3.5a), we essentially recover the
diffusion approximation of the translating mode of the bump, ⟨A_o(t)²⟩ = εD_o t, which is
analyzed in [27]. Equation (3.5b) is a stochastic amplitude equation, in which the noise
term dŴ_e is projected onto the direction of the neutrally stable even perturbation ψ_e.
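The reduced equation (3.5b) is itself easy to simulate. The following sketch (Python; illustrative parameter values, not those of the figures) applies an Euler-Maruyama step to (3.5b) with the cosine-kernel constants w(0) = 1, |w′(2a_c)| = 1 and cosine correlations (so D_e = 1/2), and records the first time A_e reaches −1.

import numpy as np

# Minimal sketch: Euler-Maruyama simulation of the stochastic amplitude equation (3.5b),
#   dA_e = [-mu - |w'(2a_c)|/w(0)^2 * A_e^2] dtau + dW_e(tau),   <W_e(tau)^2> = D_e tau,
# with D_e = (C(0) + C(2 a_c))/2 = 1/2 for cosine correlations. mu < 0 corresponds to a
# stable bump in the noise-free system; exits of the bottleneck are then noise-induced.

rng = np.random.default_rng(1)
mu, c, De = -0.2, 1.0, 0.5
dtau, n_steps, n_trials = 1e-3, 50000, 200

exit_times = []
for _ in range(n_trials):
    Ae = 0.0
    for k in range(n_steps):
        Ae += dtau * (-mu - c * Ae**2) + np.sqrt(De * dtau) * rng.standard_normal()
        if Ae <= -1.0:                  # bottleneck exit: the bump begins to extinguish
            exit_times.append((k + 1) * dtau)
            break

print("fraction escaped:", len(exit_times) / n_trials)
print("mean exit time (tau units):", np.mean(exit_times))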
[Fig. 3.1 graphic: (B) max_x u(x, t) versus t, with the level max_x[U_c(x) − εψ_e(x)] marked.]
Fig. 3.1. Noise-induced extinction of bumps in the stochastic neural field (1.5) on x ∈ [−π, π]
for a cosine weight w(x) = cos(x). (A) A single realization of the equation (1.5) with the initial
condition u(x, 0) = U_c(x) = √2 cos(x) leads to a stochastically wandering bump that eventually
crosses a separatrix at t ≈ 70, leading to extinction. The noise-free system possesses a stable
bump solution since µ = −0.2 < 0; ε = 0.4. (B) The large deviation can easily be detected by
tracking max_x u(x, t), which departs the bottleneck of the noise-free system, whose lower bound lies
at max_x [U_c(x) − εψ_e(x)] = √2(1 − ε).
3.2. Metastability and bump extinction. To analyze the one-dimensional
nonlinear SDE (3.5b), we further rescale the equation by setting A := (|w′(2a_c)|/w(0)²) A_e:
\[
dA(t) = -\left[ m + A(t)^2 \right] dt + d\hat{W}(t), \tag{3.6}
\]
where m := (|w′(2a_c)|/w(0)²) µ. Thus, the effective diffusion coefficient of the rescaled noise
term is ⟨Ŵ(t)²⟩ = Dt = w′(2a_c)²(C(0) + C(2a_c)) t/[2w(0)⁴]. Note the rescaled
equation (3.6) has an effective potential [32,39],
\[
V(A) = \frac{A^3}{3} + mA, \tag{3.7}
\]
the derivative −V′(A) of which yields the deterministic part of the right hand side. As
the bifurcation parameter m is varied, the potential exhibits a minimum (at A = √−m)
and a maximum (at A = −√−m) when m < 0, a saddle point (at A = 0) when m = 0,
and no extrema for m > 0 (Fig. 3.2A). For all parameter values m, the state of the
stochastic system (3.6) will eventually escape to the limit A → −∞ as τ → ∞. Such
trajectories were observed in the noise-free system in the case m > 0, as demonstrated
in Fig. 2.2 of the previous section. However, we show here that noise qualitatively
alters the dynamics of the system, so its state will not remain in the vicinity of the
stable attractor (at A = √−m) when m < 0.
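For reference, evaluating (3.7) at these extrema gives the height of the barrier that must be crossed when m < 0; the Kramers-type escape estimate quoted after it is the standard result for this class of scalar SDE and is not derived in the text:
\[
\Delta V = V(-\sqrt{-m}) - V(\sqrt{-m}) = \tfrac{2}{3}(-m)^{3/2} - \left( -\tfrac{2}{3}(-m)^{3/2} \right) = \tfrac{4}{3}(-m)^{3/2}, \qquad m < 0,
\]
so the mean escape time from the basin of attraction grows like exp(2ΔV/D) = exp(8(−m)^{3/2}/(3D)), consistent with the exponentially long timescales referred to below.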
As before, we study the problem of bump extinction using the stochastic am-
plitude equation (3.6) in the case m > 0. We show that the noise decreases the
average amount of time until an extinction event will occur. For clarity, we as-
sume the initial condition A(0) = 0 (correspondingly u(x, 0) = Uc(x)). We take
the bottleneck to be the region A_e ∈ [−1, 1], which in the rescaled variable is A ∈
[−|w′(2a_c)|/w(0)², |w′(2a_c)|/w(0)²]. The residence time τ_b in the bottleneck is given
by the amount of time it takes for A to escape this region. We can determine the
statistics of τ_b by considering it as a first passage time problem.
Let p(A, t) be the probability density for the stochastic process A(t) given the
[Fig. 3.2 graphic: (A) the potential V(A) for m = −1, 0, 1, with the node and saddle marked; (B) the mean bottleneck time t̄_b as a function of m.]
Fig. 3.2. (A) The potential function (3.7) associated with the stochastic amplitude equation (3.6)
has zero (m > 0), one (m ≡ 0), or two (m < 0) extrema, associated with equilibria of Ȧ = −m − A².
When m < 0, crossing the saddle point requires stochastic forcing. (B) The mean time t̄_b until bump
extinction is approximated by a mean first passage time problem for the stochastic amplitude equation
(3.6). Numerical simulations (circles) of the full system (1.5) are well approximated by this theory
(line), given by (3.12), for ε = 0.6.
initial condition A(0) = A_0. Then the corresponding Fokker-Planck equation is
\[
\frac{\partial p}{\partial t} = \frac{\partial \left[ (m + A^2)\, p(A,t) \right]}{\partial A} + \frac{D}{2} \frac{\partial^2 p(A,t)}{\partial A^2} \equiv -\frac{\partial J(A,t)}{\partial A}, \tag{3.8}
\]
where
\[
J(A,t) = -\frac{D}{2} \frac{\partial p(A,t)}{\partial A} - (m + A^2)\, p(A,t), \tag{3.9}
\]
and p(A, 0) = δ(A − A_0). We focus on the three different scenarios discussed above.
First, if m < 0, there is a single stable fixed point of the deterministic equation
Ȧ = −m − A² at A = √−m and a single unstable fixed point at A = −√−m. The basin of
attraction of A = √−m is given by the interval (−√−m, ∞). When D > 0, fluctuations
can induce rare transitions on exponentially long timescales whereby A(t) crosses the
point A = −√−m, leaving the basin of attraction. For the non-generic case m = 0, the
timescale of departure scales algebraically [38]. When m > 0, noise simply modulates
the flows of the deterministic equation Ȧ = −m − A², leading to an average speed-up
in the departure from the bottleneck. In general, we consider solving the first passage
time problem as an escape from the domain (−α, ∞), where α := |w′(2a_c)|/w(0)² (equivalently,
where A_e = −1) [19]. To do so, we impose an absorbing boundary condition at −α:
p(−α, t) = 0. Now let T(A) denote the stochastic first passage time for which (3.6)
first reaches the point −α, given it started at A ∈ (−α, ∞). The first passage time
distribution is related to the survival probability that the system has not yet reached
−α,
\[
S(t) \equiv \int_{-\alpha}^{\infty} p(A,t)\, dA,
\]
that is, S(t) := Pr(T(A) > t), so the first passage time density is [19]
\[
F(t) = -\frac{dS}{dt} = -\int_{-\alpha}^{\infty} \frac{\partial p}{\partial t}(A,t)\, dA.
\]
Substituting for ∂p/∂t using the Fokker-Planck equation (3.8) and
the formula for the flux (3.9) shows
\[
F(t) = \int_{-\alpha}^{\infty} \frac{\partial J(A,t)}{\partial A}\, dA = -J(-\alpha, t),
\]
where we have utilized the fact that lim_{A→∞} J(A, t) = 0. Thus, the first passage time
density F(t) can be interpreted as the total probability flux through the absorbing
boundary at A = −α. To calculate the mean first passage time T(A) := ⟨T(A)⟩, we
use standard analysis to associate T(A) with the solution of the backward equation
[19]:
\[
-(m + A^2) \frac{dT}{dA} + \frac{D}{2} \frac{d^2 T}{dA^2} = -1, \tag{3.10}
\]
with the boundary conditions T(−α) = 0 and T′(∞) = 0. Solving (3.10) yields the
closed form solution
\[
T(A) = \frac{2}{D} \int_{-\alpha}^{A} \int_{y}^{\infty} \frac{\phi(z)}{\phi(y)}\, dz\, dy, \tag{3.11}
\]
where
\[
\phi(A) = \exp\left( \frac{2 \left[ V(-\alpha) - V(A) \right]}{D} \right),
\]
and V(x) is the potential function (3.7). Explicit expressions for the integral (3.11)
can be found in some special cases [32,38]. For our purposes, we simply integrate
(3.11) numerically to generate theoretical relationships between the mean first passage
time and model parameters. For comparison, we focus on the case of the weight function
w(x) = cos(x) and the correlations C(x) = cos(x), so that U_c(x) = √2 cos(x), a_c = π/4,
w(0) = 1, w′(2a_c) = −1, C(0) = 1, and C(2a_c) = 0. Therefore, α = 1, m = µ, and
D = 1/2. This allows us to write the formula (3.11) at A = 0 as
\[
T(0) = 4 \int_{-1}^{0} \int_{y}^{\infty} \exp\left( -4 \left[ \frac{z^3 - y^3}{3} + \mu (z - y) \right] \right) dz\, dy. \tag{3.12}
\]
Lastly, note that since the amplitude equation evolves on the slow timescale τ = εt, the
mean first passage time in units of the original time t will be t̄_b = T(0)/ε. We compare
our theory (3.12) with the results of numerical simulations of the full stochastic neural
field (1.5) in Fig. 3.2B.
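The double integral (3.12) is straightforward to evaluate by quadrature. The sketch below (Python with SciPy; the truncation of the inner integral at z = 10 and the value ε = 0.6 are illustrative choices) tabulates t̄_b = T(0)/ε over a range of µ, as in Fig. 3.2B.

import numpy as np
from scipy.integrate import dblquad

# Minimal sketch: evaluate the mean first passage time (3.12),
#   T(0) = 4 int_{-1}^{0} int_{y}^{inf} exp(-4 [(z^3 - y^3)/3 + mu (z - y)]) dz dy,
# and convert to original time units, tb = T(0)/eps. The inner integral is truncated
# at z = 10, where the integrand is negligible.

def T0(mu):
    integrand = lambda z, y: np.exp(-4.0 * ((z**3 - y**3) / 3.0 + mu * (z - y)))
    val, _ = dblquad(integrand, -1.0, 0.0, lambda y: y, lambda y: 10.0)
    return 4.0 * val

eps = 0.6
for mu in (-0.4, -0.2, 0.0, 0.2, 0.4):
    print("mu =", mu, " tb =", T0(mu) / eps)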
4. Discussion. We have developed a weakly nonlinear analysis for saddle-node
bifurcations of bumps in deterministic and stochastic neural field equations. While
most of our analysis has focused upon Heaviside firing rate functions, we have also
demonstrated the techniques can easily be extended to arbitrary smooth nonlinear-
ities. Our main finding is that even symmetric eigenmodes, associated with linear
stability of bumps, can be described by quadratic amplitude equations in the vicinity
of the saddle-node. For deterministic neural fields, this low dimensional approxima-
tion can be used to approximate the trajectory and lifetime of bumps as they slowly
extinguish. To do so, we focused on the initial time epoch in the bottleneck surround-
ing the ghost of the critical bump Uc(x). In stochastic neural fields with appropriate
noise scaling, a stochastic amplitude equation for the even mode of the bump can be
derived. We then cast the lifetime of the bump in terms of a mean first passage time
problem of the reduced system.
Our work extends a variety of recent studies that have derived low-dimensional
nonlinear approximations of neural field pattern dynamics in the vicinity of bifur-
cations [5,7,17,25,27,28]. As in our work, most of these previous studies derived
approximations where the location of the bifurcation was unaffected by noise terms.
On the other hand, Hutt et al. showed that noise can in fact shift the position of
Turing bifurcations in neural fields, and the amplitude of the bifurcation threshold
shift was proportional to the noise variance [25]. Note, it was necessary in our work
to apply a specific noise scaling (ε^{5/2}), as compared to the distance from criticality
(ε²), in order for the noise to simply appear as a modification of the even mode am-
plitude equation. Were we to have selected noise of larger amplitude, this could have
induced bifurcation shifts analogous to that found in [25]. Another potential future
direction would be to consider the impact of axonal propagation delays [24] on the
dynamics close to the saddle-node. As demonstrated in this work, the neural field
(1.1) is quite sensitive to small perturbations near criticality, so delays may serve to
alter the duration of the bottleneck or even shift the saddle-node bifurcation point.
REFERENCES
[1] S. Amari,Dynamics of pattern formation in lateral-inhibition type neural fields, Biol. Cybern.,
27 (1977), pp. 77–87.
[2] D. Blömker, M. Hairer, and G. Pavliotis, Multiscale analysis for stochastic partial differential
equations with quadratic nonlinearities, Nonlinearity, 20 (2007), p. 1721.
[3] M. Bode,Front-bifurcations in reaction-diffusion systems with inhomogeneous parameter dis-
tributions, Physica D: Nonlinear Phenomena, 106 (1997), pp. 270–286.
[4] C. A. Brackley and M. S. Turner,Random fluctuations of the firing rate function in a
continuum neural field model, Phys. Rev. E, 75 (2007), p. 041913.
[5] P. Bressloff and S. Folias,Front bifurcations in an excitatory neural network, SIAM Journal
on Applied Mathematics, 65 (2004), pp. 131–151.
[6] P. C. Bressloff,Spatiotemporal dynamics of continuum neural fields, J Phys. A: Math.
Theor., 45 (2012), p. 033001.
[7] P. C. Bressloff and Z. P. Kilpatrick,Nonlinear langevin equations for wandering pat-
terns in stochastic neural fields, SIAM Journal on Applied Dynamical Systems, 14 (2015),
pp. 305–334.
[8] P. C. Bressloff and M. A. Webber,Front propagation in stochastic neural fields, SIAM J
Appl. Dyn. Syst., in press (2012).
[9] A. Compte, N. Brunel, P. S. Goldman-Rakic, and X. J. Wang,Synaptic mechanisms and
network dynamics underlying spatial working memory in a cortical network model, Cereb.
Cortex, 10 (2000), pp. 910–23.
[10] S. Coombes,Waves, bumps, and patterns in neural field theories, Biol. Cybern., 93 (2005),
pp. 91–108.
[11] S. Coombes and M. R. Owen,Bumps, breathers, and waves in a neural network with spike
frequency adaptation, Phys. Rev. Lett., 94 (2005), p. 148102.
[12] S. Coombes and H. Schmidt,Neural fields with sigmoidal firing rates: approximate solutions,
Discrete and Continuous Dynamical Systems. Series S, (2010).
[13] S. Coombes, H. Schmidt, C. R. Laing, N. Svanstedt, and J. A. Wyller,Waves in random
neural media, Disc. Cont. Dynam. Syst. A, 32 (2012), pp. 2951–2970.
[14] B. Ermentrout,Neural networks as spatio-temporal pattern-forming systems, Rep. Prog.
Phys., 61 (1998), pp. 353–430.
[15] O. Faugeras, R. Veltz, and F. Grimbert,Persistent neural states: Stationary localized
activity patterns in the nonlinear continuous n-population, q-dimensional neural networks,
Neural Comput., 21 (2009), pp. 147–187.
[16] S. Folias and P. Bressloff,Breathing pulses in an excitatory neural network, SIAM J Appl.
Dyn. Syst., 3 (2004), pp. 378–407.
[17] S. E. Folias,Nonlinear analysis of breathing pulses in a synaptically coupled neural network,
SIAM Journal on Applied Dynamical Systems, 10 (2011), pp. 744–787.
[18] S. Funahashi, C. J. Bruce, and P. S. Goldman-Rakic,Mnemonic coding of visual space in
the monkey’s dorsolateral prefrontal cortex, J Neurophysiol., 61 (1989), pp. 331–49.
[19] C. W. Gardiner,Handbook of stochastic methods for physics, chemistry, and the natural
sciences, Springer-Verlag, Berlin, 3rd ed ed., 2004.
[20] P. S. Goldman-Rakic,Cellular basis of working memory, Neuron, 14 (1995), pp. 477–85.
[21] Y. Guo and C. Chow,Existence and stability of standing pulses in neural networks: I. exis-
tence, SIAM J Appl. Dyn. Syst., 4 (2005), pp. 217–248.
[22] D. Hansel and H. Sompolinsky,Modeling feature selectivity in local cortical circuits, in Meth-
ods in neuronal modeling: From ions to networks, C. Koch and I. Segev, eds., Cambridge:
MIT, 1998, ch. 13, pp. 499–567.
[23] X. Huang, W. C. Troy, Q. Yang, H. Ma, C. R. Laing, S. J. Schiff, and J.-Y. Wu,Spiral
waves in disinhibited mammalian neocortex, J Neurosci., 24 (2004), pp. 9897–9902.
[24] A. Hutt, M. Bestehorn, and T. Wennekers,Pattern formation in intracortical neuronal
fields, Network, 14 (2003), pp. 351–68.
[25] A. Hutt, A. Longtin, and L. Schimansky-Geier, Additive noise-induced Turing transitions in
spatial systems with application to neural fields and the Swift-Hohenberg equation, Physica
D, 237 (2008), pp. 755–773.
[26] J. P. Keener, Principles of Applied Mathematics, Addison-Wesley, 1995.
[27] Z. P. Kilpatrick and B. Ermentrout,Wandering bumps in stochastic neural fields, SIAM
J. Appl. Dyn. Syst., 12 (2013), pp. 61–94.
[28] Z. P. Kilpatrick and G. Faye,Pulse bifurcations in stochastic neural fields, SIAM Journal
on Applied Dynamical Systems, 13 (2014), pp. 830–860.
[29] K. Kishimoto and S.-i. Amari,Existence and stability of local excitations in homogeneous
neural fields, Journal of Mathematical Biology, 7 (1979), pp. 303–318.
[30] C. R. Laing and A. Longtin,Noise-induced stabilization of bumps in systems with long-range
spatial coupling, Physica D, 160 (2001), pp. 149 – 172.
[31] C. R. Laing, W. C. Troy, B. Gutkin, and G. B. Ermentrout,Multiple bumps in a neuronal
model of working memory, SIAM J Appl. Math., 63 (2002), pp. 62–97.
[32] B. Lindner, A. Longtin, and A. Bulsara, Analytic expressions for rate and CV of a type I
neuron driven by white Gaussian noise, Neural Computation, 15 (2003), pp. 1761–1788.
[33] D. J. Pinto and G. B. Ermentrout,Spatially structured activity in synaptically coupled
neuronal networks: I. Traveling fronts and pulses, SIAM J Appl. Math., 62 (2001), pp. 206–
225.
[34] D. J. Pinto, S. L. Patrick, W. C. Huang, and B. W. Connors,Initiation, propagation, and
termination of epileptiform activity in rodent neocortex in vitro involve distinct mecha-
nisms, J Neurosci., 25 (2005), pp. 8131–8140.
[35] K. A. Richardson, S. J. Schiff, and B. J. Gluckman,Control of traveling waves in the
mammalian cortex, Phys. Rev. Lett., 94 (2005), p. 028103.
[36] F. Sagues, J. M. Sancho, and J. Garcia-Ojalvo,Spatiotemporal order out of noise, Rev.
Mod. Phys., 79 (2007), pp. 829–882.
[37] P. Schütz, M. Bode, and H.-G. Purwins, Bifurcations of front dynamics in a reaction-
diffusion system with spatial inhomogeneities, Physica D: Nonlinear Phenomena, 82 (1995),
pp. 382–397.
[38] D. Sigeti and W. Horsthemke,Pseudo-regular oscillations induced by external noise, Journal
of statistical physics, 54 (1989), pp. 1217–1222.
[39] S. H. Strogatz,Nonlinear dynamics and chaos: with applications to physics, biology, chem-
istry, and engineering, Westview press, 2014.
[40] R. Veltz and O. Faugeras,Local/global analysis of the stationary solutions of some neural
field equations, SIAM J Appl. Dyn. Syst., 9 (2010), pp. 954–998.
[41] H. R. Wilson and J. D. Cowan,Excitatory and inhibitory interactions in localized populations
of model neurons, Biophysical journal, 12 (1972), p. 1.
[42] K. Wimmer, D. Q. Nykamp, C. Constantinidis, and A. Compte,Bump attractor dynamics
in prefrontal cortex explains behavioral precision in spatial working memory, Nat Neurosci,
17 (2014), pp. 431–9.
20
... One method for analyzing the effects of noise in attractor networks is to use the theory of stochastic neural fields. The latter has received growing attention recently, not only within the context of working memory [16][17][18][19], but also with regard to traveling waves [20][21][22][23], binocular rivalry [24], and stimulus-dependent neural variability [25]. In these studies, noise is typically assumed to be weak. ...
... It follows that the mean time to reach a 0 , starting at the stable bump solution is T (a * ), and the corresponding mean extinction time is thus 2T (a * ). The mean time to bump extinction has been numerically studied in [16] and analytically approximated in [18] when the system is near a saddle-node bifurcation. However, to our knowledge, there has never been an exact formula. ...
Article
Full-text available
Continuous attractor neural networks are used extensively to model a variety of experimentally observed coherent brain states, ranging from cortical waves of activity to stationary activity bumps. The latter are thought to play an important role in various forms of neural information processing, including population coding in primary visual cortex (V1) and working memory in prefrontal cortex. However, one limitation of continuous attractor networks is that the location of the peak of an activity bump (or wave) can diffuse due to intrinsic network noise. This reflects marginal stability of bump solutions with respect to the action of an underlying continuous symmetry group. Previous studies have used perturbation theory to derive an approximate stochastic differential equation for the location of the peak (phase) of the bump. Although this method captures the diffusive wandering of a bump solution, it ignores fluctuations in the amplitude of the bump. In this paper, we show how amplitude fluctuations can be analyzed by reducing the underlying stochastic neural field equation to a finite-dimensional stochastic gradient dynamical system that tracks the stochastic motion of both the amplitude and phase of bump solutions. This allows us to derive exact expressions for the steady-state probability density and its moments, which are then used to investigate two major issues: (i) the input-dependent suppression of neural variability and (ii) noise-induced transitions to bump extinction. We develop the theory by considering the particular example of a ring attractor network with SO(2) symmetry, which is the most common architecture used in attractor models of working memory and population tuning in V1. However, we also extend the analysis to a higher-dimensional spherical attractor network with SO(3) symmetry which has previously been proposed as a model of orientation and spatial frequency tuning in V1. We thus establish how a combination of stochastic analysis and group theoretic methods provides a powerful tool for investigating the effects of noise in continuous attractor networks.
... Thus, the stability of the bump will be determined by the sign of w(2h). Typically, the wider bump has w(2h) < 0 ( Fig. 3a), so it is linearly stable (Amari 1977;Kilpatrick 2016). ...
... There are both wide stable and narrow unstable bumps of form Eq. (2.13). A critical value of θ defines the point where these branches of Eq. (2.14) annihilate in a saddlenode (SN) bifurcation (Kilpatrick 2016). Differentiating with respect to h, the SN bifurcation occurs where G (h) = (1 − 2h)2Ae −2h = 0 which can be solved for h c = 1/2. ...
Article
Full-text available
Working memory (WM) is limited in its temporal length and capacity. Classic conceptions of WM capacity assume the system possesses a finite number of slots, but recent evidence suggests WM may be a continuous resource. Resource models typically assume there is no hard upper bound on the number of items that can be stored, but WM fidelity decreases with the number of items. We analyze a neural field model of multi-item WM that associates each item with the location of a bump in a finite spatial domain, considering items that span a one-dimensional continuous feature space. Our analysis relates the neural architecture of the network to accumulated errors and capacity limitations arising during the delay period of a multi-item WM task. Networks with stronger synapses support wider bumps that interact more, whereas networks with weaker synapses support narrower bumps that are more susceptible to noise perturbations. There is an optimal synaptic strength that both limits bump interaction events and the effects of noise perturbations. This optimum shifts to weaker synapses as the number of items stored in the network is increased. Our model not only provides a circuit-based explanation for WM capacity, but also speaks to how capacity relates to the arrangement of stored items in a feature space.
... There are both wide stable and narrow unstable bumps of form Eq. (2.9). A critical value of θ defines the point where these branches of Eq. (2.10) annihilate in a saddle-node (SN) bifurcation (Kilpatrick, 2016). Differentiating with respect to h, the SN bifurcation occurs where G (h) = (1 − 2h)2Ae −2h = 0 which can be solved for hc = 1/2. ...
Article
Working memory (WM) is limited in its temporal length and capacity. Classic conceptions of WM capacity assume the system possesses a finite number of slots, but recent evidence suggests WM may be a continuous resource. Resource models typically assume there is no hard upper bound on the number of items that can be stored, but WM fidelity decreases with the number of items. We analyze a neural field model of multi-item WM that associates each item with the location of a bump in a finite spatial domain, considering items that span a one-dimensional continuous feature space. Our analysis relates the neural architecture of the network to accumulated errors and capacity limitations arising during the delay period of a multi-item WM task. Networks with stronger synapses support wider bumps that interact more, whereas networks with weaker synapses support narrower bumps that are more susceptible to noise perturbations. There is an optimal synaptic strength that both limits bump interaction events and the effects of noise perturbations. This optimum shifts to weaker synapses as the number of items stored in the network is increased. Our model not only provides a neural circuit explanation for WM capacity, but also speaks to how capacity relates to the arrangement of stored items in a feature space.
... STM of an item has been thought to remain stable for as long as attention is sustained. Elapsing time has often been considered responsible for the decay of information over a retention interval, with evidence supporting models based on rehearsal [41], or drift [42] and extinction [43] in neural representations. Against this, it has been shown that memory decay can be reduced if the gap between trials (when nothing is happening) is much longer than the retention interval [44]. ...
Article
Full-text available
Space and time appear to play key roles in the way that information is organized in short-term memory (STM). Some argue that they are crucial contexts within which other stored features are embedded, allowing binding of information that belongs together within STM. Here we review recent behavioral, neurophysiological and imaging studies that have sought to investigate the nature of spatial, sequential and duration representations in STM, and how these might break down in disease. Findings from these studies point to an important role of the hippocampus and other medial temporal lobe structures in aspects of STM, challenging conventional accounts of involvement of these regions in only long-term memory.
Article
Full-text available
1. An oculomotor delayed-response task was used to examine the spatial memory functions of neurons in primate prefrontal cortex. Monkeys were trained to fixate a central spot during a brief presentation (0.5 s) of a peripheral cue and throughout a subsequent delay period (1-6 s), and then, upon the extinction of the fixation target, to make a saccadic eye movement to where the cue had been presented. Cues were usually presented in one of eight different locations separated by 45 degrees. This task thus requires monkeys to direct their gaze to the location of a remembered visual cue, controls the retinal coordinates of the visual cues, controls the monkey's oculomotor behavior during the delay period, and also allows precise measurement of the timing and direction of the relevant behavioral responses. 2. Recordings were obtained from 288 neurons in the prefrontal cortex within and surrounding the principal sulcus (PS) while monkeys performed this task. An additional 31 neurons in the frontal eye fields (FEF) region within and near the anterior bank of the arcuate sulcus were also studied. 3. Of the 288 PS neurons, 170 exhibited task-related activity during at least one phase of this task and, of these, 87 showed significant excitation or inhibition of activity during the delay period relative to activity during the intertrial interval. 4. Delay period activity was classified as directional for 79% of these 87 neurons in that significant responses only occurred following cues located over a certain range of visual field directions and were weak or absent for other cue directions. The remaining 21% were omnidirectional, i.e., showed comparable delay period activity for all visual field locations tested. Directional preferences, or lack thereof, were maintained across different delay intervals (1-6 s). 5. For 50 of the 87 PS neurons, activity during the delay period was significantly elevated above the neuron's spontaneous rate for at least one cue location; for the remaining 37 neurons only inhibitory delay period activity was seen. Nearly all (92%) neurons with excitatory delay period activity were directional and few (8%) were omnidirectional. Most (62%) neurons with purely inhibitory delay period activity were directional, but a substantial minority (38%) was omnidirectional. 6. Fifteen of the neurons with excitatory directional delay period activity also had significant inhibitory delay period activity for other cue directions. These inhibitory responses were usually strongest for, or centered about, cue directions roughly opposite those optimal for excitatory responses.(ABSTRACT TRUNCATED AT 400 WORDS)
Article
Full-text available
We analyze the effects of additive, spatially extended noise on spatiotemporal patterns in continuum neural fields. Our main focus is how fluctuations impact patterns when they are weakly coupled to an external stimulus or another equivalent pattern. Showing the generality of our approach, we study both propagating fronts and stationary bumps. Using a separation of time scales, we represent the effects of noise in terms of a phase-shift of a pattern from its uniformly translating position at long time scales, and fluctuations in the pattern profile around its instantaneous position at short time scales. In the case of a stimulus-locked front, we show that the phase-shift satisfies a nonlinear Langevin equation (SDE) whose deterministic part has a unique stable fixed point. Using a linear-noise approximation, we thus establish that wandering of the front about the stimulus-locked state is given by an Ornstein-Uhlenbeck (OU) process. Analogous results hold for the relative phase-shift between a pair of mutually coupled fronts, provided that the coupling is excitatory. On the other hand, if the mutual coupling is given by a Mexican hat function (difference of exponentials), then the linear-noise approximation breaks down due to the co-existence of stable and unstable phase-locked states in the deterministic limit. Similarly, the stochastic motion of mutually coupled bumps can be described by a system of nonlinearly coupled SDEs, which can be linearized to yield a multivariate OU process. As in the case of fronts, large deviations can cause bumps to temporarily decouple, leading to a phase-slip in the bump positions.
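As an illustration of the linear-noise picture described in this abstract, the following is a minimal sketch (not the authors' code) that integrates the Ornstein–Uhlenbeck approximation of the phase-shift of a stimulus-locked front using the Euler–Maruyama method; the relaxation rate kappa, noise amplitude sigma, and time step are illustrative values, not parameters from the cited paper.

```python
import numpy as np

# Minimal sketch: Euler--Maruyama integration of an Ornstein--Uhlenbeck (OU)
# process, d(phase) = -kappa * phase * dt + sigma * dW, the linear-noise
# approximation for the wandering of a stimulus-locked front about its
# locked position. All parameter values are illustrative only.

rng = np.random.default_rng(0)

kappa = 1.0     # relaxation rate toward the stimulus-locked state (assumed)
sigma = 0.1     # effective noise amplitude (assumed)
dt = 1e-3       # time step
T = 20.0        # total simulation time
n_steps = int(T / dt)
n_trials = 500  # number of independent realizations

phase = np.zeros(n_trials)      # phase-shift from the locked position
variance = np.empty(n_steps)    # ensemble variance over time

for k in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt), size=n_trials)
    phase += -kappa * phase * dt + sigma * dW
    variance[k] = phase.var()

# For an OU process the variance saturates at sigma^2 / (2 * kappa),
# rather than growing linearly as for freely wandering (Wiener) dynamics.
print("late-time variance:", variance[-1])
print("predicted saturation:", sigma**2 / (2 * kappa))
```

Running this shows the ensemble variance leveling off near sigma^2/(2*kappa), in line with the bounded wandering described above.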
Article
Full-text available
We analyze the effects of extrinsic multiplicative noise on front propagation in a scalar neural field with excitatory connections. Using a separation of time scales, we represent the fluctuating front in terms of a diffusive-like displacement (wandering) of the front from its uniformly translating position at long time scales, and fluctuations in the front profile around its instantaneous position at short time scales. One major result of our analysis is a comparison between freely propagating fronts and fronts locked to an externally moving stimulus. We show that the latter are much more robust to noise, since the stochastic wandering of the mean front profile is described by an Ornstein–Uhlenbeck process rather than a Wiener process, so that the variance in front position saturates in the long time limit rather than increasing linearly with time. Finally, we consider a stochastic neural field that supports a pulled front in the deterministic limit, and show that the wandering of such a front is now subdiffusive.
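As a quick reminder of why the locked front is more robust, the two limiting descriptions mentioned here have the standard variances below (schematic only: $\kappa$ is an effective relaxation rate toward the locked state and $\sigma$ an effective noise strength, with $X(0)=0$; these are not quantities computed in the cited paper):

$$
dX = -\kappa X\,dt + \sigma\,dW:\qquad
\mathrm{Var}[X(t)] = \frac{\sigma^2}{2\kappa}\left(1 - e^{-2\kappa t}\right)
\;\longrightarrow\; \frac{\sigma^2}{2\kappa} \;\;\text{as } t \to \infty,
$$
$$
dX = \sigma\,dW:\qquad
\mathrm{Var}[X(t)] = \sigma^2 t,
$$

so the stimulus-locked (OU) front wanders only a bounded amount, while the positional variance of a freely propagating (Wiener) front grows linearly in time.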
Article
Full-text available
We study the effects of additive noise on traveling pulse solutions in spatially extended neural fields with linear adaptation. Neural fields are evolution equations with an integral term characterizing synaptic interactions between neurons at different spatial locations of the network. We introduce an auxiliary variable to model the effects of local negative feedback and consider random fluctuations by modeling the system as a set of spatially extended Langevin equations whose noise term is a $Q$-Wiener process. Due to the translation invariance of the network, neural fields can support a continuum of spatially localized bump solutions that can be destabilized by increasing the strength of the adaptation, giving rise to traveling pulse solutions. Near this criticality, we derive a stochastic amplitude equation describing the dynamics of these bifurcating pulses when the noise and the deterministic instability are of comparable magnitude. Away from this bifurcation, we investigate the effects of additive noise on the propagation of traveling pulses and demonstrate that noise induces wandering of traveling pulses. Our results are complemented with numerical simulations.
Article
Full-text available
In this paper we show how a local inhomogeneous input can stabilize a stationary-pulse solution in an excitatory neural network. A subsequent reduction of the input amplitude can then induce a Hopf instability of the stationary solution resulting in the formation of a breather. The breather can itself undergo a secondary instability leading to the periodic emission of traveling waves. In one dimension such waves consist of pairs of counterpropagating pulses, whereas in two dimensions the waves are circular target patterns.
Article
Full-text available
We consider the existence of standing pulse solutions of a neural network integro-differential equation. These pulses are bistable with the zero state and may be an analogue for short term memory in the brain. The network consists of a single layer of neurons synaptically connected by lateral inhibition. Our work extends the classic Amari result by considering a nonsaturating gain function. We consider a specific connectivity function where the existence conditions for single pulses can be reduced to the solution of an algebraic system. In addition to the two localized pulse solutions found by Amari, we find that three or more pulses can coexist. We also show the existence of nonconvex "dimpled" pulses and double pulses. We map out the pulse shapes and maximum firing rates for different connection weights and gain functions.
Article
Neural field models of firing rate activity have had a major impact in helping to develop an understanding of the dynamics seen in brain slice preparations. These models typically take the form of integro-differential equations. Their non-local nature has led to the development of a set of analytical and numerical tools for the study of waves, bumps and patterns, based around natural extensions of those used for local differential equation models. In this paper we present a review of such techniques and show how recent advances have opened the way for future studies of neural fields in both one and two dimensions that can incorporate realistic forms of axo-dendritic interactions and the slow intrinsic currents that underlie bursting behaviour in single neurons.
Article
Prefrontal persistent activity during the delay of spatial working memory tasks is thought to maintain spatial location in memory. A 'bump attractor' computational model can account for this physiology and its relationship to behavior. However, direct experimental evidence linking parameters of prefrontal firing to the memory report in individual trials is lacking, and, to date, no demonstration exists that bump attractor dynamics underlies spatial working memory. We analyzed monkey data and found model-derived predictive relationships between the variability of prefrontal activity in the delay and the fine details of recalled spatial location, as evident in trial-to-trial imprecise oculomotor responses. Our results support a diffusing bump representation for spatial working memory instantiated in persistent prefrontal activity. These findings reinforce persistent activity as a basis for spatial working memory, provide evidence for a continuous prefrontal representation of memorized space and offer experimental support for bump attractor dynamics mediating cognitive tasks in the cortex.
Book
The Handbook of Stochastic Methods covers systematically and in simple language the foundations of Markov systems, stochastic differential equations, Fokker-Planck equations, approximation methods, chemical master equations, and quantum-mechanical Markov processes. Strong emphasis is placed on systematic approximation methods for solving problems. Stochastic adiabatic elimination is newly formulated. The book contains the "folklore" of stochastic methods in systematic form and is suitable for use as a reference work.