
MUTUALLY EXCLUSIVE SPIKY PATTERN AND SEGMENTATION MODELLED BY

THE FIVE-COMPONENT MEINHARDT-GIERER SYSTEM

JUNCHENG WEI AND MATTHIAS WINTER

Abstract. We consider the five-component Meinhardt-Gierer model for mutually exclusive patterns and seg-

mentation which was proposed in [11]. We prove rigorous results on the existence and stability of mutually

exclusive spikes which are located in different positions for the two activators. Sufficient conditions for existence

and stability are derived, which depend in particular on the relative size of the various diffusion constants. Our

main analytical methods are the Liapunov-Schmidt reduction and nonlocal eigenvalue problems. The analytical

results are confirmed by numerical simulations.

1. Introduction

We analyze the five-component Meinhardt-Gierer system whose components are two activators and one

inhibitor as well as two lateral activators. It has been introduced and very successfully used in various modeling

aspects by Meinhardt and Gierer [11]. In particular, it can explain the phenomenon of mutual exclusion and

handle segmentation in the simplest case of two different segments. This model has been reviewed and its many

implications have been discussed in detail by Meinhardt in Chapter 12 of [10].

The most important features of this system can be highlighted as lateral activation of mutually exclusive

states. To each of the local activators a lateral activator is associated in a spatially nonlocal and time-delayed

way. The consequence of the presence of the two lateral activators in the system is the possibility to have

stable patterns which for the two activators are mutually exclusive, or in other words, the patterns for the two

activators are located in different positions. It is clear that mutually exclusive patterns are not possible for a

three-component system with only two activators and one inhibitor since mutually exclusive patterns for the

two activators could destabilize each other in various ways. Therefore the lateral activators are needed.

Numerical simulations of mutually exclusive patterns have been performed in [11], [10]. Many interesting

features have been discovered and explained but those works do not give analytical solutions and they are not

mathematically rigorous. To obtain mathematically rigorous results, in this study we show the existence and

stability of mutually exclusive spikes in such a system.

The overall feedback mechanism of the system can be summarized as follows: Lateral activation is coupled

with self-activation and overall inhibition. We will explain this in more detail after the system has been

formulated quantitatively.

A widespread pattern in biology is segmentation. The mutual exclusion effect described in this paper
is a special case of segmentation where only two different segments are present. Examples of biological
segmentation are the body segments of insects or the segments of insect legs. The segments usually resemble

each other strongly, but on the other hand they are different from each other. Segments may for example

have an internal polarity which is often visible by bristles or hairs. This internal pattern within a segment

depends on the position of the segment within the sequence in its natural state. In some biological cases a

good understanding of how segment position and internal structure are related has been obtained. One famous

example is given by surgical experiments on insects, e.g. on cockroach legs. Cutting a leg and grafting one
piece onto the end of another partial leg disrupts the normal neighborhood of structures: some segments are
missing their natural neighbors, so there is a discontinuity in the segment structure. This forces the

emergence of new stable patterns in the cockroach leg such that all segments get back their natural neighbors.

However, the resulting pattern can be very different from any naturally occurring pattern.

For example, for cockroach legs, if the normal sequence of structures within a segment is 123...9, a
combination of a partial leg 12345678 with an added piece 456789 first leads to the structure 12345678456789.

Note the presence of the jump discontinuity in this sequence between the numbers 8 and 4. Now segment regu-

lation adds the piece 765 which removes the discontinuity and leads to the final structure 12345678765456789.

This is different from the original natural structure but nevertheless each segment has the same neighbors as

in the natural situation.

In this example which was experimentally verified by Bohn [1], it is not the natural sequence but the normal

neighborhood which is regulated. It is exactly this neighboring structure which can be modelled mathematically


1991 Mathematics Subject Classification. Primary 35B35, 92C15; Secondary 35B40, 92D25.

Key words and phrases. Pattern formation, mutual exclusion, stability, steady states.



using the system from [11] which is considered here and this paper can be the starting point to a rigorous

understanding of more complex segmentation phenomena.

Now we give a sociological application of mutual exclusion (see [11]): Consider two families. They can hardly

live in exactly the same house as this would lead to overcrowding and is therefore less preferable. But if

they live in the same street or neighborhood they can support, nurture and benefit each other. Thus this

collaborative behavior can lead to a rather stable situation. Indeed, stable coexisting states with concentration

peaks remaining close but keeping a certain characteristic distance from each other are typical phenomena

which are observed in quantitative models of systems modelling mutual exclusion and they obviously resemble

real-world behavior in this example very well.

This feedback mechanism of lateral activation coupled with overall inhibition can be quantified by formulating

the effects of “activation”, “lateral activation” and “inhibition” using the language of molecular reactions and

invoking the law of mass action. Now we are going to discuss this in a quantitative manner. We will introduce

the resulting model system first and then explain how these feedback mechanisms are represented by the terms

in the model.

The original system from [11] (after re-scaling and some simplifications) can be stated as follows:

g1,t = ε² g1,xx − g1 + c s2 g1²/r,        g2,t = ε² g2,xx − g2 + c s1 g2²/r,

τ r_t = D_r r_xx − r + c s2 g1² + c s1 g2²,

τ s1,t = D_s s1,xx − s1 + g1,        τ s2,t = D_s s2,xx − s2 + g2.        (1.1)

Here 0 < ε ≪ 1, D_r > 0 and D_s > 0 are diffusion constants, c is a positive reaction constant and τ is a
nonnegative time-relaxation constant (in [11] the choice τ = 1 was made).

The x-indices indicate spatial derivatives. We will derive results for the system (1.1) on a bounded interval

Ω = (−L,L) for L > 0 with Neumann boundary conditions. Some results for the system on the real line

(L = ∞) will also be established and they will be compared with the bounded interval case.
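For readers who want to experiment with (1.1) directly, a minimal explicit finite-difference sketch (method of lines, forward Euler, Neumann boundary conditions) might look as follows. All parameter values and initial data are illustrative assumptions, not taken from the paper, whose own simulations appear in Section 7.

```python
import numpy as np

# Minimal explicit finite-difference sketch of system (1.1) on (-L, L)
# with Neumann boundary conditions.  Parameters and initial data are
# purely illustrative assumptions.
L, N = 1.0, 201
x = np.linspace(-L, L, N)
h = x[1] - x[0]
eps, Dr, Ds, c, tau = 0.05, 0.5, 5.0, 1.0, 1.0
dt, steps = 5e-6, 200          # dt respects the stiffest (Ds) stability limit

def lap(u):
    """Second difference with reflecting (Neumann) ends."""
    out = np.empty_like(u)
    out[1:-1] = (u[2:] - 2*u[1:-1] + u[:-2]) / h**2
    out[0] = 2*(u[1] - u[0]) / h**2
    out[-1] = 2*(u[-2] - u[-1]) / h**2
    return out

# Two bumps at different positions plus a uniform background for r, s1, s2.
g1 = 1.0 + 0.5*np.exp(-((x + 0.4)/0.1)**2)
g2 = 1.0 + 0.5*np.exp(-((x - 0.4)/0.1)**2)
r = np.ones_like(x); s1 = np.ones_like(x); s2 = np.ones_like(x)

for _ in range(steps):
    g1 += dt*(eps**2*lap(g1) - g1 + c*s2*g1**2/r)
    g2 += dt*(eps**2*lap(g2) - g2 + c*s1*g2**2/r)
    r  += dt/tau*(Dr*lap(r) - r + c*s2*g1**2 + c*s1*g2**2)
    s1 += dt/tau*(Ds*lap(s1) - s1 + g1)
    s2 += dt/tau*(Ds*lap(s2) - s2 + g2)

print(float(g1.max()), float(g2.max()))
```

Smaller ε and longer integration times call for an implicit or semi-implicit scheme, since the explicit step size is limited by D_s dt/h² < 1/2.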

The first two components, the activators g1 and g2, activate themselves locally, which is due to the terms
g1² and g2², respectively, in the first two equations.

The lateral activators are introduced in (1.1) by the fourth and fifth components s1 and s2 as follows: To
each of the activators g_i, i = 1, 2, there corresponds a nonlocal and delayed version s_i. Now s1 acts as an
activator to g2 and s2 acts as an activator to g1 due to the terms s2 in the first and s1 in the second equation,
which have a positive feedback. The expression lateral activation is used since g_i activates g_{3−i} laterally
through its nonlocal counterpart s_i rather than locally through g_i itself.

Lateral activation is finally coupled with overall inhibition as follows: The third component r acts as an
inhibitor to both g1 and g2 due to the term r in the first and second equations, which has a negative feedback.
Note also that both the local and the nonlocal activators have a positive feedback on r due to the terms s2 g1²
and s1 g2² in the third equation.

This feedback mechanism is a generalization of the well-known Gierer-Meinhardt system [6] which has one

local activator coupled to an inhibitor. We recall that the classical Gierer-Meinhardt system as well as the

five-component system considered here are both Turing systems [13] as they allow spatial patterns to arise out

of a homogeneous steady state by the so-called Turing instability. (Some analytical results for the existence

and stability of spiky Turing pattern for the Gierer-Meinhardt system have been obtained for example in [3],

[4], [5], [9], [12], [14], [17], [18], [19].)

Now we state our rigorous results on the existence and stability of stationary, mutually exclusive, spiky

patterns for the system (1.1).

We prove the existence of a spiky pattern with one spike for g1and one spike for g2which are located in

different positions under the following conditions:

(i) the diffusivities of the two lateral activators are large compared to the inhibitor diffusivity and

(ii) the inhibitor diffusivity is large compared to the diffusivities of the two (local) activators.

We summarize the two main conditions (i), (ii) which guarantee the existence of mutually-exclusive spike

patterns for (1.1) in the following:

We assume that

ε² ≪ C1 D_r ≤ D_s   for some constant C1 > 0.        (1.2)

We also prove the stability of these mutually exclusive spiky patterns provided certain conditions are met
which are of the type (1.2) with C1 replaced by some new constant C2.

In this paper we consider a pattern displaying one spike for g1and one for g2which are located in different

positions.

In particular, we prove the existence of a mutually exclusive two-spike solution to the system (1.1) if
Ds/Dr > 4. We show that this solution is stable if (i) Ds/Dr > 43.33 for L = ∞, or in general if (5.3) holds
(condition for O(1) eigenvalues), and if (ii) Ds/Dr > 4 (condition for o(1) eigenvalues).


The main results will be stated in Theorem 1 (Section 3) on the existence of solutions and in Theorem 2

(Section 5) as well as Theorem 3 (Section 6) on the large and small eigenvalues of the linearized problem at

the solutions, respectively.

What do these results tell us about segmentation? As a first step, we have proved that in the case of two

segments which we call 1 and 2 the sequence 12 can exist and be stable, and we have found sufficient conditions

for this effect to happen.

The case of n > 2 segments will lead to a system with 2n + 1 components, which is very large and not easy
to handle. Even in the case n = 2 of the five-component system investigated in this paper the analysis becomes

rather lengthy. We expect that, following our approach, we will be able to prove existence and stability of n
spikes in n different locations. We do not see any major obstacle; only the proofs become more technical. We
are currently working on this issue.

The outline of the paper is as follows: In Section 2, we compute the amplitudes. In Section 3, we locate
the spikes and show the existence of solutions. In Section 4, we first derive the eigenvalue problem; then we
compute the large (i.e. O(1)) eigenvalues and derive sufficient conditions for the stability of solutions with
respect to these. In Section 5, we solve a nonlocal eigenvalue problem which has been deferred from Section
4. In Section 6, we give the most important steps and state the main result on the stability of solutions with
respect to small (i.e. o(1)) eigenvalues; sufficient conditions for this stability are derived. The technical details
of the analysis of small eigenvalues are deferred to the appendices. Finally, in Section 7, our results are
confirmed by numerical simulations.

Acknowledgements: The work of JW is supported by an Earmarked Grant of RGC of Hong Kong. The

work of MW is supported by a BRIEF Award of Brunel University. MW thanks the Department of Mathematics

at CUHK for their kind hospitality.

2. Computing the Amplitudes

We construct steady states of the form

g1(x) = t1 w((x − x1)/ε)(1 + O(ε)),        g2(x) = t2 w((x − x2)/ε)(1 + O(ε)),

where w(y) is the unique positive and even homoclinic solution of the equation

w_yy − w + w² = 0        (2.1)

on the real line decaying to zero at ±∞. Here we assume that the spikes for g1 and g2 have the same amplitude,
i.e. t1 = t2. We nevertheless use different notations for the two amplitudes as this will be important later when
we consider stability, since there could be an instability which breaks the symmetry of equal amplitudes.
The analysis will show that t1, t2 and x1, x2 depend on ε but, to leading order and after suitable scaling, are
independent of ε. To keep notation simple we will not explicitly indicate this dependence.
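The solution w of (2.1) is in fact explicitly known. Assuming the standard closed form w(y) = (3/2) sech²(y/2) (an assumption stated here for illustration; the argument above only uses existence and uniqueness of w), one can verify the equation and the integral relation (2.2) numerically:

```python
import numpy as np

# Sanity check for the ground state of w'' - w + w^2 = 0 on the real line.
# The closed form w(y) = (3/2) sech^2(y/2) is a standard fact assumed here.
y = np.linspace(-30, 30, 20001)
h = y[1] - y[0]
w = 1.5 / np.cosh(y / 2) ** 2

# Residual of the ODE via central second differences (interior points only).
w_yy = (w[2:] - 2 * w[1:-1] + w[:-2]) / h ** 2
residual = w_yy - w[1:-1] + w[1:-1] ** 2
print(np.max(np.abs(residual)))      # small: discretization error only

# The relation (2.2): both integrals equal 6 for this w.
print(np.trapz(w, y), np.trapz(w ** 2, y))
```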

All functions used throughout the paper belong to the Hilbert space H²(−L, L) and the error terms are
taken in the norm of H²(−L, L) unless otherwise stated. After integrating (2.1), we get the relation

∫_R w(y) dy = ∫_R w²(y) dy,        (2.2)

which will be used frequently, often without explicitly stating it. We denote

w1(x) = w((x − x1)/ε),        w2(x) = w((x − x2)/ε).        (2.3)

Note that g1 and g2 are small-scale variables, as ε ≪ 1, and r, s1 and s2 are large-scale variables (with
respect to the spatial variable). For steady states, using Green functions, these slow variables can, to leading
order, be expressed by an integral representation.

To get this representation, g1 and g1² in the last three equations of (1.1) can be expanded as

g1(x) = t1 ε (∫_R w dy) δ_{x1}(x) + O(ε²),        g1²(x) = t1² ε (∫_R w² dy) δ_{x1}(x) + O(ε²),

where δ_{x1}(x) = δ(x − x1) is the Dirac delta distribution located at x1. Similarly, for g2 we have

g2(x) = t2 ε (∫_R w dy) δ_{x2}(x) + O(ε²),        g2²(x) = t2² ε (∫_R w² dy) δ_{x2}(x) + O(ε²).

Using the Green function G_D(x, y), which is defined as the unique solution of the equation

D ∆G_D(x, y) − G_D(x, y) + δ_y(x) = 0,  −L < x < L,        G_{D,x}(−L, y) = G_{D,x}(L, y) = 0,        (2.4)


we can represent s1(x), using the fourth equation of (1.1), as

s1(x) = t1 ε (∫_R w dy) G_Ds(x, x1) + O(ε²).        (2.5)

An elementary calculation gives

G_D(x, y) = θ/sinh(2θL) · cosh θ(L + x) cosh θ(L − y)   for −L < x < y < L,
G_D(x, y) = θ/sinh(2θL) · cosh θ(L − x) cosh θ(L + y)   for −L < y < x < L,        (2.6)

with θ = 1/√D. Note that

G_D(x, y) = (1/(2√D)) e^{−|x−y|/√D} − H_D(x, y),        (2.7)

where H_D is the regular part of the Green function G_D. In particular, for L = ∞, we have

G_D(x1, x2) = (1/(2√D)) e^{−|x1−x2|/√D} =: K_D(x1, x2).        (2.8)
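The formula (2.6) can be cross-checked against the defining equation (2.4): away from the singularity at x = y, G_D must satisfy D G_xx − G = 0. A short numerical check with illustrative values of D, L and y:

```python
import numpy as np

# The Green function (2.6) and a direct finite-difference check of the
# homogeneous part of its defining equation (2.4) away from x = y.
def G(x, y, D, L):
    th = 1.0 / np.sqrt(D)
    a, b = min(x, y), max(x, y)
    return th / np.sinh(2 * th * L) * np.cosh(th * (L + a)) * np.cosh(th * (L - b))

D, L, y0 = 0.3, 1.0, 0.2          # illustrative values
xs = np.linspace(-L, L, 2001)
h = xs[1] - xs[0]
g = np.array([G(x, y0, D, L) for x in xs])
gxx = (g[2:] - 2 * g[1:-1] + g[:-2]) / h ** 2

# D*G_xx - G should vanish except near the kink at x = y0.
res = D * gxx - g[1:-1]
mask = np.abs(xs[1:-1] - y0) > 2 * h
print(np.max(np.abs(res[mask])))
```

By construction the implementation is symmetric, G(x, y) = G(y, x), matching the two branches of (2.6).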

(2.8)

In the same way, we derive

s2(x) = t2 ε (∫_R w dy) G_Ds(x, x2) + O(ε²).        (2.9)

Now we compute the last two terms on the r.h.s. of the third equation of (1.1) as follows:

c s2 g1²(x) = c s2(x1) t1² ε (∫_R w² dy) δ_{x1}(x) + O(ε²) = c t1² t2 ε² (∫_R w dy)² G_Ds(x1, x2) δ_{x1}(x) + O(ε³)

and, similarly,

c s1 g2²(x) = c t1 t2² ε² (∫_R w dy)² G_Ds(x1, x2) δ_{x2}(x) + O(ε³),

where we have used (2.2).

Now, using the third equation of (1.1), we can represent r(x) by the Green function G_Dr:

r(x) = c t1 t2 ε² (∫_R w dy)² G_Ds(x1, x2) (t1 G_Dr(x, x1) + t2 G_Dr(x, x2)) + O(ε³).        (2.10)

Going back to the first equation in (1.1), we get

ε² ∆g1 − g1 + c s2 g1²/r = t1 (ε² ∆w1 − w1) + c s2 t1² w1²/r + O(ε) = t1 (c s2 t1/r − 1) w1² + O(ε).        (2.11)

For the two contributions in (2.11) to balance, we require

c s2(x1) t1 / r(x1) = 1 + O(ε).        (2.12)

Now we rewrite (2.12), using (2.9) and (2.10):

c s2(x1) t1 / r(x1) = 1 / (ε (∫_R w)(t1 G_Dr(x1, x1) + t2 G_Dr(x1, x2))) + O(ε).        (2.13)

Thus, (2.12), for x = x1, gives

t1 G_Dr(x1, x1) + t2 G_Dr(x1, x2) = 1/(ε ∫_R w) + O(1).        (2.14)

In the same way, from the second equation in (1.1), we get

t1 G_Dr(x1, x2) + t2 G_Dr(x2, x2) = 1/(ε ∫_R w) + O(1).        (2.15)

The relations (2.14), (2.15) form a linear system for the amplitudes t1, t2 of the spikes if their positions
x1, x2 are known. Note that the amplitudes depend on the positions in leading order, since the Green function
G_Dr depends on its arguments in leading order. We say that the amplitudes are strongly coupled to the
positions.

Note that the system (2.14), (2.15) has a unique solution t1, t2 since by (2.6)

G_Dr(x1, x1) G_Dr(x2, x2) − (G_Dr(x1, x2))²
= θ_r²/sinh²(2θ_r L) · cosh θ_r(L − x1) cosh θ_r(L + x2)
  × [cosh θ_r(L + x1) cosh θ_r(L − x2) − cosh θ_r(L − x1) cosh θ_r(L + x2)] > 0

for −L < x2 < x1 < L, where θ_r = 1/√D_r.

By symmetry, for x1 = −x2, we have t1 = t2. This is the case we are interested in. But we have not yet
shown that such positions x1, x2 exist. This will be done in the next section.

For the special case L = ∞, we have G_Dr(x1, x2) = (1/(2√D_r)) e^{−|x1−x2|/√D_r}, and (2.14), (2.15) in
this case are given by

t1 + t2 e^{−|x1−x2|/√D_r} = 2√D_r/(ε ∫_R w),        t2 + t1 e^{−|x1−x2|/√D_r} = 2√D_r/(ε ∫_R w).

Finally, we summarize the main result of this section.

Lemma 1. Assume that ε > 0 is small enough. Then for spike solutions of (1.1) of the type

g1(x) = t1 w((x − x1)/ε)(1 + O(ε)),        g2(x) = t2 w((x − x2)/ε)(1 + O(ε)),

where w(y) is the unique positive and even solution of the equation

w_yy − w + w² = 0

on the real line decaying to zero at ±∞, the amplitudes t1 and t2 are given as the unique solution of the system

t1 G_Dr(x1, x1) + t2 G_Dr(x1, x2) = 1/(ε ∫_R w) + O(1),        t1 G_Dr(x1, x2) + t2 G_Dr(x2, x2) = 1/(ε ∫_R w) + O(1),

where G_D is the Green function defined in (2.4).
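Given spike positions, the system (2.14), (2.15) of Lemma 1 is a 2×2 linear system for the leading-order amplitudes. The sketch below solves it with illustrative values of ε, D_r and the positions; the value ∫_R w dy = 6 comes from the explicit ground state w(y) = (3/2) sech²(y/2), an assumption not needed in the text.

```python
import numpy as np

# Solving the 2x2 linear system (2.14)-(2.15) for the leading-order spike
# amplitudes t1, t2, given spike positions.  eps, Dr, L and the positions
# are illustrative; int_R w dy = 6 for w(y) = (3/2) sech^2(y/2).
def G(x, y, D, L):
    th = 1.0 / np.sqrt(D)
    a, b = min(x, y), max(x, y)
    return th / np.sinh(2 * th * L) * np.cosh(th * (L + a)) * np.cosh(th * (L - b))

Dr, L, eps, int_w = 0.1, 1.0, 0.02, 6.0
x1, x2 = 0.4, -0.4                       # mutually exclusive, symmetric positions
A = np.array([[G(x1, x1, Dr, L), G(x1, x2, Dr, L)],
              [G(x1, x2, Dr, L), G(x2, x2, Dr, L)]])
rhs = np.array([1.0, 1.0]) / (eps * int_w)
t1, t2 = np.linalg.solve(A, rhs)
print(t1, t2)
```

For the symmetric choice x2 = −x1 the computed amplitudes coincide, as stated above.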

3. Existence of Mutually Exclusive Spikes

In this section, we use the Liapunov-Schmidt reduction method to rigorously prove the existence of mutually

exclusive spikes. We will get a sufficient condition on the locations of the spikes.

The problem here is that the linearization of the r.h.s. of the first equation in (1.1) around w1 has an
approximate nontrivial kernel. This comes from the fact that differentiating equation (2.1) with respect to
y gives

(w_y)_yy − w_y + 2w w_y = 0.

Thus, w_y belongs to the kernel of the linearization of (2.1) around w. Note that the function w_y represents
the translation mode. Therefore a direct application of the implicit function theorem is not possible; one has
to deal with this kernel first. This is the goal of this section.

Recall that for given g1, g2 ∈ H²_N(Ω_ε), where Ω_ε = (−L/ε, L/ε) and H²_N(Ω_ε) denotes the space of all
functions in H²(Ω_ε) satisfying the Neumann boundary condition, s1 is uniquely determined by the fourth
equation of (1.1), s2 by the fifth equation, and finally r by the third equation. Therefore, the steady-state
problem is reduced to solving the first two equations.

We are looking for solutions which satisfy

g1(x) = t1 w((x − x1)/ε)(1 + O(ε)),        g2(x) = t1 w((x + x1)/ε)(1 + O(ε))

with g1(x) = g2(−x) (x1 > 0). By this reflection symmetry the problem is reduced to determining just one
function: g1(x) = t1 w1(x) + v.

We are now going to determine this function in two steps. Denoting the r.h.s. of the first equation of (1.1)
by S_ε[t1 w1 + v], which is well-defined for steady states, our problem can be written as S_ε[t1 w1 + v] = 0,
where S_ε : H²_N(Ω_ε) → L²(Ω_ε).

First Step. Determine a small v ∈ H²(Ω_ε) with ∫_Ω v (dw1/dx) dx = 0 such that

S_ε[t1 w1 + v] = β_ε dw1/dx.        (3.1)

Second Step. Choose x1 such that

β_ε = 0.        (3.2)

We begin with the first step. To this end, we need to study the linearized operator

L̃_{ε,x1} : H²(Ω_ε) → L²(Ω_ε)   defined by   L̃_{ε,x1} := S′_ε[t1 w1],

where S′_ε[t1 w1] denotes the Frechet derivative of the operator S_ε at t1 w1.

We define the approximate kernel and co-kernel, respectively, as follows:

K_{ε,x1} := span{dw1/dx} ⊂ H²(Ω_ε),        C_{ε,x1} := span{dw1/dx} ⊂ L²(Ω_ε).


By projection, we define the operator

L_{ε,x1} = π⊥_{ε,x1} ∘ L̃_{ε,x1} : K⊥_{ε,x1} → C⊥_{ε,x1},

where π⊥_{ε,x1} is the orthogonal projection in L²(Ω_ε) onto C⊥_{ε,x1}.

Then we have the following key result for the Liapunov-Schmidt reduction.

Proposition 1. There exist positive constants ε̄, δ̄, λ such that for all ε ∈ (0, ε̄) and all x1 ∈ Ω with
min(|L + x1|, |L − x1|) > δ̄ we have

‖L_{ε,x1} φ‖_{L²(Ω_ε)} ≥ λ ‖φ‖_{H²(Ω_ε)}   for all φ ∈ K⊥_{ε,x1}.        (3.3)

Further, the map L_{ε,x1} is surjective.

Proof of Proposition 1: We proceed by deriving a contradiction. Suppose that (3.3) is false. Then there
exist sequences {ε_k}, {x1^k}, {φ_k} with ε_k → 0, x1^k ∈ Ω, min(|L + x1^k|, |L − x1^k|) > δ̄ and
φ_k = φ_{ε_k} ∈ K⊥_{ε_k, x1^k}, k = 1, 2, ..., such that

‖L_{ε_k, x1^k} φ_k‖_{L²(Ω_{ε_k})} → 0  as k → ∞,        ‖φ_k‖_{H²(Ω_{ε_k})} = 1,  k = 1, 2, ... .        (3.4)

At first (after rescaling) φ_ε is only defined on Ω_ε. However, by a standard result (compare [7]) it can be
extended to R such that its norm in H²(R) is still bounded by a constant independent of ε and x1 for ε small
enough. It is then a standard procedure to show that this extension converges strongly to some limit
φ1 with ‖φ1‖_{L²(R)} = 1. For the details of the argument, we refer to [8].

The same analysis is performed for w2 and its perturbation φ_{ε,2}. Then Φ = (φ1, φ2)^T solves the system

(3.4)

L0φ1−

1

?

Rwdy

2ˆt1GDr(x1,x1)

??

Rwφ1dy

?

+ 2ˆt1GDr(x1,x2)

??

Rwφ2dy

?

+ˆt2GDr(x1,x2)

??

φ1dy

?

??

?

−ˆt1GDr(x1,x2)

??

φ2dy

??

= 0,

(3.5)

L0φ2−

1

?

Rwdy

?

2ˆt2GDr(x2,x2)

wφ2dy

?

+ 2ˆt2GDr(x1,x2)

??

wφ1dy

?

+ˆt1GDr(x1,x2)

??

Rφ2dy

−ˆt2GDr(x1,x2)

??

Rφ1dy

??

= 0,

(3.6)

where L0φ = ?2φyy− φ + 2wφ and

α?=

?

1

??

Rwdy

?

and

ˆti= (α?)−1ti.

(3.7)

This system the special case with λ = 0 of (4.7), (4.8) derived in Section 4. To avoid doing this computation

twice we have delayed it to Section 4, where a more general case is considered.

Now, adding (3.5) and (3.6), we obtain

L0(φ1 + φ2) − w² (2 ∫_R w(φ1 + φ2) dy)/(∫_R w² dy) = 0.

This implies by Theorem 1.4 of [15] that φ1 = −φ2, and, setting φ := φ1, for φ we must have

L0 φ − (4/(4 − c0)) (w²/∫_R w² dy) ∫_R w φ dy = 0,        (3.8)

where 0 < c0 < 2 (compare (5.1) for λ = 0). Now by Theorem 1.4 of [15] we must have φ = 0. This contradicts
‖φ‖_{L²(R)} = 1. Therefore, (3.3) must be true.

By the Closed Range Theorem it follows that the map L_{ε,x1} is surjective. (The details are given for example
in [8].)

Based on this key result for the Liapunov-Schmidt reduction it is now fairly standard (see for example the
works [8] and [16]) to derive that there exists a small v ∈ H²(Ω_ε) with ∫_Ω v (dw1/dx) dx = 0 such that

S_ε[t1 w1 + v] = β_ε dw1/dx.


This completes the first step.

We now turn to the second step. We have to show that β_ε = 0 for a certain x1. This amounts to showing
that

∫_Ω S_ε[t1 w1 + v](x) (dw1/dx) dx = 0

for a certain x1. Note that computing x1 in fact means determining the locations of the spikes.

To this end, we have to expand S_ε[t1 w1 + v](x1 + εy). We compute

S_ε[t1 w1 + v](x1 + εy) = t1 (c s2(x1 + εy) t1 / r(x1 + εy) − 1) w1²(x1 + εy) + O(ε²).

Using (2.9), (2.10) and the expansions

G_D(x1 + εy, x2) = G_D(x1, x2) + G_{D,x1}(x1, x2) εy + O(ε²|y|²)

and

G_D(x1 + εy, x1) = G_D(x1, x1) − ε|y|/(2D) − H_{D,x1}(x1, x1) εy + O(ε²|y|²),

where we have used (2.7), we get

c s2(x1 + εy) t1 / r(x1 + εy)
= 1 + (1/2) [G_{Ds,x1}(x1, −x1)/G_Ds(x1, −x1)
− (G_{Dr,x1}(x1, −x1) − H_{Dr,x1}(x1, x1))/(G_Dr(x1, x1) + G_Dr(x1, −x1))] εy
+ O(ε²y²) + even terms in y.

This implies

∫_Ω S_ε[t1 w1 + v](x) (dw1/dx) dx
= (1/2) [G_{Ds,x1}(x1, −x1)/G_Ds(x1, −x1)
− (G_{Dr,x1}(x1, −x1) − H_{Dr,x1}(x1, x1))/(G_Dr(x1, x1) + G_Dr(x1, −x1))] ε ∫_R y w² (dw/dy) dy
+ ε² W_ε(x1),        (3.9)

where W_ε(x1) = O(ε), uniformly for 0 ≤ x1 ≤ L.

Using (2.6), we further compute

F(x1) := G_{Ds,x1}(x1, −x1)/G_Ds(x1, −x1) − (G_{Dr,x1}(x1, −x1) − H_{Dr,x1}(x1, x1))/(G_Dr(x1, x1) + G_Dr(x1, −x1))
= −θ_s sinh 2θ_s(L − x1)/cosh² θ_s(L − x1)
− θ_r (sinh 2θ_r x1 − sinh 2θ_r(L − x1))/(cosh θ_r(L − x1)[cosh θ_r(L − x1) + cosh θ_r(L + x1)]),

where θ_s = 1/√D_s and θ_r = 1/√D_r. We have to determine x1 such that F(x1) = 0. Note that

F(0) = −θ_s sinh 2θ_s L/cosh² θ_s L + θ_r sinh 2θ_r L/(2 cosh² θ_r L) > 0

if

θ_s/θ_r < (1/2) tanh θ_r L / tanh θ_s L.        (3.10)

The inequality (3.10) is satisfied if, for fixed L, θ_r is large compared to θ_s. In the limit L → 0 the condition
(3.10) converges to θ_s/θ_r < 1/√2, and in the limit L → ∞ it gives θ_s/θ_r < 1/2. For general L ∈ (0, ∞) we
can write (3.10) as θ_s/θ_r < α(L) with 1/2 < α(L) < 1/√2.

Going back to the original diffusion constants, the inequality (3.10) is equivalent to

D_s/D_r > 4 tanh² θ_s L / tanh² θ_r L.        (3.11)

In the limit L → 0, (3.11) gives D_s/D_r > 2 and, in the limit L → ∞, it gives D_s/D_r > 4. For all L ∈ (0, ∞)
we can write (3.11) as D_s/D_r > β(L) for some continuous function β(L) ∈ (2, 4). Note that (3.11) holds if

D_s/D_r > 4.        (3.12)


This is not the optimal condition, but it is rather handy and easy to check.

On the other hand,

F(L/2) = −θ_s sinh θ_s L / cosh²(θ_s L/2) < 0.

By the intermediate value theorem, under the condition (3.11), there exists an x1 ∈ (0, L/2) such that F(x1) = 0.
There is no such x1 ∈ [L/2, L) since the function F is negative in that interval.

Note that F(L/2) → 0 as θ_s → 0. This implies that x1 → L/2 as θ_s → 0.
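The intermediate-value argument can be implemented as a bisection on (0, L/2). The function F below is the explicit expression obtained from (2.6) as reconstructed in this section, and D_r, D_s, L are illustrative values with D_s/D_r > 4:

```python
import math

# Locating the spike position x1 as the zero of F on (0, L/2), cf. (3.16).
# F is the explicit formula obtained from (2.6); parameter values are
# illustrative and satisfy Ds/Dr > 4.
Dr, Ds, L = 0.1, 1.0, 1.0
tr, ts = 1 / math.sqrt(Dr), 1 / math.sqrt(Ds)

def F(x):
    first = -ts * math.sinh(2 * ts * (L - x)) / math.cosh(ts * (L - x)) ** 2
    second = -tr * (math.sinh(2 * tr * x) - math.sinh(2 * tr * (L - x))) \
             / (math.cosh(tr * (L - x))
                * (math.cosh(tr * (L - x)) + math.cosh(tr * (L + x))))
    return first + second

a, b = 0.0, L / 2          # F(0) > 0 > F(L/2) when Ds/Dr > 4
for _ in range(60):
    m = 0.5 * (a + b)
    a, b = (m, b) if F(m) > 0 else (a, m)
print(0.5 * (a + b))
```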

We now show that the zero x1 ∈ [0, L/2] of F is unique by proving that F′(x1) < 0 for x1 ∈ (0, L/2) if

θ_s/θ_r < tanh(θ_r L/2) / (√2 tanh(θ_s L/2)).        (3.13)

We compute

F′(x1) = 2θ_s²/cosh² θ_s(L − x1) − θ_r²/cosh² θ_r(L − x1)
− θ_r² ([cosh θ_r(L − x1) + cosh θ_r(L + x1)]² − [sinh θ_r(L − x1) + sinh θ_r(L + x1)]²)
  / [cosh θ_r(L − x1) + cosh θ_r(L + x1)]².

Therefore, taking into consideration only the first two terms and noting that the last term is negative, we
have F′(x1) < 0 if (3.13) holds, and in this case the solution for x1 is unique.

Note that (3.13) holds if θ_s/θ_r < 1/√2 or, equivalently, D_s/D_r > 2. Therefore (3.10) and (3.13) are both
true if θ_s/θ_r < 1/2 or, equivalently, D_s/D_r > 4.

Now, since F′(x1) ≠ 0, a standard degree argument shows that for ε ≪ 1 there exists a unique x1^ε
depending on ε such that

∫_Ω S_ε[t1 w1 + v](x) (dw1/dx) dx = 0.

Further, x1^ε → x1 as ε → 0, where x1 satisfies

G_{Ds,x1}(x1, −x1)/G_Ds(x1, −x1) − (G_{Dr,x1}(x1, −x1) − H_{Dr,x1}(x1, x1))/(G_Dr(x1, x1) + G_Dr(x1, −x1)) = 0.

Thus we have shown existence and at the same time located the positions of the spikes. We summarize this
result in the following theorem:

Theorem 1. There exist mutually exclusive, spiky steady states to (1.1) in (−L, L) with Neumann boundary
conditions such that

g1^ε(x) = t1^ε w((x − x1^ε)/ε)(1 + O(ε)),        g2^ε(x) = t1^ε w((x + x1^ε)/ε)(1 + O(ε))        (3.14)

with

t1^ε = 1 / (ε ∫_R w dy (G_Dr(x1, x1) + G_Dr(x1, −x1))) + O(1)        (3.15)

and x1^ε → x1 as ε → 0, where

G_{Ds,x1}(x1, −x1)/G_Ds(x1, −x1) − (G_{Dr,x1}(x1, −x1) − H_{Dr,x1}(x1, x1))/(G_Dr(x1, x1) + G_Dr(x1, −x1)) = 0.        (3.16)

If D_s/D_r > 4, equation (3.16) has a unique solution x1 ∈ (0, L/2] and no solution in (L/2, L]. Further,
x1 → L/2 as θ_s → 0.

Finally, we compute the equation for x1 in the limit L → ∞. In this limit, x1 satisfies

θ_s/θ_r = e^{−2θ_r x1} / (1 + e^{−2θ_r x1}) + O(e^{−CL})

for some C > 0 independent of x1. This is equivalent to

e^{2|x1|/√D_r} = √(D_s/D_r) − 1 + O(e^{−CL}).        (3.17)

This concludes our study of existence. In the following sections we consider the stability issue.
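In the limit L = ∞, (3.17) can be solved for x1 in closed form; the sketch below (with an illustrative diffusivity ratio) also confirms that a positive solution requires D_s/D_r > 4, matching the existence condition:

```python
import math

# Spike location in the limit L = infinity from (3.17):
# exp(2*x1/sqrt(Dr)) = sqrt(Ds/Dr) - 1, so a positive x1 exists
# exactly when Ds/Dr > 4.  The ratio below is an illustrative choice.
Dr, Ds = 1.0, 10.0
ratio = math.sqrt(Ds / Dr) - 1.0
x1 = 0.5 * math.sqrt(Dr) * math.log(ratio)
print(x1)

# Consistency with theta_s/theta_r = q/(1+q), q = exp(-2*x1/sqrt(Dr)).
q = math.exp(-2 * x1 / math.sqrt(Dr))
print(math.sqrt(Dr / Ds), q / (1 + q))
```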


4. Stability I: The Eigenvalue Problem and the Large Eigenvalues

Now we study the (linearized) stability of this mutually exclusive steady state. To this end, we first derive
the linearized operator around the steady state (g1^ε, g2^ε, r^ε, s1^ε, s2^ε) given in Theorem 1.

We perturb the steady state as follows:

g1 = g1^ε + φ1^ε e^{λt},   g2 = g2^ε + φ2^ε e^{λt},   r = r^ε + ψ^ε e^{λt},
s1 = s1^ε + η1^ε e^{λt},   s2 = s2^ε + η2^ε e^{λt}.

By linearization we obtain the following eigenvalue problem (dropping the superscripts ε):

λ_ε φ1 = ε² φ1,xx − φ1 + c η2 g1²/r + 2c s2 g1 φ1/r − c s2 g1² ψ/r²,
λ_ε φ2 = ε² φ2,xx − φ2 + c η1 g2²/r + 2c s1 g2 φ2/r − c s1 g2² ψ/r²,
τ λ_ε ψ = D_r ψ_xx − ψ + c η2 g1² + 2c s2 g1 φ1 + c η1 g2² + 2c s1 g2 φ2,
τ λ_ε η1 = D_s η1,xx − η1 + φ1,
τ λ_ε η2 = D_s η2,xx − η2 + φ2,        (4.1)

where all components belong to the space H²_N(Ω).

We now analyze the case λ_ε → λ0 ≠ 0 (large eigenvalues). After re-scaling and taking the limit ε → 0 in
(4.1), and noting that φ_i converges locally in H²(−L/ε, L/ε), we get for the first two components, using the
approximations of g1 and g2 given in Theorem 1:

ε² ∆φ1 − φ1 + 2c s2(x1) t1 w1 φ1/r(x1) − c s2(x1) t1² w1² ψ(x1)/r²(x1) + c η2(x1) t1² w1²/r(x1) = λ φ1,        (4.2)

ε² ∆φ2 − φ2 + 2c s1(x2) t2 w2 φ2/r(x2) − c s1(x2) t2² w2² ψ(x2)/r²(x2) + c η1(x2) t2² w2²/r(x2) = λ φ2.        (4.3)

Now, in (4.2) and (4.3) we calculate the terms ψ(x) and η1(x), η2(x), respectively. To get ψ(x), using the
Green function G_Dr, we solve the linear equation for ψ given by

D_r ψ_xx − ψ + 2c s2 t1 w1 φ1 + 2c s1 t2 w2 φ2 + c η2 t1² w1² + c η1 t2² w2² = 0,

where again for g1 and g2 we have used the asymptotic expansions of Theorem 1. For simplicity, we study the
case τ = 0. The stability result then extends to small τ as well, since we know that |λ_ε| ≤ C for all eigenvalues
with λ_ε > −c0 for some small c0 > 0, which can be shown by a simple argument based on quadratic forms.
This gives

ψ(x) ∼ [2c s2(x1) t1 ε (∫_R w φ1 dy) + c η2(x1) t1² ε ∫_R w² dy] G_Dr(x, x1)
+ [2c s1(x2) t2 ε (∫_R w φ2 dy) + c η1(x2) t2² ε ∫_R w² dy] G_Dr(x, x2).        (4.4)

Similarly, using G_Ds, we compute

η1(x) ∼ ε G_Ds(x, x1) ∫_R φ1 dy,        η2(x) ∼ ε G_Ds(x, x2) ∫_R φ2 dy.        (4.5)

Recalling from (2.5) and (2.9) that

s1(x) ∼ ε t1 (∫_R w dy) G_Ds(x, x1),        s2(x) ∼ ε t2 (∫_R w dy) G_Ds(x, x2),

we get from (4.4)

ψ(x) ∼ [2c t1 t2 ε² (∫_R w dy)(∫_R w φ1 dy) + c t1² ε² (∫_R w dy)(∫_R φ2 dy)] G_Ds(x1, x2) G_Dr(x, x1)
+ [2c t1 t2 ε² (∫_R w dy)(∫_R w φ2 dy) + c t2² ε² (∫_R w dy)(∫_R φ1 dy)] G_Ds(x1, x2) G_Dr(x, x2).        (4.6)

Further, recall from (2.10) that

r(x) = c t1 t2 ε² (∫_R w dy)² G_Ds(x1, x2)(t1 G_Dr(x, x1) + t2 G_Dr(x, x2)) + O(ε³).


Substituting into (4.2), we get for the coefficient of ∫_R φ1 dy on the r.h.s.

−(c s2(x1) t1² w1²/r²(x1)) · c t2² ε² (∫_R w dy) G_Ds(x1, x2) G_Dr(x1, x2) + O(ε²)
= −ε t2 G_Dr(x1, x2) w1² + O(ε²).

Similarly, the coefficient of ∫_R φ2 dy is calculated as

−(c s2(x1) t1² w1²/r²(x1)) · c t1² ε² (∫_R w dy) G_Ds(x1, x2) G_Dr(x1, x1)
+ c ε G_Ds(x1, x2) t1² w1²/r(x1) + O(ε²)
= ε t1 G_Dr(x1, x2) w1² + O(ε²).

Here we have used (2.14). Then (4.2) gives the nonlocal eigenvalue problem (NLEP)

L0 φ1 − (w²/∫_R w dy) [2 t̂1 G_Dr(x1, x1)(∫_R w φ1 dy) + 2 t̂1 G_Dr(x1, x2)(∫_R w φ2 dy)
+ t̂2 G_Dr(x1, x2)(∫_R φ1 dy) − t̂1 G_Dr(x1, x2)(∫_R φ2 dy)] = λ φ1,        (4.7)

where L0 φ = φ_yy − φ + 2wφ and t̂_i has been defined in (3.7). In the same way, for (4.3) we obtain

L0 φ2 − (w²/∫_R w dy) [2 t̂2 G_Dr(x2, x2)(∫_R w φ2 dy) + 2 t̂2 G_Dr(x1, x2)(∫_R w φ1 dy)
+ t̂1 G_Dr(x1, x2)(∫_R φ2 dy) − t̂2 G_Dr(x1, x2)(∫_R φ1 dy)] = λ φ2,        (4.8)

where φ1, φ2 ∈ H²(R). Set φ = (φ1, φ2)^T and denote by Lφ the left-hand sides of (4.7) and (4.8), respectively.
Then, writing (4.7), (4.8) in matrix notation, we have the following vectorial NLEP:

Lφ = ∆φ − φ + 2wφ − [B ∫_R φ dy + 2C (∫_R w φ dy)] (∫_R w dy)^{−1} w²,        (4.9)

where

B = G_Dr(x1, x2) [  t̂2  −t̂1
                   −t̂2   t̂1 ] = (G_Dr(x1, x2)/(G_Dr(x1, x1) + G_Dr(x1, x2))) [  1  −1
                                                                                −1   1 ]

and

C = [ t̂1 G_Dr(x1, x1)   t̂1 G_Dr(x1, x2)
      t̂2 G_Dr(x1, x2)   t̂2 G_Dr(x2, x2) ]
  = (1/(G_Dr(x1, x1) + G_Dr(x1, x2))) [ G_Dr(x1, x1)   G_Dr(x1, x2)
                                        G_Dr(x1, x2)   G_Dr(x2, x2) ].        (4.10)

Here we have used that (2.14), (2.15) imply

t̂1 G_Dr(x1, x1) + t̂2 G_Dr(x1, x2) = 1,        t̂1 G_Dr(x1, x2) + t̂2 G_Dr(x2, x2) = 1,        (4.11)

and therefore

t̂_i = (G_Dr(x_{3−i}, x_{3−i}) − G_Dr(x1, x2)) / (G_Dr(x1, x1) G_Dr(x2, x2) − (G_Dr(x1, x2))²),  i = 1, 2.        (4.12)

In the special case when G_Dr(x1, x1) = G_Dr(x2, x2) we have

t̂1 = t̂2 = 1 / (G_Dr(x1, x1) + G_Dr(x1, x2)).        (4.13)
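That (4.12) indeed solves the linear system (4.11) can be spot-checked numerically for arbitrary admissible Green-function values (randomly chosen here):

```python
import random

# Numerical check that (4.12) solves the linear system (4.11),
# for randomly chosen Green-function values with g11, g22 > g12 > 0.
random.seed(0)
g12 = random.uniform(0.1, 1.0)
g11 = g12 + random.uniform(0.1, 1.0)
g22 = g12 + random.uniform(0.1, 1.0)
det = g11 * g22 - g12 ** 2
t1 = (g22 - g12) / det                 # hat t_1 from (4.12)
t2 = (g11 - g12) / det                 # hat t_2 from (4.12)
print(t1 * g11 + t2 * g12, t1 * g12 + t2 * g22)   # both equal 1
```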

Now, adding (4.7) and (4.8), we obtain

L0(φ1 + φ2) − w² (2 ∫_R w(φ1 + φ2) dy)/(∫_R w² dy) = λ(φ1 + φ2),

which implies by Theorem 1.4 of [15] that φ1 + φ2 = 0 if Re(λ0) ≥ 0. So we set φ2 = −φ1 = −φ.

From (4.7), we obtain a scalar NLEP for φ:

L0 φ − (w²/∫_R w² dy) (c0 ∫_R w φ dy + d0 ∫_R φ dy) = λ φ,        (4.14)

where

c0 = 2(G_Dr(x1, x1) − G_Dr(x1, x2))/(G_Dr(x1, x1) + G_Dr(x1, x2)),
d0 = 2 G_Dr(x1, x2)/(G_Dr(x1, x1) + G_Dr(x1, x2)).        (4.15)

Note that 0 < c0 < 2 and 0 < d0 < 1.

In the following section we study the NLEP (4.14). It determines the stability or instability of the large
eigenvalues of (4.1) if 0 < ε < ε0 for a suitably chosen ε0. By our analysis, instabilities for small ε > 0 imply
instabilities for ε = 0. On the other hand, by an argument of Dancer [2], an instability for ε = 0 also gives an
instability for small ε > 0.

Note that the NLEP problem here is quite different from those studied in [4], [5], [14] and [15].

In the next section we study this eigenvalue problem and complete the investigation of O(1) eigenvalues for
(4.1).

5. Stability II: A Nonlocal Eigenvalue Problem

In this section, we study the NLEP (4.14) to determine whether or not there are large eigenvalues, i.e. eigenvalues of the order O(1) as ε → 0, which destabilize the mutually exclusive spiky pattern. Integrating (4.14), we have

∫_R φ dy = ( (2 − c0)/(λ + 1 + d0) ) ∫_R wφ dy.

Substituting this back into (4.14), we can eliminate the term ∫_R φ dy. This gives

L0φ − μ(λ) ( w²/∫_R w² dy ) ∫_R wφ dy = λφ,  where  μ(λ) = (c0λ + 2)/(λ + 1 + d0) = (c0λ + 2)/(λ + 2 − c0/2).

(5.1)

Here we have used that c0 + 2d0 = 2. Applying inequality (2.22) of [18], we get

( ∫_R w³ dy / ∫_R w² dy ) |μ(λ0) − 1|² + Re( λ̄0 (μ(λ0) − 1) ) ≤ 0  if Re(λ0) ≥ 0.

(5.2)

Observe that after multiplying (2.1) by w and by w', respectively, and integrating, we get

∫_R w³ dy = (6/5) ∫_R w² dy.

So, assuming without loss of generality that λ0 = √−1 λI, we get for the l.h.s. in (5.2)

(6/5) | (c0λ0 + 2)/(λ0 + 1 + d0) − 1 |² + Re( λ̄0 ( (c0λ0 + 2)/(λ0 + 1 + d0) − 1 ) )

= (6/5) ( (c0 − 1)²|λ0|² + (1 − d0)² ) / |λ0 + 1 + d0|² + Re( (c0|λ0|² + 2λ̄0)(λ̄0 + 1 + d0) ) / |λ0 + 1 + d0|²

= ( |λ0|² [ 1.2(1 − c0)² + (1 + d0)c0 − 2 ] + 1.2(1 − d0)² ) / |λ0 + 1 + d0|².

Thus, if 1.2(1 − c0)² + (1 + d0)c0 − 2 > 0, we have stability by (5.2). Using c0 + 2d0 = 2, we calculate that this is equivalent to 7c0² − 4c0 − 8 > 0, which is true if c0 > (2/7)(1 + √15) ≈ 1.3923.

We compute, using (2.6),

c0 = 2( cosh θr(L + x1) − cosh θr(L − x1) ) / ( cosh θr(L + x1) + cosh θr(L − x1) ),  d0 = 2 cosh θr(L − x1) / ( cosh θr(L + x1) + cosh θr(L − x1) ).

Note that for L = ∞ we have

c0 = 2( e^{2θr|x1|} − 1 ) / ( e^{2θr|x1|} + 1 ),  d0 = 2 / ( e^{2θr|x1|} + 1 ).

By (3.17), the condition c0 > (2/7)(1 + √15) amounts to e^{2θr|x1|} = √(Ds/Dr) − 1 > 5.5822, i.e. Ds/Dr > 43.33. If the last condition is valid, we have stability.

We summarize the stability result for the O(1) eigenvalues as follows:


Theorem 2. The mutually exclusive, spiky steady state given in Theorem 1 is linearly stable with respect to large eigenvalues λ_ε = O(1) for τ ≥ 0 and ε > 0 small enough if

( cosh θr(L + x1) − cosh θr(L − x1) ) / ( cosh θr(L + x1) + cosh θr(L − x1) ) > (1/7)(1 + √15).

(5.3)

For L = ∞, this corresponds to Ds/Dr > 43.33.

Now the study of the large eigenvalues is completed. In the next section we study the small eigenvalues.
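The elementary computations of this section can be verified numerically. The sketch below assumes the explicit ground state w(y) = (3/2) sech²(y/2) of (2.1) (the standard explicit solution of w'' − w + w² = 0) and, for the L = ∞ case, the relation e^{2θr|x1|} = √(Ds/Dr) − 1 as used above (taken here as an assumption from (3.17)); it checks the identity ∫w³ = (6/5)∫w², the elimination step behind (5.1), and the thresholds 1.3923 and 43.33:

```python
import math

# (i) int w^3 = (6/5) int w^2 for the assumed ground state w(y) = 1.5 sech^2(y/2)
h = 1e-3
ys = [i * h for i in range(-40000, 40001)]
w = [1.5 / math.cosh(y / 2) ** 2 for y in ys]
I2 = sum(v ** 2 for v in w) * h
I3 = sum(v ** 3 for v in w) * h
assert abs(I3 / I2 - 6 / 5) < 1e-6

# (ii) elimination step behind (5.1): with c0 + 2 d0 = 2,
#      c0 + d0 (2 - c0)/(lam + 1 + d0) = (c0 lam + 2)/(lam + 1 + d0)
c0, lam = 1.5, complex(0.3, 1.7)
d0 = 1 - c0 / 2
lhs = c0 + d0 * (2 - c0) / (lam + 1 + d0)
assert abs(lhs - (c0 * lam + 2) / (lam + 1 + d0)) < 1e-12

# (iii) c0* = (2/7)(1 + sqrt(15)) is the positive root of 7 c0^2 - 4 c0 - 8 = 0;
#       solving 2(e-1)/(e+1) = c0* gives e ~ 5.5822, hence (1+e)^2 ~ 43.33
c0s = 2 * (1 + math.sqrt(15)) / 7
assert abs(7 * c0s ** 2 - 4 * c0s - 8) < 1e-9
e = (8 + math.sqrt(15)) / (6 - math.sqrt(15))
assert abs(2 * (e - 1) / (e + 1) - c0s) < 1e-12
assert 43.3 < (1 + e) ** 2 < 43.35
```

All three checks are purely algebraic or one-dimensional quadratures and do not rely on any further model assumptions.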

6. Stability III: The Small Eigenvalues

Now we study the small eigenvalues for (6.3), namely those with λ_ε → 0 as ε → 0. In this section we summarize the main steps and results in several lemmas. Their proofs are rather technical, and we therefore postpone them to the appendices.

For given f ∈ L²(Ω), let Tr[f] be the unique solution in H²_N(Ω) of the problem

Dr Δ(Tr[f]) − Tr[f] + α_ε f = 0.

(6.1)

In the same way, the operator Ts is defined with Dr replaced by Ds. Let

ḡ_{ε,1} = t̂1 w_{ε,x_1^ε} + φ_{ε,x_1^ε},  ḡ_{ε,2} = t̂2 w_{ε,x_2^ε} + φ_{ε,x_2^ε},

r̄_ε = c Tr[ Ts[ḡ_{ε,2}] ḡ²_{ε,1} + Ts[ḡ_{ε,1}] ḡ²_{ε,2} ],

s̄_{ε,1} = Ts[ḡ_{ε,2}],  s̄_{ε,2} = Ts[ḡ_{ε,1}],

(6.2)

where t̂i has been defined in (3.7). After re-scaling, the eigenvalue problem (4.1) becomes

λ_ε φ_{ε,1} = ε²Δφ_{ε,1} − φ_{ε,1} + c η_{ε,2} ḡ²_{ε,1}/r̄_ε + 2c s̄_{ε,2} ḡ_{ε,1} φ_{ε,1}/r̄_ε − c s̄_{ε,2} ḡ²_{ε,1} ψ_ε/r̄²_ε,

λ_ε φ_{ε,2} = ε²Δφ_{ε,2} − φ_{ε,2} + c η_{ε,1} ḡ²_{ε,2}/r̄_ε + 2c s̄_{ε,1} ḡ_{ε,2} φ_{ε,2}/r̄_ε − c s̄_{ε,1} ḡ²_{ε,2} ψ_ε/r̄²_ε,

τλ_ε ψ_ε = Dr Δψ_ε − ψ_ε + cα_ε η_{ε,2} ḡ²_{ε,1} + 2cα_ε s̄_{ε,2} ḡ_{ε,1} φ_{ε,1} + cα_ε η_{ε,1} ḡ²_{ε,2} + 2cα_ε s̄_{ε,1} ḡ_{ε,2} φ_{ε,2},

τλ_ε η_{ε,1} = Ds Δη_{ε,1} − η_{ε,1} + α_ε φ_{ε,1},

τλ_ε η_{ε,2} = Ds Δη_{ε,2} − η_{ε,2} + α_ε φ_{ε,2},

(6.3)

where all functions are in H²_N(Ω), and α_ε has been defined in (3.7).

For simplicity, we set τ = 0. Since τλ_ε << 1, the results in this section are also valid for τ finite. The case of general τ > 0 can be treated as in [18]. We will see that the small eigenvalues are of the order O(ε²). To compute them, we will need to expand the eigenfunction up to the order O(ε) term.

Let us define

g̃_{ε,j}(x) = χ( (x − x_j^ε)/r_0 ) ḡ_{ε,j}(x),  j = 1, 2,

(6.4)

where χ(x) is a smooth cut-off function such that χ(x) = 1 for |x| < 1 and χ(x) = 0 for |x| > 2. Further,

r_0 = (1/10) min( 1 + x2, 1 − x1, (1/2)|x1 − x2| ).

(6.5)

In a similar way as in Section 3, we define approximate kernel and co-kernel, but in contrast now we can use the exact solution given in Theorem 1:

K^new_{ε,x^ε} := span{ d g̃_{ε,1}/dx } ⊕ span{ d g̃_{ε,2}/dx } ⊂ (H²_N(Ω_ε))²,

C^new_{ε,x^ε} := span{ d g̃_{ε,1}/dx } ⊕ span{ d g̃_{ε,2}/dx } ⊂ (L²(Ω_ε))²,

(6.6)

where x^ε = (x_1^ε, x_2^ε) and Ω_ε = (−L/ε, L/ε). Then it is easy to see that

ḡ_{ε,i}(x) = g̃_{ε,i}(x) + e.s.t.,  i = 1, 2.


Note that, by Theorem 1, g̃_{ε,j}(x) ∼ t̂j w( (x − x_j^ε)/ε ) in H²_loc(Ω_ε), and g̃_{ε,j} satisfies

ε²Δg̃_{ε,j} − g̃_{ε,j} + c (g̃_{ε,j})² s̄_{ε,3−j}/r̄_ε + e.s.t. = 0,  j = 1, 2.

Thus g̃'_{ε,j} := d g̃_{ε,j}/dx satisfies

ε²Δg̃'_{ε,j} − g̃'_{ε,j} + 2c g̃_{ε,j} s̄_{ε,3−j} g̃'_{ε,j}/r̄_ε + c g̃²_{ε,j} s̄'_{ε,3−j}/r̄_ε − c g̃²_{ε,j} s̄_{ε,3−j} r̄'_ε/(r̄_ε)² + e.s.t. = 0.

(6.7)

Let us now decompose

φ_{ε,j} = ε a_j^ε g̃'_{ε,j} + φ⊥_{ε,j},  j = 1, 2,

(6.8)

with complex numbers a_j^ε, where the factor ε is for scaling purposes, chosen to achieve that a_j^ε is of order O(1), and

φ⊥_ε = (φ⊥_{ε,1}, φ⊥_{ε,2}) ∈ (K^new_{ε,x^ε})⊥,

where orthogonality is taken with respect to the scalar product of the product space (L²(Ω_ε))². Note that, by definition, this decomposition of φ_ε = (φ_{ε,1}, φ_{ε,2}) is unique. Suppose that ‖φ_ε‖_{H²(Ω_ε)} = 1. Then we need to have |a_j^ε| ≤ C.

Similarly, we decompose

ψ_ε = ε Σ_{j=1}^{2} a_j^ε ψ_{ε,j} + ψ⊥_ε,  η_{ε,j} = ε a_j^ε η⁰_{ε,j} + η⊥_{ε,j},  j = 1, 2,

(6.9)

where ψ_{ε,j} satisfies

Dr Δψ_{ε,j} − ψ_{ε,j} + 2α_ε c g̃_{ε,j} g̃'_{ε,j} s̄_{ε,3−j} + α_ε c g̃²_{ε,3−j} η⁰_{ε,j} = 0,

(6.10)

η⁰_{ε,i} is given by

Ds Δη⁰_{ε,i} − η⁰_{ε,i} + α_ε g̃'_{ε,i} = 0,

(6.11)

ψ⊥_ε satisfies

Dr Δψ⊥_ε − ψ⊥_ε + 2α_ε c g̃_{ε,1} s̄_{ε,2} φ⊥_{ε,1} + α_ε c g̃²_{ε,1} η⊥_{ε,2} + 2α_ε c g̃_{ε,2} s̄_{ε,1} φ⊥_{ε,2} + α_ε c g̃²_{ε,2} η⊥_{ε,1} = 0,

(6.12)

and finally η⊥_{ε,i} is given by

Ds Δη⊥_{ε,i} − η⊥_{ε,i} + α_ε φ⊥_{ε,i} = 0.

(6.13)

Substituting the decompositions of φ_{ε,i}, ψ_ε and η_{ε,i} into (6.3), we have

εc [ a_j^ε (g̃_{ε,j})² s̄_{ε,3−j} r̄'_ε/r̄²_ε − Σ_{k=1}^{2} a_k^ε (g̃_{ε,j})² s̄_{ε,3−j} ψ_{ε,k}/r̄²_ε ]

− εc [ a_j^ε (g̃_{ε,j})² s̄'_{ε,3−j}/r̄_ε − a_{3−j}^ε (g̃_{ε,j})² η⁰_{ε,3−j}/r̄_ε ]

+ ε²Δφ⊥_{ε,j} − φ⊥_{ε,j} + 2c g̃_{ε,j} s̄_{ε,3−j} φ⊥_{ε,j}/r̄_ε − c g̃²_{ε,j} s̄_{ε,3−j} ψ⊥_ε/r̄²_ε + c g̃²_{ε,j} η⊥_{ε,3−j}/r̄_ε − λ_ε φ⊥_{ε,j} + e.s.t.

= λ_ε ε a_j^ε g̃'_{ε,j},  j = 1, 2,

(6.14)

since

ε²Δg̃'_{ε,j} − g̃'_{ε,j} + 2c g̃_{ε,j} s̄_{ε,3−j} g̃'_{ε,j}/r̄_ε + e.s.t. = 0.

Multiplying both sides of (6.14) for j = 1, 2 by g̃'_{ε,l} for l = 1, 2 and integrating over (−L, L), we obtain

r.h.s. of (6.14) = λ_ε a_j^ε ε ∫_{−L}^{L} g̃'_{ε,j} g̃'_{ε,l} dx = λ_ε δ_{jl} a_l^ε (t̂_l)² ∫_R (w'(y))² dy (1 + o(1))

(6.15)

and

l.h.s. of (6.14) = cε Σ_{k=1}^{2} a_k^ε δ_{jl} ∫_{−L}^{L} ( (g̃_{ε,j})² s̄_{ε,3−j}/r̄²_ε ) ( δ_{jk} r̄'_ε − ψ_{ε,k} ) g̃'_{ε,l} dx

+ cε Σ_{k=1}^{2} a_k^ε δ_{jl} ∫_{−L}^{L} ( (g̃_{ε,j})²/r̄_ε ) ( δ_{j,3−k} η⁰_{ε,3−j} − δ_{jk} s̄'_{ε,3−j} ) g̃'_{ε,l} dx

+ cδ_{jl} ∫_{−L}^{L} ( (g̃_{ε,l})² s̄_{ε,3−l}/r̄_ε ) ( r̄'_ε/r̄_ε − s̄'_{ε,3−l}/s̄_{ε,3−l} ) φ⊥_{ε,l} dx

+ cδ_{jl} ∫_{−L}^{L} ( (g̃_{ε,j})² s̄_{ε,3−j}/r̄_ε ) ( η⊥_{ε,3−j}/s̄_{ε,3−j} − ψ⊥_ε/r̄_ε ) g̃'_{ε,l} dx + o(ε²)

= J_{1,l} + J_{2,l} + J_{3,l} + J_{4,l} =: J_l,

(6.16)

where J_{i,l}, i = 1, ..., 4, are defined by the last equality. The following is the key lemma for the asymptotic behavior of the small eigenvalues:

Lemma 2. We have

J_l = −ε² t̂_l ( ∫_R (1/3) w³ dy ) Σ_{k=1}^{2} a_k^ε { [ −t̂_l ∇_{x_l^ε} ∇_{x_k^ε} ( HDr(x_l^ε, x_l^ε) ) + t̂_{3−l} ∇_{x_l^ε} ∇_{x_k^ε} ( GDr(x_l^ε, x_{3−l}^ε) ) ]

− ∇_{x_l^ε} ( δ_{k,3−l} ∇_{x_{3−l}^ε} GDs(x_l^ε, x_{3−l}^ε) ) / GDs(x_l^ε, x_{3−l}^ε)

+ (∇_{x_k^ε} t̂_l(x_1^ε, x_2^ε)) ∇_{x_l^ε} GDr(x_l^ε, x_l^ε) + (∇_{x_k^ε} t̂_{3−l}(x_1^ε, x_2^ε)) ∇_{x_l^ε} GDr(x_l^ε, x_{3−l}^ε) } + o(ε²).

(6.17)

Lemma 2 follows from the following series of lemmas.

Lemma 3. We have

η⁰_{ε,k}(x_{3−k}^ε) = −t̂_k ∇_{x_k^ε} GDs(x_{3−k}^ε, x_k^ε) + O(ε).

(6.18)

Lemma 4. We have

s̄'_{ε,k}(x_{3−k}^ε) = t̂_k ∇_{x_{3−k}^ε} GDs(x_{3−k}^ε, x_k^ε) + O(ε).

(6.19)

Lemma 5. For k, l = 1, 2 we have

( δ_{kl} r̄'_ε − ψ_{ε,k} )(x_l^ε) = c t̂1 t̂2 { −t̂_l ∇_{x_k^ε} ( HDr(x_l^ε, x_l^ε) GDs(x_l^ε, x_{3−l}^ε) ) + t̂_{3−l} ∇_{x_k^ε} ( GDr(x_l^ε, x_{3−l}^ε) GDs(x_{3−l}^ε, x_l^ε) ) + (1/(2√Dr)) t̂_l ∇_{x_k^ε} GDs(x_l^ε, x_{3−l}^ε) } + O(ε).

(6.20)

Similar to Lemma 5, we get

Lemma 6. For k, l = 1, 2 we have

( δ_{kl} r̄'_ε − ψ_{ε,k} )(x_l^ε + εy) − ( δ_{kl} r̄'_ε − ψ_{ε,k} )(x_l^ε) = εy c t̂1 t̂2 { −t̂_l ∇_{x_l^ε} ∇_{x_k^ε} ( HDr(x_l^ε, x_l^ε) GDs(x_l^ε, x_{3−l}^ε) ) + t̂_{3−l} ∇_{x_l^ε} ∇_{x_k^ε} ( GDr(x_l^ε, x_{3−l}^ε) GDs(x_{3−l}^ε, x_l^ε) ) + (1/(2√Dr)) t̂_l ∇_{x_l^ε} ∇_{x_k^ε} GDs(x_l^ε, x_{3−l}^ε) } + O(ε²).

(6.21)

Lemma 2 will be shown in Appendix A, by proving Lemmas 3–6 first.

After obtaining the asymptotic behavior of the small eigenvalues, our next goal is to study their stability. Combining Lemma 2 with (6.15) and (6.16), the small eigenvalues λ_ε are given by the following two-dimensional eigenvalue problem, where (a_1^ε, a_2^ε) are the corresponding eigenvectors:

−ε² t̂_l ( ∫_R (1/3) w³ dy ) Σ_{k=1}^{2} a_k^ε { [ −t̂_l ∇_{x_l^ε} ∇_{x_k^ε} ( HDr(x_l^ε, x_l^ε) ) + t̂_{3−l} ∇_{x_l^ε} ∇_{x_k^ε} ( GDr(x_l^ε, x_{3−l}^ε) ) ]

− ∇_{x_l^ε} ( δ_{k,3−l} ∇_{x_{3−l}^ε} GDs(x_l^ε, x_{3−l}^ε) ) / GDs(x_l^ε, x_{3−l}^ε)

+ (∇_{x_k^ε} t̂_l(x_1^ε, x_2^ε)) ∇_{x_l^ε} GDr(x_l^ε, x_l^ε) + (∇_{x_k^ε} t̂_{3−l}(x_1^ε, x_2^ε)) ∇_{x_l^ε} GDr(x_l^ε, x_{3−l}^ε) } + o(ε²)

= λ_ε a_l^ε (t̂_l)² ∫_R (w'(y))² dy (1 + o(1)),  l = 1, 2.

(6.22)

From (6.22) it follows that the eigenvectors (a_1^0, a_2^0) = lim_{ε→0}(a_1^ε, a_2^ε) satisfy (a_1^0, a_2^0) = (1, −1) or (a_1^0, a_2^0) = (1, 1), up to a constant factor.

For the eigenvector (a_1^0, a_2^0) = (1, −1), the computations of the eigenvalue λ_1^ε are similar to those given in Section 3. We get

λ_1^ε = C3 ε² M'(x_1^ε) + o(ε²),

where

M(x) = −2θs tanh θs(L − x) + θr tanh θr(L − x) + θr ( sinh θr(L − x) − sinh θr(L + x) ) / ( cosh θr(L − x) + cosh θr(L + x) )

and

C3 = (1/(3 t̂_l)) ∫_R w³ dy / ∫_R (w'(y))² dy > 0.

(6.23)

This implies

M'(x) = 2θs²/cosh² θs(L − x) − θr²/cosh² θr(L − x) − θr² [ 1 − ( sinh θr(L − x) − sinh θr(L + x) )² / ( cosh θr(L − x) + cosh θr(L + x) )² ].

Obviously, M'(x) < 0 if θs = 0 or if θs is small compared to θr. A simple sufficient condition is obtained by taking into account the first two terms of M'(x); it has been derived in Section 3 and is given by (3.13). Recall that (3.13) holds if Ds/Dr > 4. Thus, if Ds/Dr > 4, the eigenvalue λ_1^ε has negative real part.

Now we consider the eigenvalue λ_2^ε with eigenvector such that lim_{ε→0}(a_1^ε, a_2^ε) = (1, 1).

Lemma 7. Suppose λ_2^ε is the eigenvalue with eigenvector lim_{ε→0}(a_1^ε, a_2^ε) = (1, 1). Then we have

λ_2^ε = C3 ε² P(x_1^ε, x_2^ε) + o(ε²),

(6.24)

where C3 > 0 has been defined in (6.23) and

P(x_1^ε, x_2^ε) = (∇_{x_1^ε} + ∇_{x_2^ε}) [ (∇_{x_1^ε} − ∇_{x_2^ε}) GDs(x_1^ε, x_2^ε) / GDs(x_1^ε, x_2^ε) − t̂_1(x_1^ε, x_2^ε)(∇_{x_1^ε} − ∇_{x_2^ε}) HDr(x_1^ε, x_1^ε) − t̂_2(x_1^ε, x_2^ε)(∇_{x_1^ε} − ∇_{x_2^ε}) GDr(x_1^ε, x_2^ε) ].

We have P(x_1^ε, x_2^ε) ≤ 0, with equality if and only if x_1 = x_2 = 0.

Lemma 7 will be proved in Appendix B.

By the argument of Dancer [2], the eigenvalue problem (6.22) captures all converging sequences of small eigenvalues λ_ε, and so λ_1^ε and λ_2^ε are all the o(1) eigenvalues for ε small enough. Therefore we have the following main result on the o(1) eigenvalues:

Theorem 3. Suppose Ds/Dr > 4 and lim_{ε→0} x_1^ε = x1 ≠ 0. The mutually exclusive, spiky steady state given in Theorem 1 is linearly stable with respect to small eigenvalues λ_ε = o(1) if τ ≥ 0 and ε > 0 are both small enough. More precisely, we have Re(λ_ε) ≤ −cε² for some c > 0 independent of ε and τ.
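The expression for M'(x) above can be checked against a finite-difference derivative of M(x); the sketch below uses sample values of θs, θr and L (assumed for illustration only, with θs small compared to θr) and also confirms M'(x) < 0 at several points:

```python
import math

def M(x, th_s, th_r, L):
    # M(x) as defined above
    N = math.sinh(th_r * (L - x)) - math.sinh(th_r * (L + x))
    D = math.cosh(th_r * (L - x)) + math.cosh(th_r * (L + x))
    return -2 * th_s * math.tanh(th_s * (L - x)) + th_r * math.tanh(th_r * (L - x)) + th_r * N / D

def Mp(x, th_s, th_r, L):
    # The closed form of M'(x) as stated above
    N = math.sinh(th_r * (L - x)) - math.sinh(th_r * (L + x))
    D = math.cosh(th_r * (L - x)) + math.cosh(th_r * (L + x))
    return (2 * th_s ** 2 / math.cosh(th_s * (L - x)) ** 2
            - th_r ** 2 / math.cosh(th_r * (L - x)) ** 2
            - th_r ** 2 * (1 - (N / D) ** 2))

th_s, th_r, L, h = 0.4, 2.0, 1.0, 1e-5   # sample parameters, theta_s << theta_r
for x in (-0.5, -0.1, 0.2, 0.6):
    fd = (M(x + h, th_s, th_r, L) - M(x - h, th_s, th_r, L)) / (2 * h)
    assert abs(fd - Mp(x, th_s, th_r, L)) < 1e-6   # closed form matches derivative
    assert Mp(x, th_s, th_r, L) < 0                # M' < 0 for small theta_s
```

The check is pure calculus and independent of the model; it only tests that the displayed M'(x) is indeed the derivative of the displayed M(x).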

7. Numerical Simulations

For the simulations we use the domain Ω = (−1,1) and Neumann boundary conditions for all components.

The constants in the five-component Meinhardt-Gierer system are chosen as follows:

ε² = 0.001, Dr = 0.1, Ds = 1, c = 1, τ = 1.

The pictures show the numerically obtained long-term limit of the five components g1, g2, r, s1, s2, i.e. the state at t = 3000. After that time the solution is numerically stable and does not change anymore. This confirms the analytical result that the steady state with two mutually exclusive spikes for the two activators, located in different positions, is stable.

Our simulations support the conjecture that the spikes are not only linearly stable as steady states but, at least locally, also dynamically stable for the parabolic reaction-diffusion system.

The choice of constants for the numerical simulations has been motivated by the analysis. In particular, Dr

has to be rather small compared to Ds by the stability result in Section 4. On the other hand, Dr cannot

be too small since otherwise by the results in Section 3 the distance between the spikes becomes very large

and there is no such solution on the interval (−1,1). So the parameters have to be chosen very carefully, and

without any analytical results it would be very hard to find the parameter range for which stable mutually

exclusive spikes exist.
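A minimal building block for simulations of this type is the spatial discretization with zero-flux boundary conditions. The following sketch is illustrative only (it is not the scheme used for the paper's figures): an explicit Euler step for a single diffusion equation u_t = D u_xx on (−1, 1) with Neumann conditions imposed by mirroring at the walls, together with a check that zero flux conserves mass:

```python
import math

def neumann_diffusion_step(u, D, dx, dt):
    # One explicit Euler step for u_t = D u_xx with zero-flux (Neumann) BCs,
    # implemented by mirroring the boundary cells.
    n = len(u)
    new = u[:]
    for i in range(n):
        left = u[i - 1] if i > 0 else u[0]        # mirror at the left wall
        right = u[i + 1] if i < n - 1 else u[-1]  # mirror at the right wall
        new[i] = u[i] + dt * D * (left - 2 * u[i] + right) / dx ** 2
    return new

n = 200
dx, dt, D = 2.0 / n, 1e-5, 0.1               # CFL number dt*D/dx^2 = 0.01, stable
u = [math.exp(-50 * (-1 + (i + 0.5) * dx) ** 2) for i in range(n)]  # bump at x = -1
mass0 = sum(u) * dx
for _ in range(100):
    u = neumann_diffusion_step(u, D, dx, dt)
assert abs(sum(u) * dx - mass0) < 1e-9       # zero flux => total mass conserved
assert max(u) < 1.0                          # discrete maximum principle holds
```

In an actual simulation of the five-component system, each of the components g1, g2, r, s1, s2 would get such a diffusion step (with its own diffusion constant) plus its reaction terms.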

The pictures show that the inhibitor r has two peaks which are near the peaks of the local activators g1 and g2. The profile of the peaks of r is “smoother” than for those of the local activators. The lateral activator si has a peak near the peak of gi, and its profile again is smoother than the latter.


Figure 1. The stable, mutually exclusive, two-spike steady state. All five components have been plotted to

highlight the interactions between them.

We expect Hopf bifurcation and oscillating spikes to occur for sufficiently large τ. We analyzed only the case τ = 0 and did not observe oscillations numerically for τ = 1. The instabilities of the spikes which we encountered in the numerical computations were (i) disappearance of spikes when their amplitudes become unstable (related to the large eigenvalues), which happens if the ratio of the diffusion constants Ds/Dr is too small, so that the stability condition of Theorem 2 fails, and (ii) movement of the spikes to the boundary when their positions become unstable (related to the small eigenvalues), which occurs if Dr is too small.


8. Appendix A: Proof of Lemma 2

In this Appendix we prove Lemma 2 in a sequence of lemmas. First we introduce some notation.

Using the notation (3.7), we introduce matrix notation: let

e = (1, 1)^T,  t̂ = (t̂1, t̂2)^T,  ∇_{x_i} t̂ = (∇_{x_i} t̂1, ∇_{x_i} t̂2)^T,  i = 1, 2,

G = (G_{ij}) = ( G(x_i, x_j) ),  ∇_{x_i} G = ( ∇_{x_i} G(x_k, x_l) ),  i, k, l = 1, 2.

Then we get

e = G t̂,  0 = (∇_{x_1} G) t̂ + G ∇_{x_1} t̂,  0 = (∇_{x_2} G) t̂ + G ∇_{x_2} t̂.

(8.25)

The system (8.25) has a unique solution (t̂, ∇_{x_1} t̂, ∇_{x_2} t̂) since det(G) ≠ 0, which can be written as follows:

t̂ = G^{−1} e,  ∇_{x_i} t̂ = −G^{−1} (∇_{x_i} G) G^{−1} e,  i = 1, 2.

(8.26)

Let us put

L̃_{ε,j} φ⊥_ε := ε²Δφ⊥_{ε,j} − φ⊥_{ε,j} + 2c g̃_{ε,j} s̄_{ε,3−j} φ⊥_{ε,j}/r̄_ε − c g̃²_{ε,j} s̄_{ε,3−j} ψ⊥_ε/r̄²_ε + c g̃²_{ε,j} η⊥_{ε,3−j}/r̄_ε

(8.27)

and a_ε := (a_1^ε, a_2^ε)^T. We now prove the key lemma, Lemma 2, in a sequence of lemmas.
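Before turning to the proofs, the identity (8.26) can be sanity-checked by finite differences; the sketch below uses a sample (assumed) one-parameter family G(s) of symmetric 2×2 matrices with nonzero determinant, the parameter s playing the role of a spike position:

```python
def G(s):
    # Sample smooth symmetric 2x2 matrix family (assumed for illustration)
    return [[2.0 + s, 0.5 + 0.3 * s], [0.5 + 0.3 * s, 2.0 - s]]

def solve2(A, b):
    # Solve the 2x2 linear system A x = b by Cramer's rule
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(b[0] * A[1][1] - b[1] * A[0][1]) / det,
            (A[0][0] * b[1] - A[1][0] * b[0]) / det]

h, s = 1e-6, 0.2
e = [1.0, 1.0]
t_plus, t_minus = solve2(G(s + h), e), solve2(G(s - h), e)
fd = [(t_plus[i] - t_minus[i]) / (2 * h) for i in range(2)]      # d/ds of G^{-1} e
dG = [[(G(s + h)[i][j] - G(s - h)[i][j]) / (2 * h) for j in range(2)] for i in range(2)]
t = solve2(G(s), e)                                              # t = G^{-1} e
Gt = [sum(dG[i][j] * t[j] for j in range(2)) for i in range(2)]  # (dG) G^{-1} e
pred = [-v for v in solve2(G(s), Gt)]                            # -G^{-1} (dG) G^{-1} e, cf. (8.26)
assert all(abs(fd[i] - pred[i]) < 1e-6 for i in range(2))
```

This is just the matrix form of differentiating G t̂ = e with respect to a parameter, which is exactly how (8.25) yields (8.26).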

Proof of Lemma 3: Note that for k = 3 − l we have

η⁰_{ε,k}(x_l^ε) = α_ε ∫_{−L}^{L} GDs(x_l^ε, z) g̃'_{ε,k}(z) dz + O(ε)

= α_ε t̂_k ∇_{x_k^ε} GDs(x_l^ε, x_k^ε) ∫_{−L}^{L} (z − x_k^ε) (1/ε) w'( (z − x_k^ε)/ε ) dz + O(ε)

= −t̂_k ∇_{x_k^ε} GDs(x_l^ε, x_k^ε) α_ε ε ( ∫_{−∞}^{∞} w(y) dy ) + O(ε) = −t̂_k ∇_{x_k^ε} GDs(x_l^ε, x_k^ε) + O(ε).

(8.28)  □

Proof of Lemma 4: Note that for k = 3 − l we have

s̄'_{ε,k}(x_l^ε) = α_ε ∇_{x_l^ε} ∫_{−L}^{L} GDs(x_l^ε, z) g̃_{ε,k}(z) dz = α_ε ∇_{x_l^ε} GDs(x_l^ε, x_k^ε) ∫_{−L}^{L} t̂_k w( (z − x_k^ε)/ε ) dz + O(ε)

= α_ε t̂_k ∇_{x_l^ε} GDs(x_l^ε, x_k^ε) ε ( ∫_{−∞}^{∞} w(y) dy ) + O(ε) = t̂_k ∇_{x_l^ε} GDs(x_l^ε, x_k^ε) + O(ε).

(8.29)  □
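The integration by parts used in (8.28), namely ∫_R y w'(y) dy = −∫_R w(y) dy for a decaying profile, is easy to confirm numerically (assuming the explicit ground state w(y) = (3/2) sech²(y/2) of (2.1)):

```python
import math

# int y w'(y) dy = -int w(y) dy for the assumed ground state w = 1.5 sech^2(y/2)
h = 1e-3
ys = [i * h for i in range(-30000, 30001)]
w = lambda y: 1.5 / math.cosh(y / 2) ** 2
wp = lambda y: -1.5 * math.tanh(y / 2) / math.cosh(y / 2) ** 2  # w'(y)
Iw = sum(w(y) for y in ys) * h
Iyw = sum(y * wp(y) for y in ys) * h
assert abs(Iw - 6.0) < 1e-6     # int w = 6 for this ground state
assert abs(Iyw + Iw) < 1e-6     # int y w' = -int w
```

The identity itself holds for any w decaying fast enough at infinity; the explicit profile is used here only to make the check concrete.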

Proof of Lemma 5: We first consider the case k = l and compute ψ_{ε,l}(x_l^ε) as follows:

ψ_{ε,l}(x_l^ε) = cα_ε ∫_{−L}^{L} GDr(x_l^ε, z) [ 2 g̃'_{ε,l} g̃_{ε,l} s̄_{ε,3−l} + g̃²_{ε,3−l} η⁰_{ε,l} ](z) dz + O(ε)

= c(α_ε)² ∫_{−∞}^{∞} KDr(|z|) 2 g̃_{ε,l}(x_l^ε + z) g̃'_{ε,l}(x_l^ε + z) ( ∫_{−L}^{L} GDs(x_l^ε + z, y) g̃_{ε,3−l}(y) dy ) dz

− c(α_ε)² ∫_{−L}^{L} HDr(x_l^ε, z) (d/dz)(g̃_{ε,l}(z))² ( ∫_{−L}^{L} GDs(z, y) g̃_{ε,3−l}(y) dy ) dz

+ c(α_ε)² ∫_{−L}^{L} GDr(x_l^ε, z) (g̃_{ε,3−l}(z))² ( ∫_{−L}^{L} GDs(z, y) g̃'_{ε,l}(y) dy ) dz + O(ε)

= c(α_ε)² ∫_{−∞}^{∞} KDr(|z|) 2 g̃_{ε,l}(x_l^ε + z) g̃'_{ε,l}(x_l^ε + z) ( ∫_{−L}^{L} GDs(x_l^ε + z, y) g̃_{ε,3−l}(y) dy ) dz

+ (c/2) t̂1 t̂2 t̂_l ( ∇_{x_l^ε} HDr(x_l^ε, x_l^ε) ) GDs(x_l^ε, x_{3−l}^ε) + c t̂1 t̂2 t̂_l HDr(x_l^ε, x_l^ε) ∇_{x_l^ε} GDs(x_l^ε, x_{3−l}^ε) − c t̂1 t̂2 t̂_{3−l} GDr(x_l^ε, x_{3−l}^ε) ∇_{x_l^ε} GDs(x_{3−l}^ε, x_l^ε) + O(ε).

(8.30)


Next we consider the case k = 3 − l and compute ψ_{ε,3−l}(x_l^ε) as follows:

ψ_{ε,3−l}(x_l^ε) = cα_ε ∫_{−L}^{L} GDr(x_l^ε, z) [ 2 g̃'_{ε,3−l} g̃_{ε,3−l} s̄_{ε,l} + g̃²_{ε,l} η⁰_{ε,3−l} ](z) dz + O(ε)

= c(α_ε)² ∫_{−∞}^{∞} KDr(|z|) (g̃_{ε,l}(x_l^ε + z))² ( ∫_{−L}^{L} GDs(x_l^ε + z, y) g̃'_{ε,3−l}(y) dy ) dz

− c(α_ε)² ∫_{−L}^{L} HDr(x_l^ε, z) (g̃_{ε,l}(z))² ( ∫_{−L}^{L} GDs(z, y) g̃'_{ε,3−l}(y) dy ) dz

+ c(α_ε)² ∫_{−L}^{L} GDr(x_l^ε, z) (d/dz)(g̃_{ε,3−l}(z))² ( ∫_{−L}^{L} GDs(z, y) g̃_{ε,l}(y) dy ) dz + O(ε)

= c(α_ε)² ∫_{−∞}^{∞} KDr(|z|) (g̃_{ε,l}(x_l^ε + z))² ( ∫_{−L}^{L} ∇_{x_{3−l}^ε} GDs(x_l^ε + z, y) g̃_{ε,3−l}(y) dy ) dz

+ c t̂1 t̂2 t̂_l HDr(x_l^ε, x_l^ε) ∇_{x_{3−l}^ε} GDs(x_l^ε, x_{3−l}^ε) − c t̂1 t̂2 t̂_{3−l} ( ∇_{x_{3−l}^ε} GDr(x_l^ε, x_{3−l}^ε) ) GDs(x_{3−l}^ε, x_l^ε) − c t̂1 t̂2 t̂_{3−l} GDr(x_l^ε, x_{3−l}^ε) ∇_{x_{3−l}^ε} GDs(x_{3−l}^ε, x_l^ε) + O(ε).

(8.31)

Next we compute r̄_ε(x_l^ε):

r̄_ε(x_l^ε) = α_ε c ∫_{−L}^{L} GDr(x_l^ε, z) [ g̃²_{ε,1} s̄_{ε,2} + g̃²_{ε,2} s̄_{ε,1} ](z) dz + O(ε)

= (α_ε)² c ∫_{−∞}^{∞} KDr(|z|) (g̃_{ε,l}(x_l^ε + z))² ( ∫_{−L}^{L} GDs(x_l^ε + z, y) g̃_{ε,3−l}(y) dy ) dz

− (α_ε)² c ∫_{−L}^{L} HDr(x_l^ε, z) (g̃_{ε,l}(z))² ( ∫_{−L}^{L} GDs(z, y) g̃_{ε,3−l}(y) dy ) dz

+ (α_ε)² c ∫_{−L}^{L} GDr(x_l^ε, z) (g̃_{ε,3−l}(z))² ( ∫_{−L}^{L} GDs(z, y) g̃_{ε,l}(y) dy ) dz + O(ε).

So we have

r̄'_ε(x_l^ε) = (α_ε)² c ∫_{−∞}^{∞} KDr(|z|) [ 2 g̃_{ε,l}(x_l^ε + z) g̃'_{ε,l}(x_l^ε + z) ( ∫_{−L}^{L} GDs(x_l^ε + z, y) g̃_{ε,3−l}(y) dy ) + (g̃_{ε,l}(x_l^ε + z))² ( ∫_{−L}^{L} ∇_{x_l^ε} GDs(x_l^ε + z, y) g̃_{ε,3−l}(y) dy ) ] dz

− (α_ε)² c ∫_{−L}^{L} ( ∇_{x_l^ε} HDr(x_l^ε, z) ) (g̃_{ε,l}(z))² ( ∫_{−L}^{L} GDs(z, y) g̃_{ε,3−l}(y) dy ) dz

+ (α_ε)² c ∫_{−L}^{L} ( ∇_{x_l^ε} GDr(x_l^ε, z) ) (g̃_{ε,3−l}(z))² ( ∫_{−L}^{L} GDs(z, y) g̃_{ε,l}(y) dy ) dz + O(ε)

= (α_ε)² c ∫_{−∞}^{∞} KDr(|z|) [ 2 g̃_{ε,l}(x_l^ε + z) g̃'_{ε,l}(x_l^ε + z) ( ∫_{−L}^{L} GDs(x_l^ε + z, y) g̃_{ε,3−l}(y) dy ) + (g̃_{ε,l}(x_l^ε + z))² ( ∫_{−L}^{L} ∇_{x_l^ε} GDs(x_l^ε + z, y) g̃_{ε,3−l}(y) dy ) ] dz

− (c/2) t̂1 t̂2 t̂_l ( ∇_{x_l^ε} HDr(x_l^ε, x_l^ε) ) GDs(x_l^ε, x_{3−l}^ε) + c t̂1 t̂2 t̂_{3−l} ( ∇_{x_l^ε} GDr(x_l^ε, x_{3−l}^ε) ) GDs(x_{3−l}^ε, x_l^ε) + O(ε).

(8.32)

Now we compute ( δ_{kl} r̄'_ε − ψ_{ε,k} )(x_l^ε). Again we consider the two cases k = l and k ≠ l separately. First, for k = l, we get

( r̄'_ε − ψ_{ε,l} )(x_l^ε) = c t̂1 t̂2 { −t̂_l ∇_{x_l^ε} ( HDr(x_l^ε, x_l^ε) GDs(x_l^ε, x_{3−l}^ε) ) + t̂_{3−l} ∇_{x_l^ε} ( GDr(x_l^ε, x_{3−l}^ε) GDs(x_{3−l}^ε, x_l^ε) ) }

+ (α_ε)² c ∫_{−∞}^{∞} KDr(|z|) (g̃_{ε,l}(x_l^ε + z))² ( ∫_{−L}^{L} ∇_{x_l^ε} GDs(x_l^ε + z, y) g̃_{ε,3−l}(y) dy ) dz + O(ε)

= c t̂1 t̂2 { −t̂_l ∇_{x_l^ε} ( HDr(x_l^ε, x_l^ε) GDs(x_l^ε, x_{3−l}^ε) ) + t̂_{3−l} ∇_{x_l^ε} ( GDr(x_l^ε, x_{3−l}^ε) GDs(x_{3−l}^ε, x_l^ε) ) + (1/(2√Dr)) t̂_l ∇_{x_l^ε} GDs(x_l^ε, x_{3−l}^ε) } + O(ε).

Next we consider the case k = 3 − l and get

( −ψ_{ε,3−l} )(x_l^ε) = c t̂1 t̂2 { −t̂_l ∇_{x_{3−l}^ε} ( HDr(x_l^ε, x_l^ε) GDs(x_l^ε, x_{3−l}^ε) ) + t̂_{3−l} ∇_{x_{3−l}^ε} ( GDr(x_l^ε, x_{3−l}^ε) GDs(x_{3−l}^ε, x_l^ε) ) }

+ (α_ε)² c ∫_{−∞}^{∞} KDr(|z|) (g̃_{ε,l}(x_l^ε + z))² ( ∫_{−L}^{L} ∇_{x_{3−l}^ε} GDs(x_l^ε + z, y) g̃_{ε,3−l}(y) dy ) dz + O(ε)

= c t̂1 t̂2 { −t̂_l ∇_{x_{3−l}^ε} ( HDr(x_l^ε, x_l^ε) GDs(x_l^ε, x_{3−l}^ε) ) + t̂_{3−l} ∇_{x_{3−l}^ε} ( GDr(x_l^ε, x_{3−l}^ε) GDs(x_{3−l}^ε, x_l^ε) ) + (1/(2√Dr)) t̂_l ∇_{x_{3−l}^ε} GDs(x_l^ε, x_{3−l}^ε) } + O(ε).

This implies (6.20). The proof of Lemma 5 is finished.  □

Remark: Note that Lemma 5 can be written in the simpler form

( δ_{kl} r̄'_ε − ψ_{ε,k} )(x_l^ε) = c t̂1 t̂2 { −t̂_l ∇_{x_k^ε} ( GDr(x_l^ε, x_l^ε) GDs(x_l^ε, x_{3−l}^ε) ) + t̂_{3−l} ∇_{x_k^ε} ( GDr(x_l^ε, x_{3−l}^ε) GDs(x_{3−l}^ε, x_l^ε) ) } + O(ε),

(8.33)

with the understanding that at jump discontinuities the derivative is defined as the arithmetic mean of its left-hand and right-hand derivatives.

Proof of Lemma 6: The proof of Lemma 6 follows along the same lines as that of Lemma 5 and is therefore omitted.  □

Before we can complete the proof of Lemma 2, we need to study the asymptotic expansion of φ⊥_ε as ε → 0. Let us denote

φ¹_ε = (φ¹_{ε,1}, φ¹_{ε,2}) := ε a_1^ε ( (∇_{x_1} t̂1) w_1, (∇_{x_1} t̂2) w_2 )^T + ε a_2^ε ( (∇_{x_2} t̂1) w_1, (∇_{x_2} t̂2) w_2 )^T + ε G^{−1} W A⁰_ε ( ∇GDs(x_1, x_2) / GDs(x_1, x_2) ),

(8.34)

where w_i, i = 1, 2, have been defined in (2.3) and

A⁰_ε = ( 0  a_2^ε ; a_1^ε  0 ),  W = ( w_1  0 ; 0  w_2 ).

Then we have the following estimate.

Lemma 8. For ε sufficiently small, it holds that

‖φ⊥_ε − φ¹_ε‖_{(H²(Ω_ε))²} = O(ε²).

(8.35)

Proof: To prove Lemma 8, we first need to derive a relation between φ⊥_{ε,j}, η⊥_{ε,j} and ψ⊥_ε. Note that, similarly to the proof of Proposition 1 in Section 3, it follows that L̃_ε is uniformly invertible from (K^new_{ε,x^ε})⊥ to (C^new_{ε,x^ε})⊥. By this uniform invertibility, we deduce that

‖φ⊥_ε‖_{(H²(Ω_ε))²} = O(ε),

where φ⊥_ε = (φ⊥_{ε,1}, φ⊥_{ε,2})^T ∈ (K^new_{ε,x^ε})⊥. Let us cut off and re-scale φ⊥_{ε,j} as follows:

φ̃_{ε,j} = (1/ε) φ⊥_{ε,j} χ( (x − x_j^ε)/r_0 ),  so that  φ⊥_{ε,j} = ε φ̃_{ε,j} + e.s.t.

(8.36)

We normalize so that ‖φ̃_{ε,j}‖_{H¹(R)} = 1. Then we have, possibly for a subsequence, that φ̃_{ε,j} → φ_j in H¹_loc(R).

By (6.12) and (6.13), ψ⊥_ε can be represented as follows (the proof is similar to that of Lemma 5):

ψ⊥_ε(x_j^ε) = ε(α_ε)² c Σ_{k=1}^{2} ∫_{−L}^{L} GDr(x_j^ε, z) [ 2 g̃_{ε,k}(z) φ̃_{ε,k}(z) ∫_{−L}^{L} GDs(z, y) g̃_{ε,3−k}(y) dy + g̃²_{ε,k}(z) ∫_{−L}^{L} GDs(z, y) φ̃_{ε,3−k}(y) dy ] dz

= ε α_ε c Σ_{k=1}^{2} GDr(x_j^ε, x_k^ε) GDs(x_k^ε, x_{3−k}^ε) [ 2 t̂_{3−k} ∫_{−L}^{L} g̃_{ε,k} φ̃_{ε,k} dx + (t̂_k)² ∫_{−L}^{L} φ̃_{ε,3−k} dx ] + o(ε)

= ( εc / ( GDr(x_1^ε, x_1^ε) + GDr(x_1^ε, x_2^ε) ) ) [ GDr(x_j^ε, x_j^ε) GDs(x_j^ε, x_{3−j}^ε) ( 2 t̂_{3−j} (∫_R wφ_j dy)/(∫_R w² dy) + t̂_j (∫_R φ_{3−j} dy)/(∫_R w dy) )

+ GDr(x_j^ε, x_{3−j}^ε) GDs(x_{3−j}^ε, x_j^ε) ( 2 t̂_j (∫_R wφ_{3−j} dy)/(∫_R w² dy) + t̂_{3−j} (∫_R φ_j dy)/(∫_R w dy) ) ] + o(ε).

(8.37)

In the same way, we calculate

η⊥_{ε,3−j}(x_j^ε) = ε α_ε ∫_{−L}^{L} GDs(x_j^ε, z) φ̃_{ε,3−j}(z) dz = ε α_ε GDs(x_j^ε, x_{3−j}^ε) ∫_{−L}^{L} φ̃_{ε,3−j} dx + O(ε²) = ε GDs(x_j^ε, x_{3−j}^ε) (∫_R φ_{3−j} dy)/(∫_R w dy) + o(ε)

(8.38)

and

η⊥_{ε,j}(x_j^ε) = o(ε).

(8.39)

Substituting (6.18), (6.19), (6.20), (8.37), (8.38) into (6.14) and calculating the limit ? → 0 as we have done

in Section 4, it follows that φ = (φ1,φ2)Tsatisfies

Lφ = ∆φ − φ + 2wφ −

?

B

?

φ + 2C

??

Rwφ

????

Rw

?−1

w2=ˆt1(a · ∇G)G−1ew2−

ˆt1A0∇GDs(x1,x2)

GDs(x1,x2)

w2.

(8.40)

In the previous calculation we have used (4.9), (4.10), (8.26), the notations

a = (a1,a2)T= lim

?→0(a?

1,a?

2)T,

a · ∇ = a1∇x1+ a2∇x2,

?

xj= lim

?→0x?

j, j = 1,2,

A0=

0

a2

a1

0

?

and (compare Section 2)

¯ r?(x?

j) = cˆt1ˆt2GDs(x?

j,x?

3−j) + O(?),

j,x?

j = 1,2,

(8.41)

¯ s?,3−j(x?

j) =ˆt3−jGDs(x?

3−j) + O(?),j = 1,2.

(8.42)

We compute

Id − B − 2C = −

1

GDr(x1,x1) + GDr(x1,x2)

?GDr(x1,x1)

GDr(x1,x2)

GDr(x2,x2)

GDr(x1,x2)

?

= −ˆt1G.

By the Fredholm alternative and since det(G) ?= 0, equation (8.40) has a unique solution φ which is given by

φ = −G−1(a · ∇G)G−1ew +G−1A0∇GDs(x1,x2)

Now we compare φ with φ1

?

= ?(a?· ∇x?ˆt)w + ?G−1A0∇GDs(x1,x2)

GDs(x1,x2)

GDs(x1,x2)

w.

(8.43)

?. By definition and using (8.26), we get

?

φ1

?=

?

?

a?

1∇x?

1ˆt1+ a?

2∇x?

2ˆt1

˜ g?,1, ?

?

a?

1∇x?

1ˆt2+ a?

2∇x?

2ˆt2

?

˜ g?,2

?T+ ?G−1WA0∇GDs(x1,x2)

w + o(?)

GDs(x1,x2)

= −?G−1(a · ∇G)G−1ew + ?G−1A0∇GDs(x1,x2)

On the other hand, using (8.43) gives

?˜φ?,1,˜φ?,2

GDs(x1,x2)

w + o(?).

(8.44)

φ⊥

?= ?

?T+ e.s.t. = ?

?

φj

?x − t?

j

?

??

j=1,2

+ o(?)

Page 21

MEINHARDT-GIERER SYSTEM 21

= −?G−1(a · ∇G)G−1ew + ?G−1A0∇GDs(x1,x2)

From (8.44) and (8.45), it follows that φ?= φ1

GDs(x1,x2)

w + o(?).

(8.45)

?+ o(1).

?

Finally, we complete the proof of the key lemma – Lemma 2.

Proof of Lemma 2: The computation of J1 follows from the Lemmas 5 and 6 and the equations (8.41),

(8.42). We get

2

?

2

?

2

?

= −?2ˆtl

R

k=1

?GDr(x?

??ˆtlGDr(x?

−

?GDr(x?

×

??

k=1

?∇x?

GDs(x?

?

?

In the previous computation of J1,lwe have used the condition for the positions of the spikes given in the

derivation of Theorem 1 which implies that

¯ r?

second line in the previous computation has only a contribution which was included into the error terms. We

will use the same condition in the computation of the other Ji,lwithout explicitly mentioning it again.

Similarly, we compute J2,l. We get

?L

?L

??

k=1

Note that we need to have k = 3 − j and j = l; otherwise J2,lis of the order o(?2).

The estimate J3,l= o(?2) follows by the fact that φ⊥

J1,l= c?

k=1

a?

kδjl

?L

−L

c(˜ g?,j)2¯ s?,3−j

¯ r?

?

?

δkl¯ r

?

?

¯ r?

−ψ?,k

¯ r?

?

?

˜ g

?

?,ldx

= ?

k=1

a?

kδjl

?L

c(˜ g?,j)2¯ s?,3−j

¯ r?

?

−Lc(˜ g?,j)2¯ s?,3−j

¯ r?

??

?

(x?

l)

δkl¯ r

?

?

¯ r?

−ψ?,k

¯ r?

˜ g

?

?,ldx

+?

k=1

a?

kδjl

?L

−L

δkl¯ r

?

?

¯ r?

−ψ?,k

¯ r?

?

(x?

l)

?

˜ g

?

?,ldx + o(?2)

??

1

3w3dy

2

?

3−l)GDs(x?

a?

k

?

∇x?

l

−ˆtl∇x?

k

?HDr(x?

1

2√Dr

?

l)GDs(x?

l,x?

l)GDs(x?

l,x?

3−l)?

3−l) +ˆt3−l∇x?

k

l,x?

3−l,x?

l)?+

l,x?

ˆtl∇x?

kGDs(x?

l,x?

?

l,x?

?

l) +ˆt3−lGDr(x?

?HDr(x?

l,x?

3−l)

GDs(x?

l,x?

3−l)

?−1

−ˆtl∇x?

k

l,x?

l,x?

3−l)?

ˆtl∇x?

?−2?

+ˆt3−l∇x?

k

3−l)GDs(x?

3−l,x?

??

l)?+

1

2√Dr

kGDs(x?

l,x?

3−l)

?

?

∇x?

?

lGDs(x?

l,x?

3−l)

GDs(x?

l,x?

3−l)+ o(?2)

= −?2ˆtl

R

1

3w3dy

2

?

a?

k

??

−ˆtl∇x?

l∇x?

kHDr(x?

l,x?

l) +ˆt3−l∇x?

?

l∇x?

kGDr(x?

l,x?

3−l)

?

+∇x?

l

kGDs(x?

l,x?

3−l)

3−l)

l,x?

−−ˆtl∇x?

kHDr(x?

l,x?

l) +ˆt3−l∇x?

kGDr(x?

l,x?

??

3−l)

?

×−ˆtl∇x?

lHDr(x?

l,x?

l) +ˆt3−l∇x?

lGDr(x?

l,x?

3−l)+ o(?2).

¯ s?,3−j

(x?

j) = O(?). More precisely, this condition implies that the

J2,l= ?

2

?

c(˜ g?,j)2¯ s?,3−j

¯ r?

k=1

a?

k

−L

c(˜ g?,j)2¯ s?,3−j

¯ r?

?(δjk∇x?

?

δ3−j,k

η0

¯ s?,3−j

3−j)GDs(x?

GDs(x?

?(δkl∇x?

GDs(x?

?,3−j

− δjk

¯ s

¯ s?,3−j

j,x?

?

?,3−j

?

˜ g

?

?,ldx

= −?

2

?

k=1

a?

k

−L

1

3w3(y)dy

j+ δ3−j,k∇x?

3−j)

j,x?

3−j)

?

3−l)

˜ g

?

?,ldx + o(?2)

= ?2ˆtl

R

?

2

?

a?

k∇x?

l

l+ δk,3−l∇x?

3−l)GDs(x?

l,x?

l,x?

3−l)

?

+ o(?2).

?,j⊥ ˜ g?,j.

Page 22

22JUNCHENG WEI AND MATTHIAS WINTER

Next we determine J4,l. We compute, using (8.37), (8.38) and Lemma 7, that

?L

??

k=1

?

?

Here we have used the relation

?L

which follows from the trivial identity

J4,l= cδjl

−L

(˜ g?,j)2¯ s?,3−j

¯ r?

?η⊥

¯ s?,3−j

?,3−j

−ψ⊥

¯ r?

?

?

˜ g

?

?,ldx

= −?2ˆtl

R

1

3w3dy

?

2

?

(∇x?

a?

k

??

(∇x?

kˆtl(x?

1,x?

2))∇x?

lGDr(x?

l,x?

l) + (∇x?

kˆt3−l(x?

1,x?

2))∇x?

?

lGDr(x?

l,x?

3−l)

?

−

kˆtl(x?

1,x?

2))GDr(x?

l,x?

l) + (∇x?

kˆt3−l(x?

1,x?

2))GDr(x?

??

l,x?

3−l)

×−ˆtl∇x?

lHDr(x?

l,x?

l) +ˆt3−l∇x?

lGDr(x?

l,x?

3−l)+ o(?2).

−L

c˜ g2

?,j¯ s?,3−j

¯ r?

η⊥

?,3−j

¯ s?,3−j?˜ g

?

?,jdx = o(?2)

∇x?

l

?GDs(x?

GDs(x?

j,x?

j,x?

3−j)

3−j)

?

= 0.

In a similar way, using the identity

∇x?

l

?ˆtjGDr(x?

j,x?

j,x?

j) +ˆt3−jGDr(x?

j) +ˆt3−jGDr(x?

j,x?

j,x?

3−j)

3−j)

1,x?

ˆtjGDr(x?

?

= 0,

it can be seen that the contribution of the term −?G−1WA0∇GDs(x?

Adding J1,l, J2,land J4,lwe get

??

2)

GDs(x?

1,x?

2)

in ψ⊥

?to J4,lis of the order o(?2).

Jl= −?2ˆtl

R

1

3w3dy

?

2

?

k=1

a?

k

??

−ˆtl∇x?

?δkl∇x?

l∇x?

kHDr(x?

l,x?

l) +ˆt3−l∇x?

?

l∇x?

kGDr(x?

l,x?

3−l)

?

+∇x?

l

lGDs(x?

GDs(x?

??

l+ δk,3−l∇x?

GDs(x?

l,x?

3−l)

3−l)

l,x?

−

?

−ˆtl∇x?

kHDr(x?

l,x?

l) +ˆt3−l∇x?

kGDr(x?

?(δkl∇x?

l,x?

3−l)

−ˆtl∇x?

3−l)GDs(x?

l,x?

lHDr(x?

l,x?

l) +ˆt3−l∇x?

3−l)

lGDr(x?

l,x?

3−l)

?

−∇x?

l

l,x?

3−l)

kˆt3−l(x?

?

+

??

∇x?

??

kˆtl(x?

1,x?

2)

?

∇x?

lGDr(x?

?

l,x?

l) +

?

?

∇x?

1,x?

2)

?

?

∇x?

lGDr(x?

l,x?

3−l)

?

?

−∇x?

?

kˆtl(x?

1,x?

2)

GDr(x?

l,x?

l) +

∇x?

kˆt3−l(x?

1,x?

2)

GDr(x?

??

l,x?

3−l)

×−ˆtl∇x?

lHDr(x?

l,x?

l) +ˆt3−l∇x?

lGDr(x?

l,x?

3−l)+ o(?2).

This expression consists of 3+1+2=6 parts, which are given in one line each, with the exception of the last

part which is given in the last two lines. Part 3 is minus Part 6 (up to o(?2)) by (8.25) and they cancel. Part

2 and Part 4 cancel partially.

Making these simplifications, we finally get

??

k=1

?δk,3−l∇x?

GDs(x?

?

This finishes the proof of Lemma 2.

Jl= −?2ˆtl

R

1

3w3dy

?

2

?

a?

k

??

−ˆtl∇x?

l∇x?

k(HDr(x?

l,x?

l)) +ˆt3−l∇x?

?

l∇x?

k

?GDr(x?

l,x?

3−l)??

−∇x?

l

3−lGDs(x?

l,x?

l,x?

3−l)

3−l)

+(∇x?

kˆtl(x?

1,x?

2))∇x?

lGDr(x?

l,x?

l) + (∇x?

kˆt3−l(x?

1,x?

2))∇x?

lGDr(x?

l,x?

3−l)

??

+ o(?2).

Page 23

MEINHARDT-GIERER SYSTEM23

?

9. Appendix B: Proof of Lemma 7

Proof of Lemma 7:

We show that

P(x?

1,x?

2) = (∇x?

1+ ∇x?

2)

?(∇x?

1− ∇x?

GDs(x?

2)GDs(x?

1,x?

1,x?

2)

2)

−ˆt?

1(x?

1,x?

2)(∇x?

1− ∇x?

2)HDr(x?

1,x?

1) −ˆt?

2(x?

1,x?

2)(∇x?

1− ∇x?

2)HDr(x?

1,x?

1)

?

< 0.

We compute

(∇x?

1+ ∇x?

2)GDs(x?

1,x?

2) = 0,

and

(∇x?

1+ ∇x?

2)(∇x?

1− ∇x?

2)GDs(x?

1,x?

2) = ((∇x?

1)2− (∇x?

2)2)GDs(x?

1,x?

2) = 0.

Therefore, the first term coming from GDsgives no contribution at all.

Further, we get

(∇x?

1+ ∇x?

2)ˆt?

1(x?

1,x?

2) =∇x?

2GDr(x?

detG

2,x?

2)

.

To simplify the previous expression, we use the identity

(∇x?

1+ ∇x?

2)(detG) = 0.

(9.46)

which is easy to derive.

Using (9.46), we get

(∇x?

1+ ∇x?

2)ˆt?

1(x?

1,x?

2) =∇x?

2GDr(x?

detG

2,x?

2)

(9.47)

which gives

−[(∇x?

1+ ∇x?

2)ˆt?

1(x?

1,x?

2)](∇x?

1− ∇x?

2)GDr(x?

1,x?

1) = −∇x?

2GDr(x?

detG

2,x?

2)

∇x?

1GDr(x?

1,x1?).

=∇x?

1GDr(x?

detG

1,x?

1)

∇x?

1GDr(x?

1,x?

1).

(9.48)

In analogy to (9.47), we get

(∇x?

1+ ∇x?

2)ˆt?

2(x?

1,x?

2) =∇x?

1GDr(x?

detG

1,x?

1)

(9.49)

which implies

−[(∇x?

1+ ∇x?

2)ˆt?

2(x?

1,x?

2)](∇x?

1− ∇x?

2)GDr(x?

1,x?

2) = −∇x?

1GDr(x?

detG

1,x?

1)

2∇x?

1GDr(x?

1,x?

2).

(9.50)

Finally, we compute

−ˆt?

1(x?

1,x?

2)(∇x?

1+ ∇x?

= −GDr(x?

2)(∇x?

1− ∇x?

2,x?

detG

2)GDr(x?

1,x?

1) = −ˆt?

2)

∇2

1(x?

1,x?

2)∇2

x?

1GDr(x?

1,x?

1)

2) − GDr(x?

1,x?

x?

1GDr(x?

1,x?

1).

(9.51)

Now P(x?

Using the explicit expression of the Green’s function (2.6), we get for the sum of (9.48) and (9.50):

∇x?

detG

=

sinh22θrLdetGsinh(2θr− x?

For (9.51), we get

−GDr(x?

detG

= −

1,x?

2) is given by the sum of (9.48), (9.50) and (9.51).

1GDr(x?

1,x?

1)

?

∇x?

1GDr(x?

1,x?

1) − 2∇x?

1GDr(x?

1,x?

2)

?

θ4

r

1)[sinh2θrx?

1+ sinh2θr(L − x?

1)].

2,x?

2) − GDr(x?

1,x?

2)

∇2

x?

1GDr(x?

1,x?

1)

θ4

r

sinh22θrLdetGcosh2θr(L + x?

2)[coshθr(L − x?

2) − coshθr(L − x?

1)]2cosh2θrx?

1.

Page 24

24 JUNCHENG WEI AND MATTHIAS WINTER

Adding all up, we get

P(x?

1,x?

2) =

θ4

r

sinh22θrLdetG

?

− 2cosh2θr(L + x?

2)[coshθr(L − x?

2) − coshθr(L − x?

?

?

1)]cosh2θrx?

1

+sinh2θrx?

1[sinh2θrx?

1+ sinh2θr(L − x?

1)]

=

θ4

r

sinh22θrLdetG

1we have

?

cosh2θrL · [1 − cosh2θrx?

1]

.

Note that for x1= lim?→0x?

cosh2θrL · [1 − cosh2θrx1] ≤ 0

and

cosh2θrL · [1 − cosh2θrx1] = 0 if and only if x1= 0.

1,x?

Therefore, if x1?= 0, then for ? small enough we have P(x?

This concludes the proof of Lemma 7.

2) < 0.

?

References

[1] H. BOHN, Interkalare Regeneration und segmentale Gradienten bei den Extremit¨ aten von Leucophaea-Larven, Wilhelm

Roux’ Arch., 165 (1970), pp. 303–341.

[2] E. N. DANCER , On stability and Hopf bifurcations for chemotaxis systems, Methods Appl. Anal., 8 (2001), pp. 245–256.

[3] M. DEL PINO, M. KOWALCZYK and X. CHEN, The Gierer-Meinhardt system: the breaking of homoclinics and multi-

bump ground states, Commun. Contemp. Math., 3 (2001), pp. 419–439.

[4] A. DOELMAN, R. A. GARDNER, and T. J. KAPER, Stability analysis of singular patterns in the 1D Gray-Scott model:

a matched asymptotics approach, Phys. D, 122 (1998), pp. 1-36.

[5] A. DOELMAN, R. A. GARDNER, and T. J. KAPER, Large stable pulse solutions in reaction-diffusion equations, Indiana

Univ. Math. J., 50 (2001), pp. 443–507.

[6] A. GIERER and H. MEINHARDT, A theory of biological pattern formation, Kybernetik (Berlin), 12 (1972), pp. 30–39.

[7] D. GILBARG and N.S. TRUDINGER, Elliptic Partial Differential Equations of Second Order (Grundlehren Math. Wiss.,

Vol. 224), Springer, Berlin Heidelberg New York, 1983.

[8] C. GUI and J. WEI, Multiple interior peak solutions for some singular perturbation problems, J. Differential Equations,

158 (1999), pp. 1–27.

[9] D. IRON, M. WARD, and J. WEI, The stability of spike solutions to the one-dimensional Gierer-Meinhardt model, Phys.

D, 150 (2001), pp. 25–62.

[10] H. MEINHARDT, Models of biological pattern formation, Academic Press, London, 1982.

[11] H. MEINHARDT and A. GIERER, Generation and regeneration of sequences of structures during morphogenesis, J.

Theor. Biol., 85 (1980), pp. 429–450.

[12] W. SUN, M.J. WARD and R. RUSSELL, The slow dynamics of two-spike solutions for the Gray-Scott and Gierer-

Meinhardt systems: competition and oscillatory instabilities, SIAM J. Appl. Dyn. Syst., 4 (2005), pp. 904–953.

[13] A. M. TURING, The chemical basis of morphogenesis, Phil. Trans. Roy. Soc. Lond. B, 237 (1952), pp. 37–72.

[14] M.J. WARD and J. WEI, Hopf bifurcations and oscillatory instabilities of spike solutions for the one-dimensional

Gierer-Meinhardt model, J. Nonlinear Sci., 13 (2003), pp. 209–264.

[15] J. WEI, On single interior spike solutions of Gierer-Meinhardt system: uniqueness, spectrum estimates and stability

analysis, Euro. J. Appl. Math., 10 (1999), pp. 353–378.

[16] J. WEI and M. WINTER, Stationary solutions for the Cahn-Hilliard equation, Ann. Inst. H. Poincaré Anal. Non Linéaire,

15 (1998), pp. 459–492.

[17] J. WEI and M. WINTER, On the two-dimensional Gierer-Meinhardt system with strong coupling, SIAM J. Math. Anal.,

30 (1999), pp. 1241–1263.

[18] J. WEI and M. WINTER, Spikes for the two-dimensional Gierer-Meinhardt system: the weak coupling case, J. Nonlinear

Sci., 11 (2001), pp. 415–458.

[19] J. WEI and M. WINTER, Spikes for the two-dimensional Gierer-Meinhardt system: The strong coupling case, J. Differ-

ential Equations, 178 (2002), pp. 478–518.

Department of Mathematics, The Chinese University of Hong Kong, Shatin, Hong Kong, China

E-mail address: wei@math.cuhk.edu.hk

Brunel University, Department of Mathematical Sciences, Uxbridge UB8 3PH, United Kingdom

E-mail address: matthias.winter@brunel.ac.uk