
IMA Journal of Numerical Analysis (2013) Page 1 of 24

doi:10.1093/imanum/drs035

Full discretization of the stochastic Burgers equation with correlated noise

Dirk Blömker∗

Institut für Mathematik, Universität Augsburg, 86135 Augsburg, Germany

∗Corresponding author: dirk.bloemker@math.uni-augsburg.de

Minoo Kamrani and S. Mohammad Hosseini

Department of Applied Mathematics, Tarbiat Modares University, P.O. Box 14115-175, Tehran, Iran

[Received on 8 December 2011; revised on 13 May 2012]

The main purpose of this paper is to investigate the spectral Galerkin method for spatial discretization. We combine it with the method introduced by Jentzen et al. (2011, Efficient simulation of nonlinear parabolic SPDEs with additive noise. Ann. Appl. Probab., 21, 908–950) for temporal discretization of stochastic partial differential equations and study pathwise convergence. We consider the case of coloured noise, instead of the usual space-time white noise that was used before for the spatial discretization. The rate of convergence in the uniform topology is estimated for the stochastic Burgers' equation. Numerical examples illustrate the estimated convergence rate.

Keywords: stochastic partial differential equations; coloured noise; Galerkin approximation; stochastic Burgers' equation.

1. Introduction

In this article, the numerical approximation of nonlinear parabolic stochastic partial differential equations (SPDEs) is considered. Following the ideas of Blömker & Jentzen (2009) for the case of space-time white noise, a numerical method for simulating nonlinear SPDEs with additive noise in the case of coloured noise is proposed and analysed. The main novelty of this article is the estimation of the spatial and temporal discretization error in the L^∞ topology in the case of coloured noise. This differs from the usual space-time white noise, which was considered before in Blömker & Jentzen (2009) for the spatial discretization.

We consider as a forcing term an infinite-dimensional stochastic process expanded in the eigenfunctions of the linear operator A present in the SPDE. We focus on the case where the Brownian motions are not independent, because the spatial covariance operator of the forcing does not commute with A.

In order to illustrate the main result of this article, we consider the stochastic Burgers' equation with Dirichlet boundary conditions on a bounded domain. To be more precise, let T > 0, let (Ω, F, P) be a probability space, and let the space-time continuous stochastic process X : [0, T] × Ω → C([0, 1], R) be the unique solution of the SPDE

$$ dX_t = \left[ \frac{\partial^2}{\partial x^2} X_t - X_t \cdot \frac{\partial}{\partial x} X_t \right] dt + dW_t, \qquad X_t(0) = X_t(1) = 0, \quad X_0 = 0, \tag{1.1} $$

for t ∈ [0, T] and x ∈ (0, 1). The noise is given by a cylindrical Wiener process W_t, t ∈ [0, T], defined later.

© The authors 2013. Published by Oxford University Press on behalf of the Institute of Mathematics and its Applications. All rights reserved.

IMA Journal of Numerical Analysis Advance Access published January 8, 2013

Downloaded from http://imajna.oxfordjournals.org/ at University of Aberdeen on March 9, 2013.


There are numerous publications on coloured or correlated noise of the type considered here, which is white in time and coloured in space. For Burgers' equation, see, for example, Da Prato et al. (1994), Da Prato & Gatarek (1995), Goldys & Maslowski (2005) and Blömker & Duan (2007). Here we refrain from the usual assumption that the covariance of W and the Laplacian are jointly diagonal.

The existence and uniqueness of solutions of the stochastic Burgers' equation was studied by Da Prato & Gatarek (1995) for coloured noise. Da Prato & Zabczyk (1992, 1996) studied (1.1) for space-time white noise, and Gyöngy & Nualart (1999) studied the equation on the whole real line.

Alabert & Gyöngy (2006) obtained a spatial discretization of this equation in the L² topology. Recently, Blömker & Jentzen (2009) obtained a bound on the spatial discretization error in the uniform topology for the spectral Galerkin method in the case of space-time white noise. The spectral Galerkin method has been extensively studied for SPDEs with space-time white noise; see, for example, Jentzen (2009), Kloeden & Shott (2001), Liu (2003), Lord & Rougemont (2004) and Lord & Shardlow (2007).

Hausenblas (2002, 2003) investigated the discretization error of semilinear stochastic evolution equations in L^p spaces and Banach spaces, and of quasi-linear evolution equations driven by nuclear or space-time white noise. Shardlow (1999) and Gyöngy (1999) applied finite differences in order to approximate the mild solution of parabolic SPDEs driven by space-time white noise. Yoo (1999) investigated the mild solution of parabolic SPDEs by finite differences.

Our aim here is to extend the result of Blömker & Jentzen (2009). First, we discuss the case of coloured noise that is not diagonal with respect to the eigenfunctions of the Laplacian. Secondly, using the time discretization introduced in Jentzen et al. (2011), we obtain an error estimate for the full space-time discretization.

The remainder of this paper is organized as follows. Section 2 gives the setting and the assumptions. In Section 3 we investigate the spatial discretization error, and in Section 4 the temporal error is obtained. Finally, in the last section numerical examples are presented.

2. Setting and assumptions

Fix T > 0, let (Ω, F, P) be a probability space, and let (V, ‖·‖_V) and (W, ‖·‖_W) be R-Banach spaces. Moreover, let P_N : V → V, N ∈ N, be a sequence of bounded linear operators.

Throughout this article the following assumptions will be used.

Assumption 2.1 Let S : (0, T] → L(W, V) be a continuous mapping satisfying

$$ \sup_{0 < t \le T} \big( t^{\alpha} \|S_t\|_{L(W,V)} \big) < \infty, \qquad \sup_{N \in \mathbb{N}} \, \sup_{0 \le t \le T} \big( t^{\alpha} N^{\gamma} \|S_t - P_N S_t\|_{L(W,V)} \big) < \infty, \tag{2.1} $$

where α ∈ [0, 1) and γ ∈ (0, ∞) are given constants.

Assumption 2.2 Let F : V → W be a locally Lipschitz continuous mapping which satisfies

$$ \sup_{\|v\|_V, \|w\|_V \le r} \frac{\|F(v) - F(w)\|_W}{\|v - w\|_V} < \infty \tag{2.2} $$

for every r > 0.


Assumption 2.3 Let O : [0, T] × Ω → V be a stochastic process with continuous sample paths and

$$ \sup_{N \in \mathbb{N}} \, \sup_{0 \le t \le T} N^{\gamma} \|O_t(\omega) - P_N(O_t(\omega))\|_V < \infty \tag{2.3} $$

for every ω ∈ Ω, where γ ∈ (0, ∞) is given in Assumption 2.1.

Assumption 2.4 Let X^N : [0, T] × Ω → V, N ∈ N, be a sequence of stochastic processes with continuous sample paths such that

$$ \sup_{M \in \mathbb{N}} \, \sup_{0 \le s \le T} \|X^M_s(\omega)\|_V < \infty \tag{2.4} $$

and

$$ X^N_t(\omega) = \int_0^t P_N S_{t-s} F(X^N_s(\omega)) \, ds + P_N(O_t(\omega)) \tag{2.5} $$

for every t ∈ [0, T], ω ∈ Ω and every N ∈ N.

Blömker & Jentzen (2009) obtained the following theorem.

Theorem 2.1 Let Assumptions 2.1–2.4 be fulfilled. Then there exists a unique stochastic process X : [0, T] × Ω → V with continuous sample paths which fulfils

$$ X_t(\omega) = \int_0^t S_{t-s} F(X_s(\omega)) \, ds + O_t(\omega) \tag{2.6} $$

for every t ∈ [0, T] and every ω ∈ Ω. Moreover, there exists an F/B([0, ∞))-measurable mapping C : Ω → [0, ∞) such that

$$ \sup_{0 \le t \le T} \|X_t(\omega) - X^N_t(\omega)\|_V \le C(\omega) \cdot N^{-\gamma} \tag{2.7} $$

holds for every N ∈ N and every ω ∈ Ω, where γ ∈ (0, ∞) is given in Assumption 2.1.

3. Spatial discretization for the case of coloured noise

Now we will show that Assumptions 2.1–2.4 are satisﬁed for Burgers’ equation in the case of coloured

noise. Therefore, from Theorem 2.1 we can conclude the convergence of the Galerkin method for this

equation. Most of the results are already proved in Blömker & Jentzen (2009). We only state the results

needed later in the proofs, and the modiﬁcations necessary due to the presence of coloured noise.

In the remainder of the paper define V = C⁰([0, 1]) and W = H⁻¹(0, 1). The mapping ∂ : V → W, given by

$$ (\partial v)(\varphi) := -\langle v, \varphi' \rangle_{L^2} = -\int_0^1 v(x) \varphi'(x) \, dx $$

for every v ∈ V and every φ ∈ H¹(0, 1), is a bounded linear mapping from V to W.


From Blömker & Jentzen (2009, Lemmas 4.6 and 4.8), we have the following lemmas.

Lemma 3.1 The mapping S : (0, T] → L(H⁻¹(0, 1), C⁰([0, 1])), given by

$$ (S_t(w))(x) = \sum_{n=1}^{\infty} 2 \, e^{-n^2 \pi^2 t} \, w(\sin(n\pi(\cdot))) \, \sin(n\pi x) $$

for every x ∈ [0, 1], w ∈ H⁻¹(0, 1) and every t ∈ (0, T], is well defined and satisfies Assumption 2.1.

From Assumption 2.1 we derive

$$ \sup_{0 < t \le T} \big( t^{\alpha} \|S_t \partial\|_{L(V,V)} \big) < \infty, \tag{3.1} $$

where α was introduced in Assumption 2.1.

Remark 3.1 As we can see from Blömker & Jentzen (2009, Lemma 4.6), Assumption 2.1 is satisfied for α = 3/4 and γ ∈ [0, 1/2).
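Purely as an illustration of how S_t and its Galerkin truncation P_N S_t act, here is a minimal numerical sketch (ours, not from the paper) in terms of sine coefficients:

```python
import numpy as np

def heat_semigroup_sine(coeffs, t):
    # If w = sum_n c_n sin(n pi x), then S_t w has coefficients e^{-n^2 pi^2 t} c_n.
    n = np.arange(1, len(coeffs) + 1)
    return np.exp(-(n * np.pi) ** 2 * t) * coeffs

def galerkin_truncate(coeffs, N):
    # P_N keeps only the first N sine modes.
    out = np.zeros_like(coeffs)
    out[:N] = coeffs[:N]
    return out

c = np.zeros(8)
c[0] = 1.0  # the mode sin(pi x)
decayed = heat_semigroup_sine(c, 0.1)
assert abs(decayed[0] - np.exp(-np.pi**2 * 0.1)) < 1e-12
```

Since S_t is diagonal in the sine basis, both the semigroup and the projection P_N reduce to componentwise operations on the coefficient vector.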

Lemma 3.2 The mapping F : C⁰([0, 1]) → H⁻¹(0, 1), given for every v ∈ C⁰([0, 1]) by F(v) = ∂(v²), satisfies Assumption 2.2.

In the following, we present details of the Q-Wiener process W corresponding to the coloured noise in order to verify Assumption 2.3 later. We work in a d-dimensional setting, while the result needed later is for d = 1. Let β^i : [0, T] × Ω → R, i ∈ N^d, be a family of Brownian motions that are not necessarily independent. They are correlated as given by

$$ \mathbb{E}(\beta^k(t) \beta^l(t)) = \langle Q e_k, e_l \rangle \cdot t, \qquad k = (k_1, \dots, k_d) \in \mathbb{N}^d, \; l = (l_1, \dots, l_d) \in \mathbb{N}^d, \; t > 0, $$

where, for every k ∈ N^d,

$$ e_k : [0, 1]^d \to \mathbb{R}, \qquad e_k(x) = 2^{d/2} \sin(k_1 \pi x_1) \cdots \sin(k_d \pi x_d), \quad x \in [0, 1]^d, $$

are smooth functions. Furthermore, Q is a symmetric non-negative operator such that

$$ \langle Q e_k, e_l \rangle = \int_0^1 \int_0^1 e_k(x) e_l(y) q(x - y) \, dy \, dx \tag{3.2} $$

for k, l ∈ N^d and some positive-definite function q.

Moreover, for every k ∈ N^d define the real numbers λ_k = π²(k₁² + ⋯ + k_d²) ∈ R.
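Because the β^i are correlated through Q rather than independent, sampling their increments amounts to drawing jointly Gaussian vectors with covariance matrix (⟨Qe_i, e_j⟩)Δt. A minimal sketch (our illustration with a toy 2×2 covariance; an eigendecomposition is used instead of a Cholesky factorization so that semi-definite Q is tolerated):

```python
import numpy as np

def increment_factor(Q):
    """Return a factor L with L L^T = Q, robust for positive semi-definite Q."""
    vals, vecs = np.linalg.eigh(Q)
    return vecs @ np.diag(np.sqrt(np.clip(vals, 0.0, None)))

rng = np.random.default_rng(0)
Q = np.array([[1.0, 0.5],
              [0.5, 1.0]])   # toy covariance matrix <Q e_i, e_j>
dt = 0.01
L = increment_factor(Q)
# many independent increment vectors (dbeta^1, dbeta^2) over one step of size dt
Z = rng.standard_normal((200000, 2))
dbeta = Z @ L.T * np.sqrt(dt)
emp_cov = dbeta.T @ dbeta / len(dbeta)  # empirical covariance, close to dt * Q
```

The empirical covariance of the sampled increments recovers Δt·Q up to Monte Carlo error, which is the defining property of the correlated family above.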

Lemma 3.3 Assume that there exists a ρ > 0 such that, in dimension d ∈ {1, 2, 3},

$$ \sum_{i \in \mathbb{N}^d} \sum_{j \in \mathbb{N}^d} |i|_2^{\rho - 1} |j|_2^{\rho - 1} |\langle Q e_i, e_j \rangle| < \infty. $$


Then there exists a stochastic process O : [0, T] × Ω → V which satisfies

$$ \sup_{0 \le t_1 < t_2 \le T} \frac{\|O_{t_2}(\omega) - O_{t_1}(\omega)\|_V}{(t_2 - t_1)^{\theta}} < \infty, \qquad \sup_{N \in \mathbb{N}} \, \sup_{0 \le t \le T} N^{\gamma} \|O_t(\omega) - P_N(O_t(\omega))\|_V < \infty \tag{3.3} $$

for every ω ∈ Ω, every θ ∈ (0, min{1/2, ρ/2}) and every γ ∈ (0, ρ). Furthermore, O satisfies

$$ \mathbb{P}\left[ \lim_{N \to \infty} \sup_{0 \le t \le T} \Big\| O_t - \sum_{i \in \{1, \dots, N\}^d} \Big( -\lambda_i \int_0^t e^{-\lambda_i (t-s)} \beta^i_s \, ds + \beta^i_t \Big) e_i \Big\|_V = 0 \right] = 1 $$

and

$$ \sup_{N \in \mathbb{N}} \left[ \Big( \mathbb{E} \sup_{0 \le t \le T} \|O_t - P_N O_t\|_V^p \Big)^{1/p} N^{\gamma} \right] + \sup_{0 \le t_1 < t_2 \le T} \frac{\big( \mathbb{E}[\|O_{t_2} - O_{t_1}\|_V^p] \big)^{1/p}}{(t_2 - t_1)^{\theta}} < \infty $$

for every p ∈ [1, ∞) and every γ ∈ (0, ρ).

We first need some technical lemmas in order to prove this lemma.

Lemma 3.4 For every t₁, t₂ ∈ [0, T] with t₁ ≤ t₂, and every r ∈ (0, 1), we have

$$ \mathbb{E}\left[ \left( \int_0^{t_2} e^{-\lambda_i (t_2 - s)} d\beta^i_s - \int_0^{t_1} e^{-\lambda_i (t_1 - s)} d\beta^i_s \right) \left( \int_0^{t_2} e^{-\lambda_j (t_2 - s)} d\beta^j_s - \int_0^{t_1} e^{-\lambda_j (t_1 - s)} d\beta^j_s \right) \right] \le 2 (\lambda_i + \lambda_j)^{r-1} (t_2 - t_1)^r \langle Q e_i, e_j \rangle \tag{3.4} $$

for all i, j ∈ N^d.

Proof. Fix t₁, t₂ ∈ [0, T] with t₁ ≤ t₂, and i, j ∈ N^d. Define Δt = t₂ − t₁ and Λ_ij = λ_i + λ_j. We obtain

$$ \mathbb{E}\left[ \left( \int_0^{t_2} e^{-\lambda_i(t_2-s)} d\beta^i_s - \int_0^{t_1} e^{-\lambda_i(t_1-s)} d\beta^i_s \right) \cdot \left( \int_0^{t_2} e^{-\lambda_j(t_2-s)} d\beta^j_s - \int_0^{t_1} e^{-\lambda_j(t_1-s)} d\beta^j_s \right) \right] $$

$$ = \mathbb{E}\left[ \left( \int_{t_1}^{t_2} e^{-\lambda_i(t_2-s)} d\beta^i_s + (e^{-\lambda_i \Delta t} - 1) \int_0^{t_1} e^{-\lambda_i(t_1-s)} d\beta^i_s \right) \times \left( \int_{t_1}^{t_2} e^{-\lambda_j(t_2-s)} d\beta^j_s + (e^{-\lambda_j \Delta t} - 1) \int_0^{t_1} e^{-\lambda_j(t_1-s)} d\beta^j_s \right) \right] $$

$$ = \int_0^{\Delta t} e^{-\Lambda_{ij} s} \langle Q e_i, e_j \rangle \, ds + (e^{-\Lambda_{ij} \Delta t} - e^{-\lambda_i \Delta t} - e^{-\lambda_j \Delta t} + 1) \cdot \langle Q e_i, e_j \rangle \cdot \frac{1 - e^{-\Lambda_{ij} t_1}}{\Lambda_{ij}} $$

$$ = \big( 1 - e^{-\Lambda_{ij} \Delta t} + (e^{-\Lambda_{ij} \Delta t} - e^{-\lambda_i \Delta t} - e^{-\lambda_j \Delta t} + 1)(1 - e^{-\Lambda_{ij} t_1}) \big) \cdot \frac{\langle Q e_i, e_j \rangle}{\Lambda_{ij}} $$

$$ \le \big( 1 - e^{-\Lambda_{ij} \Delta t} + (1 - e^{-\Lambda_{ij} \Delta t})(1 - e^{-\Lambda_{ij} t_1}) \big) \cdot \frac{\langle Q e_i, e_j \rangle}{\Lambda_{ij}} \le 2 \cdot \frac{1 - e^{-\Lambda_{ij} \Delta t}}{\Lambda_{ij}} \cdot \langle Q e_i, e_j \rangle. $$

Therefore, for every r ∈ (0, 1), using sup_{x>0} (1 − e^{−x}) x^{−r} ≤ 1, the left-hand side of (3.4) is bounded by

$$ 2 \cdot \sup_{x > 0} \big( (1 - e^{-x}) x^{-r} \big) \cdot \Lambda_{ij}^{r-1} (\Delta t)^r \cdot |\langle Q e_i, e_j \rangle| \le 2 \, \Lambda_{ij}^{r-1} (\Delta t)^r \, |\langle Q e_i, e_j \rangle|. \qquad \square $$

Lemma 3.5 For every t₁, t₂ ∈ [0, T] with t₁ ≤ t₂, N ∈ N, p ∈ [1, ∞) and every α, θ ∈ (0, 1/2], we have

$$ \left( \mathbb{E} \sup_{x \in [0,1]^d} |O^N_{t_2}(x) - O^N_{t_1}(x)|^p \right)^{1/p} \le C \left( \sum_{i,j \in I_N} |i|_2^{2\theta + 2\alpha - 1} |j|_2^{2\theta + 2\alpha - 1} |\langle Q e_i, e_j \rangle| \right)^{1/2} (t_2 - t_1)^{\theta}, $$

where C = C(d, p, α, θ) is a constant depending only on d, p, α and θ. The stochastic process O^N : [0, T] × Ω → C([0, 1]^d) is given by

$$ O^N_t = \sum_{i \in I_N} \int_0^t e^{-\lambda_i (t-s)} \, d\beta^i_s \cdot e_i \tag{3.5} $$

for every t ∈ [0, T] and every N ∈ N, where I_N = {1, …, N}^d.

Proof. Consider first

$$ (O^N_{t_2}(x) - O^N_{t_1}(x)) - (O^N_{t_2}(y) - O^N_{t_1}(y)) = \sum_{i \in I_N} \left( \int_0^{t_2} e^{-\lambda_i(t_2-s)} d\beta^i_s - \int_0^{t_1} e^{-\lambda_i(t_1-s)} d\beta^i_s \right) (e_i(x) - e_i(y)), $$

P-almost surely for every x, y ∈ [0, 1]^d. Hence, expanding the square of the series as a double sum and using Lemma 3.4, we obtain (again with Δt = t₂ − t₁ and Λ_ij = λ_i + λ_j)

$$ \mathbb{E} |(O^N_{t_2}(x) - O^N_{t_1}(x)) - (O^N_{t_2}(y) - O^N_{t_1}(y))|^2 \le \sum_{i,j \in I_N} \Lambda_{ij}^{2\theta - 1} (\Delta t)^{2\theta} |\langle Q e_i, e_j \rangle| \cdot |(e_i(x) - e_i(y))(e_j(x) - e_j(y))| $$

$$ \le C \sum_{i,j \in I_N} \Lambda_{ij}^{2\theta - 1} (\Delta t)^{2\theta} |\langle Q e_i, e_j \rangle| \cdot (|i|_2^2 \|x - y\|_2^2)^{\alpha} (|j|_2^2 \|x - y\|_2^2)^{\alpha} \le C (\Delta t)^{2\theta} \|x - y\|_2^{4\alpha} \sum_{i,j \in I_N} (|i|_2^2 + |j|_2^2)^{2\theta - 1} |i|_2^{2\alpha} |j|_2^{2\alpha} |\langle Q e_i, e_j \rangle|, $$

where we used that e_k is bounded and Lipschitz. Therefore,

$$ \mathbb{E} |(O^N_{t_2}(x) - O^N_{t_1}(x)) - (O^N_{t_2}(y) - O^N_{t_1}(y))|^2 \le C (\Delta t)^{2\theta} \|x - y\|_2^{4\alpha} \sum_{i,j \in I_N} |i|_2^{2\theta + 2\alpha - 1} |j|_2^{2\theta + 2\alpha - 1} |\langle Q e_i, e_j \rangle|. \tag{3.6} $$

Again from Lemma 3.4 we derive in a similar way, for every x ∈ [0, 1]^d,

$$ \mathbb{E}[|O^N_{t_2}(x) - O^N_{t_1}(x)|^2] \le C \sum_{i,j \in I_N} \Lambda_{ij}^{2\theta - 1} (\Delta t)^{2\theta} |\langle Q e_i, e_j \rangle| \le C \sum_{i,j \in I_N} (|i|_2^2 + |j|_2^2)^{2\theta - 1} (\Delta t)^{2\theta} |\langle Q e_i, e_j \rangle|. \tag{3.7} $$

The Sobolev embedding of the fractional space W^{α,p} into C⁰([0, 1]^d), given in Runst & Sickel (1996, Theorem 2.1, Section 2.2.4), yields

$$ \mathbb{E}[\|O^N_{t_2} - O^N_{t_1}\|^p_{C^0([0,1]^d)}] \le C \int_{(0,1)^d} \int_{(0,1)^d} \frac{\big( \mathbb{E}[|(O^N_{t_2}(x) - O^N_{t_1}(x)) - (O^N_{t_2}(y) - O^N_{t_1}(y))|^2] \big)^{p/2}}{\|x - y\|_2^{d + p\alpha}} \, dx \, dy + C \int_{(0,1)^d} \big( \mathbb{E}[|O^N_{t_2}(x) - O^N_{t_1}(x)|^2] \big)^{p/2} \, dx, $$

where we have used Gaussianity for the pth moment. In the following, for shorthand notation, all spatial integrals are over (0, 1)^d.

Therefore, by (3.6) and (3.7),

$$ \mathbb{E}[\|O^N_{t_2} - O^N_{t_1}\|^p_{C^0([0,1]^d)}] \le C \int\!\!\int \frac{\big( (\Delta t)^{2\theta} \|x - y\|_2^{4\alpha} \big)^{p/2}}{\|x - y\|_2^{d + p\alpha}} \, dx \, dy \left( \sum_{i,j \in I_N} (|i|_2 |j|_2)^{2\theta + 2\alpha - 1} |\langle Q e_i, e_j \rangle| \right)^{p/2} + C (\Delta t)^{p\theta} \left( \sum_{i,j \in I_N} |i|_2^{2\theta - 1} |j|_2^{2\theta - 1} |\langle Q e_i, e_j \rangle| \right)^{p/2} $$

$$ \le C \left( 1 + \int\!\!\int \|x - y\|_2^{\alpha p - d} \, dx \, dy \right) (\Delta t)^{p\theta} \left( \sum_{i,j \in I_N} (|i|_2 |j|_2)^{2\theta + 2\alpha - 1} |\langle Q e_i, e_j \rangle| \right)^{p/2}. $$

By the fact that, for arbitrary d ∈ N,

$$ \int\!\!\int \|x - y\|_2^{-\alpha} \, dx \, dy \le \frac{(3d)^d}{d - \alpha} $$

for every α ∈ (0, d), we derive

$$ \big( \mathbb{E} \|O^N_{t_2} - O^N_{t_1}\|^p_{C^0([0,1]^d)} \big)^{1/p} \le C \left( \sum_{i,j \in I_N} (|i|_2 |j|_2)^{2\theta + 2\alpha - 1} |\langle Q e_i, e_j \rangle| \right)^{1/2} (\Delta t)^{\theta}. \qquad \square $$

Lemma 3.6 For every N, M ∈ N with N ≥ M, p ∈ [1, ∞) and every α ∈ (0, 1/2], we have

$$ \left( \mathbb{E} \sup_{0 \le t \le T} \|O^N_t - O^M_t\|^p_{C^0([0,1]^d)} \right)^{1/p} \le C \left( \sum_{i,j \in I_N \setminus I_M} |i|_2^{4\alpha - 1} |j|_2^{4\alpha - 1} |\langle Q e_i, e_j \rangle| \right)^{1/2}, $$

where I_N = {1, …, N}^d, I_M = {1, …, M}^d and C = C(d, p, α, θ) is a constant depending only on d, p, α and θ.

Proof. Throughout this proof we assume α ∈ (0, 1/2) and p > 1/α. Moreover, N > M is fixed. Define, for every t ∈ [0, T],

$$ Y^{N,M}_t = \sum_{i \in I_N \setminus I_M} \int_0^t (t - s)^{-\alpha} e^{-\lambda_i (t-s)} \, d\beta^i_s \, e_i. $$

The celebrated factorization method of Da Prato & Zabczyk (1992) yields

$$ \mathbb{E} \sup_{0 \le t \le T} \|O^N_t - O^M_t\|^p_{C^0([0,1]^d)} = \mathbb{E} \sup_{0 \le t \le T} \left\| \frac{\sin(\pi \alpha)}{\pi} \int_0^t (t - s)^{\alpha - 1} S_{t-s} Y^{N,M}_s \, ds \right\|^p_{C^0([0,1]^d)} \le \mathbb{E} \sup_{0 \le t \le T} \left\| \int_0^t (t - s)^{\alpha - 1} S_{t-s} Y^{N,M}_s \, ds \right\|^p_{C^0([0,1]^d)}. $$


Therefore, using the Hölder inequality and the boundedness of ‖S_t‖_{L(C⁰([0,1]^d))} yields

$$ \mathbb{E} \sup_{0 \le t \le T} \|O^N_t - O^M_t\|^p_{C^0([0,1]^d)} \le \sup_{0 \le t \le T} \left( \int_0^t (t - s)^{p(\alpha - 1)/(p - 1)} \, ds \right)^{p-1} \cdot \mathbb{E} \int_0^T \|Y^{N,M}_s\|^p_{C^0([0,1]^d)} \, ds \le C \int_0^T \mathbb{E} \|Y^{N,M}_s\|^p_{C^0([0,1]^d)} \, ds. $$

Hence

$$ \left( \mathbb{E} \sup_{0 \le t \le T} \|O^N_t - O^M_t\|^p_{C^0([0,1]^d)} \right)^{1/p} \le C \sup_{0 \le t \le T} \big( \mathbb{E} \|Y^{N,M}_t\|^p_{C^0([0,1]^d)} \big)^{1/p}. \tag{3.8} $$

Again, using the embedding of W^{α,p} into C⁰,

$$ \mathbb{E} \|Y^{N,M}_t\|^p_{C^0([0,1]^d)} \le C \int_{(0,1)^d} \int_{(0,1)^d} \frac{\big( \mathbb{E} |Y^{N,M}_t(x) - Y^{N,M}_t(y)|^2 \big)^{p/2}}{\|x - y\|_2^{d + p\alpha}} \, dx \, dy + C \int_{(0,1)^d} \big( \mathbb{E} |Y^{N,M}_t(x)|^2 \big)^{p/2} \, dx. \tag{3.9} $$

For the first term on the right-hand side of (3.9) we proceed completely analogously to Lemma 3.5 in order to obtain

$$ \mathbb{E} |Y^{N,M}_t(x) - Y^{N,M}_t(y)|^2 \le C \sum_{i,j \in I_N \setminus I_M} \int_0^{\infty} s^{-2\alpha} e^{-s} \, ds \cdot (\lambda_i + \lambda_j)^{2\alpha - 1} \cdot |\langle Q e_i, e_j \rangle| \cdot |i|_2^{2\alpha} |j|_2^{2\alpha} \|x - y\|_2^{4\alpha}. $$

Therefore,

$$ \mathbb{E} |Y^{N,M}_t(x) - Y^{N,M}_t(y)|^2 \le C \sum_{i,j \in I_N \setminus I_M} \frac{|\langle Q e_i, e_j \rangle|}{|i|_2^{1 - 4\alpha} |j|_2^{1 - 4\alpha}} \, \|x - y\|_2^{4\alpha}. \tag{3.10} $$

For the second term on the right-hand side of (3.9) we establish

$$ \mathbb{E} |Y^{N,M}_t(x)|^2 \le \sum_{i,j \in I_N \setminus I_M} \int_0^t (t - s)^{-2\alpha} e^{-(\lambda_i + \lambda_j)(t - s)} \, ds \, |\langle Q e_i, e_j \rangle| \, |e_i(x)| \, |e_j(x)| \le C \sum_{i,j \in I_N \setminus I_M} |i|_2^{2\alpha - 1} |j|_2^{2\alpha - 1} |\langle Q e_i, e_j \rangle|. \tag{3.11} $$

Hence, using (3.10) and (3.11), we obtain from (3.9)

$$ \sup_{0 \le t \le T} \big( \mathbb{E} \|Y^{N,M}_t\|^p_{C^0([0,1]^d)} \big)^{1/p} \le C \left( \sum_{i,j \in I_N \setminus I_M} |i|_2^{4\alpha - 1} |j|_2^{4\alpha - 1} |\langle Q e_i, e_j \rangle| \right)^{1/2}. \tag{3.12} $$


Finally, (3.8) and (3.12) yield

$$ \left( \mathbb{E} \sup_{0 \le t \le T} \|O^N_t - O^M_t\|^p_{C^0([0,1]^d)} \right)^{1/p} \le C \left( \sum_{i,j \in I_N \setminus I_M} |i|_2^{4\alpha - 1} |j|_2^{4\alpha - 1} |\langle Q e_i, e_j \rangle| \right)^{1/2}. \qquad \square $$

Now we are ready to present the remaining parts of the proof of Lemma 3.3.

Proof of Lemma 3.3. From Lemma 3.6 we obtain

$$ \left( \mathbb{E} \sup_{0 \le t \le T} \|O^N_t - O^M_t\|^p_{C^0([0,1]^d)} \right)^{1/p} \le C \left( \sum_{i,j \in \mathbb{N}^d \setminus I_M} |i|_2^{4\alpha - 1} |j|_2^{4\alpha - 1} |\langle Q e_i, e_j \rangle| \right)^{1/2} \le C M^{4\alpha - \rho} \left( \sum_{i,j \in \mathbb{N}^d} |i|_2^{\rho - 1} |j|_2^{\rho - 1} |\langle Q e_i, e_j \rangle| \right)^{1/2} $$

for every N, M ∈ N with N ≥ M, p ∈ [1, ∞) and α ∈ (0, min{1/2, ρ/4}). The processes O^N therefore form a Cauchy sequence in

$$ V_p := L^p\big( (\Omega, \mathcal{F}, \mathbb{P}), \, C^0([0, T] \times [0, 1]^d) \big). $$

Hence, there exists a stochastic process Õ : [0, T] × Ω → C⁰([0, 1]^d) with Õ ∈ V_p and

$$ \left( \mathbb{E} \sup_{0 \le t \le T} \|\tilde{O}_t - O^N_t\|^p_{C^0([0,1]^d)} \right)^{1/p} \le C N^{4\alpha - \rho} \left( \sum_{i,j \in \mathbb{N}^d} |i|_2^{\rho - 1} |j|_2^{\rho - 1} |\langle Q e_i, e_j \rangle| \right)^{1/2} $$

for every N ∈ N, p ∈ [1, ∞) and α ∈ (0, min{1/2, ρ/4}).

Therefore

$$ \sup_{N \in \mathbb{N}} \left\{ N^{\gamma} \left( \mathbb{E} \sup_{0 \le t \le T} \|\tilde{O}_t - O^N_t\|^p_{C^0([0,1]^d)} \right)^{1/p} \right\} < \infty $$

for every γ ∈ (0, ρ) and every p ∈ [1, ∞). This yields (Jentzen et al., 2009, Lemma 1)

$$ \mathbb{P}\left[ \sup_{N \in \mathbb{N}} \, \sup_{0 \le t \le T} \big\{ N^{\gamma} \|\tilde{O}_t - O^N_t\|_{C^0([0,1]^d)} \big\} < \infty \right] = 1. $$

In particular,

$$ \mathbb{P}\left[ \lim_{N \to \infty} \sup_{0 \le t \le T} \|\tilde{O}_t - O^N_t\|_{C^0([0,1]^d)} = 0 \right] = 1 $$

and

$$ \mathbb{P}\left[ \sup_{N \in \mathbb{N}} \, \sup_{0 \le t \le T} \big\{ N^{\gamma} \|\tilde{O}_t - P_N \tilde{O}_t\|_{C^0([0,1]^d)} \big\} < \infty \right] = 1. $$


From Lemma 3.5 we derive

$$ \big( \mathbb{E} \|O^N_{t_2} - O^N_{t_1}\|^p_{C^0([0,1]^d)} \big)^{1/p} \le C \left( \sum_{i,j \in I_N} (|i|_2 |j|_2)^{2\theta + 2((\rho/2) - \theta) - 1} |\langle Q e_i, e_j \rangle| \right)^{1/2} |t_2 - t_1|^{\theta} \le C \left( \sum_{i,j \in I_N} |i|_2^{\rho - 1} |j|_2^{\rho - 1} |\langle Q e_i, e_j \rangle| \right)^{1/2} |t_2 - t_1|^{\theta} $$

for every t₁, t₂ ∈ [0, T], N ∈ N and θ ∈ (0, ρ/2). Provided θ ≤ 1/2, this furnishes

$$ \big( \mathbb{E} \|\tilde{O}_{t_2} - \tilde{O}_{t_1}\|^p_{C^0([0,1]^d)} \big)^{1/p} \le C \left( \sum_{i,j \in \mathbb{N}^d} |i|_2^{\rho - 1} |j|_2^{\rho - 1} |\langle Q e_i, e_j \rangle| \right)^{1/2} |t_2 - t_1|^{\theta}. $$

Hence, for every θ ∈ (0, min{1/2, ρ/2}),

$$ \mathbb{P}\left[ \sup_{0 \le t_1 < t_2 \le T} \frac{\|\tilde{O}_{t_2} - \tilde{O}_{t_1}\|_{C^0([0,1]^d)}}{|t_2 - t_1|^{\theta}} < \infty \right] = 1. $$

Therefore

$$ \mathbb{P}\left[ \forall \theta \in \Big( 0, \min\Big\{ \frac{1}{2}, \frac{\rho}{2} \Big\} \Big) : \sup_{0 \le t_1 < t_2 \le T} \frac{\|\tilde{O}_{t_2} - \tilde{O}_{t_1}\|_{C^0([0,1]^d)}}{|t_2 - t_1|^{\theta}} < \infty \right] = 1. $$

In conclusion, this shows the existence of a process O : [0, T] × Ω → C⁰([0, 1]^d) which satisfies

$$ \sup_{0 \le t_1 < t_2 \le T} \frac{\|O_{t_2}(\omega) - O_{t_1}(\omega)\|_{C^0([0,1]^d)}}{|t_2 - t_1|^{\theta}} < \infty $$

and

$$ \sup_{N \in \mathbb{N}} \, \sup_{0 \le t \le T} \big( N^{\gamma} \|O_t(\omega) - P_N O_t(\omega)\|_{C^0([0,1]^d)} \big) < \infty $$

for every ω ∈ Ω, θ ∈ (0, min{1/2, ρ/2}) and γ ∈ (0, ρ). Moreover, O is indistinguishable from Õ, i.e.,

$$ \mathbb{P}[\forall t \in [0, T] : O_t = \tilde{O}_t] = 1. \qquad \square $$

Summarizing our results, we can state the following lemma.

Lemma 3.7 Assume ρ > 0, d ∈ {1, 2, 3} and

$$ \sum_{i,j \in \mathbb{N}^d} |i|_2^{\rho - 1} |j|_2^{\rho - 1} |\langle Q e_i, e_j \rangle| < \infty. $$

Furthermore, suppose that ξ : Ω → V is F/B(V)-measurable with

$$ \sup_{N \in \mathbb{N}} \big( N^{\rho} \|\xi(\omega) - P_N(\xi(\omega))\|_V \big) < \infty $$

for every ω ∈ Ω. Then there exists a stochastic process O : [0, T] × Ω → V, with continuous sample paths, satisfying

$$ \mathbb{P}\left[ \lim_{N \to \infty} \sup_{0 < t < T} \Big\| O_t - S_t \xi - \sum_{i \in I_N} \Big( -\lambda_i \int_0^t e^{-\lambda_i(t-s)} \beta^i_s \, ds + \beta^i_t \Big) e_i \Big\|_V = 0 \right] = 1 $$

and

$$ \sup_{N \in \mathbb{N}} \, \sup_{0 \le t \le T} \big\{ N^{\gamma} \|O_t(\omega) - P_N(O_t(\omega))\|_V \big\} < \infty $$

for every ω ∈ Ω and γ ∈ (0, ρ). In particular, O satisfies Assumption 2.3 for every γ ∈ (0, ρ).

Note that the process O in the previous Lemma 3.7 is the solution of the following linear SPDE:

$$ dO_t = \Delta O_t \, dt + dW_t, \qquad O_t|_{\partial (0,1)^d} = 0, \quad O_0 = \xi, $$

for t ∈ [0, T], where W is a Q-Wiener process.
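For intuition, the truncated process O^N from (3.5) can be simulated mode by mode: each step damps the coefficients by e^{−λ_iΔt} and then adds a correlated Brownian increment. A sketch under toy assumptions (here a diagonal Q and d = 1; this is our illustration, not the authors' code):

```python
import numpy as np

def simulate_ou_modes(Q, lam, T, M, rng):
    """Approximate the sine-mode coefficients c_i(t) of the stochastic
    convolution O_t = sum_i ( int_0^t e^{-lam_i (t-s)} dbeta^i_s ) e_i
    by an exponential Euler recursion: damp by e^{-lam_i dt}, then add
    the (possibly correlated) Brownian increment of the step."""
    dt = T / M
    vals, vecs = np.linalg.eigh(Q)
    L = vecs @ np.diag(np.sqrt(np.clip(vals, 0.0, None)))  # L L^T = Q
    c = np.zeros(len(lam))
    path = [c.copy()]
    for _ in range(M):
        dbeta = L @ rng.standard_normal(len(lam)) * np.sqrt(dt)
        c = np.exp(-lam * dt) * c + dbeta
        path.append(c.copy())
    return np.array(path)

rng = np.random.default_rng(1)
lam = np.pi**2 * np.arange(1, 9)**2   # lambda_k = pi^2 k^2 in d = 1
Q = np.eye(8)                          # toy choice: uncorrelated modes
path = simulate_ou_modes(Q, lam, T=1.0, M=200, rng=rng)
```

Replacing the toy Q by the full covariance matrix (⟨Qe_i, e_j⟩) makes the increments correlated across modes, which is precisely the non-diagonal situation treated above.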

Lemma 3.8 Let V = C⁰([0, 1]) and W = H⁻¹((0, 1)), and let S : (0, T] → L(W, V) and F : V → W be given by Lemmas 3.1 and 3.2. Let O : [0, T] × Ω → V be a stochastic process with continuous sample paths with

$$ \sup_{N \in \mathbb{N}} \, \sup_{0 \le t \le T} \|P_N(O_t(\omega))\|_V < \infty $$

for every ω ∈ Ω. Then Assumption 2.4 is fulfilled.

Proof. The proof is exactly the same as the one for Blömker & Jentzen (2009, Lemma 4.9). □

4. Time discretization

For the time discretization of the finite-dimensional stochastic differential equations (SDEs) (2.5) we study the method introduced by Jentzen et al. (2011). We consider the discretization scheme for Burgers' equation, i.e., F(u) = ∂_x u², in one dimension. This is for simplicity of presentation only, as we need to bound various terms depending on X^N and F(X^N).

Throughout this section assume ρ > 0 such that

$$ \sum_{i,j \in \mathbb{N}^d} |i|_2^{\rho - 1} |j|_2^{\rho - 1} |\langle Q e_i, e_j \rangle| < \infty. \tag{4.1} $$

Moreover, assume θ ∈ (0, min{1/2, ρ/2}). For the time discretization we define the mappings Y^{N,M}_m : Ω → V for m ∈ {1, …, M} by

$$ Y^{N,M}_{m+1}(\omega) = S_{\Delta t}\big( Y^{N,M}_m(\omega) + \Delta t \, (P_N F)(Y^{N,M}_m(\omega)) \big) + P_N\big( O_{(m+1)\Delta t}(\omega) - S_{\Delta t} O_{m \Delta t}(\omega) \big). \tag{4.2} $$
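In the sine basis the scheme (4.2) is componentwise explicit, since S_Δt acts diagonally. A minimal sketch of one step on [0, 1] (our illustration only: with the sign convention of (1.1) we take F(u) = −u u_x = −(1/2)(u²)_x, evaluated weakly via ⟨F(u), e_k⟩ = (1/2)⟨u², e_k'⟩, and the quadrature grid is an assumption):

```python
import numpy as np

def make_grid(N, Ng=512):
    """Interior grid and sine basis e_k(x) = sqrt(2) sin(k pi x) on (0, 1)."""
    x = np.linspace(0.0, 1.0, Ng + 2)[1:-1]
    k = np.arange(1, N + 1)
    E = np.sqrt(2.0) * np.sin(np.outer(x, k * np.pi))                 # e_k(x_i)
    Ep = np.sqrt(2.0) * (k * np.pi) * np.cos(np.outer(x, k * np.pi))  # e_k'(x_i)
    dx = 1.0 / (Ng + 1)
    return x, E, Ep, dx

def F_coeffs(c, E, Ep, dx):
    """Sine coefficients of F(u) = -(1/2)(u^2)_x, computed weakly:
    <F(u), e_k> = (1/2) <u^2, e_k'> after integration by parts."""
    u = E @ c
    return 0.5 * (Ep.T @ u**2) * dx

def scheme_step(c, dO, lam, dt, E, Ep, dx):
    """One step of (4.2) in coefficient space: S_dt acts diagonally as
    e^{-lam dt}; dO holds the coefficients of O_{(m+1)dt} - S_dt O_{m dt}."""
    return np.exp(-lam * dt) * (c + dt * F_coeffs(c, E, Ep, dx)) + dO

x, E, Ep, dx = make_grid(N=8)
lam = np.pi**2 * np.arange(1, 9)**2
c0 = np.zeros(8); c0[0] = 1.0          # start from sqrt(2) sin(pi x)
c1 = scheme_step(c0, np.zeros(8), lam, dt=1e-3, E=E, Ep=Ep, dx=dx)
```

One full run then alternates this deterministic step with the precomputed noise increments dO, exactly as prescribed by (4.2).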

The purpose of this section is to bound the discretization error in time,

$$ \|X^N_{m \Delta t}(\omega) - Y^{N,M}_m(\omega)\|_V, $$

where

$$ X^N_{m \Delta t}(\omega) = \int_0^{m \Delta t} P_N S_{m \Delta t - s} F(X^N_s(\omega)) \, ds + O^N_{m \Delta t}(\omega) $$

is the solution of the spatial discretization, evaluated at the grid points.

Recall that, as we proved in the last section, Assumptions 2.1–2.4 are satisfied for the stochastic Burgers' equation in one dimension.

Lemma 4.1 Let X^N : [0, T] × Ω → V be the unique adapted stochastic process with continuous sample paths defined in Assumption 2.4. Assume that O^N : [0, T] × Ω → C⁰([0, 1]^d) is the stochastic process defined in (3.5). Then we obtain

$$ \|(X^N_{t_2}(\omega) - O^N_{t_2}(\omega)) - (X^N_{t_1}(\omega) - O^N_{t_1}(\omega))\|_V \le C(\omega) (t_2 - t_1)^{1/4} $$

for every ω ∈ Ω and all t₁, t₂ ∈ [0, T] with t₁ < t₂, where C : Ω → [0, ∞) is a finite random variable.

Proof. For every 0 ≤ t₁ ≤ t₂ ≤ T we have

$$ \|X^N_{t_2}(\omega) - O^N_{t_2}(\omega) - (X^N_{t_1}(\omega) - O^N_{t_1}(\omega))\|_V = \left\| \int_0^{t_2} P_N S_{t_2 - s} F(X^N_s(\omega)) \, ds - \int_0^{t_1} P_N S_{t_1 - s} F(X^N_s(\omega)) \, ds \right\|_V $$

$$ = \left\| \int_{t_1}^{t_2} P_N S_{t_2 - s} F(X^N_s(\omega)) \, ds + \int_0^{t_1} (S_{t_2 - s} - S_{t_1 - s}) P_N F(X^N_s(\omega)) \, ds \right\|_V $$

$$ \le \int_{t_1}^{t_2} \|P_N S_{t_2 - s} \partial\|_{L(V,V)} \|(X^N_s(\omega))^2\|_V \, ds + \left\| \int_0^{t_1} S_{t_1 - s} (S_{t_2 - t_1} - I) P_N F(X^N_s(\omega)) \, ds \right\|_V. $$

From (3.1), and using the fact that S_t is the semigroup generated by the Laplacian Δ, we conclude

$$ \|X^N_{t_2}(\omega) - O^N_{t_2}(\omega) - (X^N_{t_1}(\omega) - O^N_{t_1}(\omega))\|_V \le C_1(\omega) \int_{t_1}^{t_2} (t_2 - s)^{-3/4} \, ds + \int_0^{t_1} \|P_N S_{t_1 - s} \Delta^{1/4}\|_{L(W,V)} \|(S_{t_2 - t_1} - I) \Delta^{-1/4}\|_{L(W,W)} \|F(X^N_s(\omega))\|_W \, ds $$

$$ \le 4 C_1(\omega) (t_2 - t_1)^{1/4} + \int_0^{t_1} (t_1 - s)^{-1/4} \, ds \, (t_2 - t_1)^{1/4} \sup_{0 \le s \le T} \|F(X^N_s(\omega))\|_W \le 4 C_1(\omega) (t_2 - t_1)^{1/4} + C_2(\omega) (t_2 - t_1)^{1/4} T^{3/4} \le C(\omega) (t_2 - t_1)^{1/4}, $$

where C₁(ω) = sup_{M∈N} sup_{0≤s≤T} ‖X^M_s(ω)‖²_V and C₂(ω) = sup_{M∈N} sup_{0≤s≤T} ‖F(X^M_s(ω))‖_W are finite due to Assumptions 2.2 and 2.4, and therefore C is an almost-surely finite random variable C : Ω → [0, ∞). □


Before we begin with the first part of the error, we define

$$ R(\omega) := \sup_{N \in \mathbb{N}} \sup_{0 \le s \le T} \|F(X^N_s(\omega))\|_W + \sup_{N \in \mathbb{N}} \sup_{0 \le s \le T} \|X^N_s(\omega)\|_V + \sup_{0 \le t_1 < t_2 \le T} \|O_{t_2}(\omega) - O_{t_1}(\omega)\|_V \, |t_2 - t_1|^{-\theta} + \sup_{N \in \mathbb{N}} \sup_{0 \le t_1 < t_2 \le T} \|X^N_{t_2}(\omega) - O^N_{t_2}(\omega) - (X^N_{t_1}(\omega) - O^N_{t_1}(\omega))\|_V \, |t_2 - t_1|^{-1/4}, $$

where, by Assumption 2.4 and Lemmas 3.3 and 4.1, R : Ω → R is a finite random variable.

The main result of this section is stated below.

Theorem 4.1 For m ∈ {0, 1, …, M} and every M, N ∈ N, there exists a finite random variable C : Ω → [0, ∞) such that

$$ \|X^N_{m \Delta t}(\omega) - Y^{N,M}_m(\omega)\|_V \le C(\omega) (\Delta t)^{\min\{1/4, \theta\}}, $$

where X^N : [0, T] × Ω → V is the unique adapted stochastic process with continuous sample paths defined in Assumption 2.4, and the Y^{N,M}_m : Ω → V, m ∈ {0, 1, …, M}, N, M ∈ N, are given in (4.2).

Proof. It is sufficient to prove the result for sufficiently small Δt. Due to (2.5) we have

$$ X^N_{m \Delta t}(\omega) = \int_0^{m \Delta t} P_N S_{m \Delta t - s} F(X^N_s(\omega)) \, ds + O^N_{m \Delta t}(\omega) = \sum_{k=0}^{m-1} \int_{k \Delta t}^{(k+1) \Delta t} P_N S_{m \Delta t - s} F(X^N_s(\omega)) \, ds + O^N_{m \Delta t}(\omega) \tag{4.3} $$

for every m ∈ {0, 1, …, M} and every M ∈ N.

The mapping Y^N_m : Ω → V, m = 1, 2, …, M, is defined by

$$ Y^N_m(\omega) = \sum_{k=0}^{m-1} \int_{k \Delta t}^{(k+1) \Delta t} P_N S_{m \Delta t - k \Delta t} F(X^N_{k \Delta t}(\omega)) \, ds + O^N_{m \Delta t}(\omega). \tag{4.4} $$

Our aim is to bound ‖X^N_{mΔt}(ω) − Y^{N,M}_m(ω)‖_V. Therefore, we first estimate the difference of the true solution to Y^N_m,

$$ \|X^N_{m \Delta t}(\omega) - Y^N_m(\omega)\|_V, \tag{4.5} $$

for every m ∈ {0, 1, …, M}, and then the difference between Y^N_m and the full discretization in time,

$$ \|Y^N_m(\omega) - Y^{N,M}_m(\omega)\|_V. \tag{4.6} $$


For the first error (4.5) we have

$$ X^N_{m \Delta t}(\omega) - Y^N_m(\omega) = \sum_{k=0}^{m-2} \int_{k \Delta t}^{(k+1) \Delta t} P_N S_{m \Delta t - s} F(X^N_s(\omega)) \, ds - \sum_{k=0}^{m-2} \int_{k \Delta t}^{(k+1) \Delta t} P_N S_{m \Delta t - k \Delta t} F(X^N_{k \Delta t}(\omega)) \, ds $$

$$ + \int_{(m-1) \Delta t}^{m \Delta t} P_N S_{m \Delta t - s} F(X^N_s(\omega)) \, ds - \int_{(m-1) \Delta t}^{m \Delta t} P_N S_{\Delta t} F(X^N_{(m-1) \Delta t}(\omega)) \, ds. \tag{4.7} $$

Let us now bound the last two integrals in (4.7). For the first one we derive

$$ \left\| \int_{(m-1)\Delta t}^{m \Delta t} P_N S_{m \Delta t - s} F(X^N_s(\omega)) \, ds \right\|_V = \left\| \int_{(m-1)\Delta t}^{m \Delta t} P_N S_{m \Delta t - s} \partial (X^N_s(\omega))^2 \, ds \right\|_V \le \int_{(m-1)\Delta t}^{m \Delta t} \|P_N S_{m \Delta t - s} \partial\|_{L(V,V)} \cdot \|X^N_s(\omega)\|_V^2 \, ds $$

$$ \le \sup_{0 \le s \le T} \|X^N_s(\omega)\|_V^2 \int_{(m-1)\Delta t}^{m \Delta t} (m \Delta t - s)^{-3/4} \, ds \le R^2(\omega) (\Delta t)^{1/4}. $$

For the second one we obtain

$$ \left\| \int_{(m-1)\Delta t}^{m \Delta t} P_N S_{\Delta t} F(X^N_{(m-1)\Delta t}(\omega)) \, ds \right\|_V = \left\| \int_{(m-1)\Delta t}^{m \Delta t} P_N S_{\Delta t} \partial (X^N_{(m-1)\Delta t}(\omega))^2 \, ds \right\|_V \le \int_{(m-1)\Delta t}^{m \Delta t} \|P_N S_{\Delta t} \partial\|_{L(V,V)} \cdot \|X^N_{(m-1)\Delta t}(\omega)\|_V^2 \, ds $$

$$ \le \sup_{0 \le s \le T} \|X^N_s(\omega)\|_V^2 \int_{(m-1)\Delta t}^{m \Delta t} (\Delta t)^{-3/4} \, ds \le R^2(\omega) (\Delta t)^{1/4}. $$

Therefore, we can conclude

$$ \|X^N_{m \Delta t}(\omega) - Y^N_m(\omega)\|_V \le \left\| \sum_{k=0}^{m-2} \int_{k \Delta t}^{(k+1)\Delta t} P_N S_{m \Delta t - s} \big( F(X^N_s(\omega)) - F(X^N_{k \Delta t}(\omega)) \big) \, ds \right\|_V + \left\| \sum_{k=0}^{m-2} \int_{k \Delta t}^{(k+1)\Delta t} \big( P_N S_{m \Delta t - s} - P_N S_{m \Delta t - k \Delta t} \big) F(X^N_{k \Delta t}(\omega)) \, ds \right\|_V + R^2(\omega)(\Delta t)^{1/4}. $$


Thus, inserting the nonlinearity with the Ornstein–Uhlenbeck process in the first term yields, for every m ∈ {0, 1, …, M},

$$ \|X^N_{m \Delta t}(\omega) - Y^N_m(\omega)\|_V \le \left\| \sum_{k=0}^{m-2} \int_{k\Delta t}^{(k+1)\Delta t} P_N S_{m\Delta t - s} \big( F(X^N_s(\omega)) - F(X^N_{k\Delta t}(\omega) + O^N_s(\omega) - O^N_{k\Delta t}(\omega)) \big) \, ds \right\|_V $$

$$ + \left\| \sum_{k=0}^{m-2} \int_{k\Delta t}^{(k+1)\Delta t} P_N S_{m\Delta t - s} \big( F(X^N_{k\Delta t}(\omega) + O^N_s(\omega) - O^N_{k\Delta t}(\omega)) - F(X^N_{k\Delta t}(\omega)) \big) \, ds \right\|_V $$

$$ + \left\| \sum_{k=0}^{m-2} \int_{k\Delta t}^{(k+1)\Delta t} \big( P_N S_{m\Delta t - s} - P_N S_{m\Delta t - k\Delta t} \big) F(X^N_{k\Delta t}(\omega)) \, ds \right\|_V + R^2(\omega)(\Delta t)^{1/4}. \tag{4.8} $$

For the first term in (4.8), using Lemma 4.1 together with ‖P_N S_{t−s} ∂ u‖_V ≤ C (t − s)^{−3/4} ‖u‖_V, we conclude

$$ \left\| \sum_{k=0}^{m-2} \int_{k\Delta t}^{(k+1)\Delta t} P_N S_{m\Delta t - s} \big( F(X^N_s(\omega)) - F(X^N_{k\Delta t}(\omega) + O^N_s(\omega) - O^N_{k\Delta t}(\omega)) \big) \, ds \right\|_V $$

$$ \le \sum_{k=0}^{m-2} \int_{k\Delta t}^{(k+1)\Delta t} (m\Delta t - s)^{-3/4} \, \|X^N_s(\omega) - (X^N_{k\Delta t}(\omega) + O^N_s(\omega) - O^N_{k\Delta t}(\omega))\|_V \cdot \|X^N_s(\omega) + (X^N_{k\Delta t}(\omega) + O^N_s(\omega) - O^N_{k\Delta t}(\omega))\|_V \, ds $$

$$ \le R(\omega) \sum_{k=0}^{m-2} \int_{k\Delta t}^{(k+1)\Delta t} (m\Delta t - s)^{-3/4} (s - k\Delta t)^{1/4} \big( 2R(\omega) + R(\omega)(s - k\Delta t)^{\theta} \big) \, ds \le 2 C(R(\omega), T) (\Delta t)^{1/4}, \tag{4.9} $$

where the constant depends on R and T.

For the second term in (4.8) we derive

$$ \left\| \sum_{k=0}^{m-2} \int_{k\Delta t}^{(k+1)\Delta t} P_N S_{m\Delta t - s} \big( F(X^N_{k\Delta t}(\omega) + O^N_s(\omega) - O^N_{k\Delta t}(\omega)) - F(X^N_{k\Delta t}(\omega)) \big) \, ds \right\|_V $$

$$ \le 2 \sum_{k=0}^{m-2} \int_{k\Delta t}^{(k+1)\Delta t} \big\| P_N S_{m\Delta t - s} \partial \big( X^N_{k\Delta t}(\omega) \cdot (O^N_s(\omega) - O^N_{k\Delta t}(\omega)) \big) \big\|_V \, ds + \sum_{k=0}^{m-2} \int_{k\Delta t}^{(k+1)\Delta t} \big\| P_N S_{m\Delta t - s} \partial \big( (O^N_s(\omega) - O^N_{k\Delta t}(\omega))^2 \big) \big\|_V \, ds $$

$$ \le 2 \sum_{k=0}^{m-2} \int_{k\Delta t}^{(k+1)\Delta t} \|P_N S_{m\Delta t - s} \partial\|_{L(V,V)} \|X^N_{k\Delta t}(\omega)\|_V \|O^N_s(\omega) - O^N_{k\Delta t}(\omega)\|_V \, ds + \sum_{k=0}^{m-2} \int_{k\Delta t}^{(k+1)\Delta t} \|P_N S_{m\Delta t - s} \partial\|_{L(V,V)} \cdot \|O^N_s(\omega) - O^N_{k\Delta t}(\omega)\|_V^2 \, ds $$

$$ \le 2 R^2(\omega) \sum_{k=0}^{m-2} \int_{k\Delta t}^{(k+1)\Delta t} (m\Delta t - (k+1)\Delta t)^{-3/4} (s - k\Delta t)^{\theta} \, ds + R^2(\omega) \sum_{k=0}^{m-2} \int_{k\Delta t}^{(k+1)\Delta t} (m\Delta t - (k+1)\Delta t)^{-3/4} (s - k\Delta t)^{2\theta} \, ds \le C(R(\omega), \theta)(\Delta t)^{\theta}, $$

where the constant depends on R and θ.

Finally, for the third term in (4.8), again using the fact that S_t is the semigroup generated by the Laplacian, we have

$$ \left\| \sum_{k=0}^{m-2} \int_{k\Delta t}^{(k+1)\Delta t} \big( P_N S_{m\Delta t - s} - P_N S_{m\Delta t - k\Delta t} \big) F(X^N_{k\Delta t}(\omega)) \, ds \right\|_V \le \sum_{k=0}^{m-2} \int_{k\Delta t}^{(k+1)\Delta t} \big\| P_N S_{m\Delta t - s} (S_{s - k\Delta t} - I) F(X^N_{k\Delta t}(\omega)) \big\|_V \, ds $$

$$ \le \sum_{k=0}^{m-2} \int_{k\Delta t}^{(k+1)\Delta t} (m\Delta t - (k+1)\Delta t)^{-1} (s - k\Delta t) \, \|F(X^N_{k\Delta t}(\omega))\|_W \, ds \le C(R(\omega), T) \Delta t, \tag{4.10} $$

where we have used ‖P_N Δ S_t‖_{L(W,V)} ≤ C t^{−1}, together with ‖Δ^{−1}(S_t − I)‖_{L(W,W)} ≤ t. Hence, from (4.9) and (4.10) we derive

$$ \|X^N_{m\Delta t}(\omega) - Y^N_m(\omega)\|_V \le C(R(\omega), \theta, T)(\Delta t)^{\min\{1/4, \theta\}}. \tag{4.11} $$

Let us now turn to the second error term (4.6). Note that Y^{N,M}_m : Ω → V satisfies

$$ Y^{N,M}_m(\omega) = \sum_{k=0}^{m-1} \int_{k\Delta t}^{(k+1)\Delta t} P_N S_{m\Delta t - k\Delta t} F(Y^{N,M}_k(\omega)) \, ds + P_N O_{m\Delta t}(\omega). \tag{4.12} $$


Thus, using ‖P_N S_t ∂‖_{L(V,V)} ≤ C t^{−3/4}, we can estimate

$$ \|Y^N_m - Y^{N,M}_m\|_V = \left\| \sum_{k=0}^{m-1} \int_{k\Delta t}^{(k+1)\Delta t} P_N S_{m\Delta t - k\Delta t} \big( F(X^N_{k\Delta t}) - F(Y^{N,M}_k) \big) \, ds \right\|_V $$

$$ \le \sum_{k=0}^{m-1} \int_{k\Delta t}^{(k+1)\Delta t} (m\Delta t - k\Delta t)^{-3/4} \big\| (X^N_{k\Delta t} - Y^{N,M}_k)^2 + 2 X^N_{k\Delta t} (X^N_{k\Delta t} - Y^{N,M}_k) \big\|_V \, ds $$

$$ \le \sum_{k=0}^{m-1} \Delta t \, (m\Delta t - k\Delta t)^{-3/4} \big( \|X^N_{k\Delta t} - Y^{N,M}_k\|_V^2 + 2 R(\omega) \|X^N_{k\Delta t} - Y^{N,M}_k\|_V \big). \tag{4.13} $$

Combining (4.11) with (4.13), we have

$$ \|X^N_{m\Delta t}(\omega) - Y^{N,M}_m(\omega)\|_V \le C(R(\omega), \theta, T)(\Delta t)^{\min\{1/4, \theta\}} + \sum_{k=0}^{m-1} \|X^N_{k\Delta t}(\omega) - Y^{N,M}_k(\omega)\|_V^2 + 2 R(\omega) \sum_{k=0}^{m-1} \|X^N_{k\Delta t}(\omega) - Y^{N,M}_k(\omega)\|_V. \tag{4.14} $$

If we assume that, for some δ > 0 fixed later,

$$ \sup_{0 \le k \le M} \|X^N_{k\Delta t}(\omega) - Y^{N,M}_k(\omega)\|_V \le \delta, \tag{4.15} $$

then

$$ \|X^N_{m\Delta t}(\omega) - Y^{N,M}_m(\omega)\|_V \le C(R(\omega), \theta, T)(\Delta t)^{\min\{1/4, \theta\}} + (\delta + 2R(\omega)) \sum_{k=0}^{m-1} \|X^N_{k\Delta t}(\omega) - Y^{N,M}_k(\omega)\|_V. \tag{4.16} $$

Then, by the discrete Gronwall lemma, we can conclude

$$ \|X^N_{m\Delta t}(\omega) - Y^{N,M}_m(\omega)\|_V \le e^{(m-1)(\delta + 2R(\omega))} \, C(R(\omega), \theta, T)(\Delta t)^{\min\{1/4, \theta\}}. $$

In order to verify (4.15), we need

$$ e^{(m-1)(\delta + 2R(\omega))} \, C(R(\omega), \theta, T)(\Delta t)^{\min\{1/4, \theta\}} \le \delta, $$

which is true for any δ > 0, provided Δt is sufficiently small. This completes the proof of the time discretization. □


From Theorem 2.1, for the spatial discretization error we verified in Section 3,

$$ \|X_{m\Delta t}(\omega) - X^N_{m\Delta t}(\omega)\|_V \le C(\omega) \cdot N^{-\gamma}, \tag{4.17} $$

and from Theorem 4.1, for the temporal discretization error we just established,

$$ \|X^N_{m\Delta t}(\omega) - Y^{N,M}_m(\omega)\|_V \le C(R(\omega), \theta, T)(\Delta t)^{\min\{1/4, \theta\}}. $$

Therefore we have proved the following theorem for the stochastic Burgers' equation.

Theorem 4.2 Assume ρ > 0 such that

$$ \sum_{i,j \in \mathbb{N}} |i|^{\rho - 1} |j|^{\rho - 1} |\langle Q e_i, e_j \rangle| < \infty. $$

Let X : [0, T] × Ω → V be the solution of the SPDE (2.6) and let Y^{N,M}_m : Ω → V, m ∈ {0, 1, …, M}, M, N ∈ N, be the numerical solution given by (4.2). Fix θ ∈ (0, min{1/2, ρ/2}) and γ ∈ [0, 1/2).

Then there exists a finite random variable C : Ω → [0, ∞) such that

$$ \|X_{m\Delta t}(\omega) - Y^{N,M}_m(\omega)\|_V \le C(\omega) \big( N^{-\gamma} + (\Delta t)^{\min\{1/4, \theta\}} \big) \tag{4.18} $$

for all m ∈ {0, 1, …, M} and every M, N ∈ N.

5. Numerical results

In this section we consider the numerical solution of the stochastic Burgers' equation by the method given in (4.2).

Consider the stochastic evolution equation (2.6) with S : (0, T] → L(W, V) and F : V → W given by Lemmas 3.1 and 3.2, for T = 1 and d = 1, with the initial condition fixed to be (ξ(ω))(x) = (6/5) sin(x) for all x ∈ [0, π].

We assume that O : [0, T] × Ω → V is given by Lemma 3.3, where the Brownian motions β^i : [0, T] × Ω → R, i ∈ N, are related by

$$ \mathbb{E}(\beta^k \beta^l) = \langle Q e_k, e_l \rangle, \qquad k, l \in \mathbb{N}, \tag{5.1} $$

where the covariance operator Q is explicitly given as a convolution operator

$$ \langle Q e_k, e_l \rangle = \int_0^{\pi} \int_0^{\pi} e_k(x) e_l(y) q(x - y) \, dy \, dx, \tag{5.2} $$

with kernel

$$ q(x - y) = \max\left\{ 0, \, \frac{h - |x - y|}{h^2} \right\}, \tag{5.3} $$

where we define the orthonormal basis

$$ e_k(x) = \sqrt{\frac{2}{\pi}} \sin(k x) \quad \text{for } k \in \mathbb{N}. \tag{5.4} $$

The possibly small quantity h > 0 measures the correlation length of the noise. In this case the covariance matrix, i.e., (⟨Q e_k, e_l⟩)_{k,l}, is not diagonal, but for small h > 0 it is close to diagonal. In Fig. 1 the covariance matrix is plotted for k, l ∈ {1, 2, …, 100}, for h = 0.1 and h = 0.01. By some numerical calculations we can show that the condition on Q from (4.1) is satisfied for any ρ ∈ (0, 1/2).

Fig. 1. Covariance matrix (⟨Q e_k, e_l⟩)_{k,l} for k, l ∈ {1, 2, …, 100}, for (a) h = 0.1 and (b) h = 0.01.
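The entries (5.2) with the triangular kernel (5.3) can be assembled by simple quadrature; a minimal sketch (our code, not the authors'; the midpoint rule and grid size are assumptions):

```python
import numpy as np

def covariance_matrix(K, h, n_quad=400):
    """Assemble Q[k,l] = <Q e_k, e_l> = int int e_k(x) e_l(y) q(x-y) dy dx
    on [0, pi], with the triangular kernel q(r) = max(0, (h - |r|) / h^2),
    using a midpoint rule with n_quad points in each variable."""
    x = (np.arange(n_quad) + 0.5) * np.pi / n_quad
    w = np.pi / n_quad
    k = np.arange(1, K + 1)
    E = np.sqrt(2.0 / np.pi) * np.sin(np.outer(x, k))   # e_k(x_i)
    q = np.maximum(0.0, (h - np.abs(x[:, None] - x[None, :])) / h**2)
    return w**2 * (E.T @ q @ E)

Q = covariance_matrix(K=40, h=0.1)
assert np.allclose(Q, Q.T)   # Q is symmetric
# the kernel is positive-definite, so Q is PSD up to quadrature/rounding error
assert np.all(np.linalg.eigvalsh(Q) > -1e-8)
```

As h shrinks, q approaches a delta function and the diagonal entries approach 1 while the off-diagonal entries shrink, which matches the near-diagonal matrices shown in Fig. 1.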

The stochastic evolution equation (2.6) reduces to

$$ dX_t = \left[ \frac{\partial^2}{\partial x^2} X_t - X_t \cdot \frac{\partial}{\partial x} X_t \right] dt + dW_t, \qquad X_0(x) = \frac{6}{5}\sin(x), \tag{5.5} $$

with X_t(0) = X_t(π) = 0 for t ∈ [0, 1] and x ∈ [0, π].

The finite-dimensional SDE (2.5) reduces to

$$ dX^N_t = \left[ \frac{\partial^2}{\partial x^2} X^N_t - P_N\Big( X^N_t \cdot \frac{\partial}{\partial x} X^N_t \Big) \right] dt + P_N \, dW_t, \qquad X^N_0(x) = \frac{6}{5}\sin(x), \tag{5.6} $$

with X^N_t(0) = X^N_t(π) = 0 for t ∈ [0, 1], x ∈ [0, π], and all N ∈ N.

In Fig. 2 the process O : [0, T] × Ω → C⁰([0, π]), the solution of the linear SPDE

$$ dO_t = \Delta O_t \, dt + dW_t, \qquad O_t|_{\partial(0, \pi)} = 0, \quad O_0 = \frac{6}{5}\sin(x), $$

is plotted for T = 1.

Theorem 4.2 yields the existence of a unique solution X : [0, T] × Ω → C⁰([0, π]) of the SPDE (5.5) such that

$$ \sup_{0 \le x \le \pi} |X_{m\Delta t}(\omega, x) - Y^{N,M}_m(\omega, x)| \le C(\omega) \big( N^{-\gamma} + (\Delta t)^{\min\{1/4, \theta\}} \big) \tag{5.7} $$

for m = 1, …, M, M = 1/Δt, where γ ∈ (0, 1/2) and θ ∈ (0, 1/4).

Fig. 2. O_t(ω, x), x ∈ [0, π], t ∈ [0, 1] and one random ω ∈ Ω, for (a) h = 0.1 and (b) h = 0.01.

By using Δt = T/N², the solutions X^N_t(ω, x) of the finite-dimensional SDEs (5.6) converge uniformly in t ∈ [0, 1] and x ∈ [0, π] to the solution X_t(ω, x) of the stochastic Burgers' equation (5.5) with rate 1/2 as N goes to infinity, for all ω ∈ Ω. In Fig. 3 the pathwise approximation error

$$ \sup_{0 \le x \le \pi} \, \sup_{0 \le m \le M} |X_{m\Delta t}(\omega, x) - Y^{N,M}_m(\omega, x)| \tag{5.8} $$

is plotted against N for N ∈ {16, 32, …, 256}. As a replacement for the unknown solution, we use a numerical approximation with N sufficiently large.

Fig. 3. Pathwise approximation error (5.8) against N for N ∈ {16, 32, …, 256}, for two random ω ∈ Ω, with h = 0.1 (order lines 0.25, 0.5, 1). These are only two examples, but all other calculated trajectories behave similarly.

Figure 3 confirms that, as expected from Theorem 4.2, the order of convergence is 1/2. Obviously, these are only two examples, but all of the few hundred calculated trajectories behave similarly. Even their means seem to exhibit the same order of error. Nevertheless, we did not prove this here, and we also did not compute the mean to a sufficiently small standard deviation.
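The empirical order in such a plot is simply the slope of log(error) against log(N); a minimal helper, shown here with synthetic errors decaying exactly like N^{-1/2} in place of measured data, could look like:

```python
import numpy as np

def observed_orders(Ns, errors):
    """Pairwise observed convergence orders: slopes of log(error) vs. log(N)."""
    Ns = np.asarray(Ns, dtype=float)
    errors = np.asarray(errors, dtype=float)
    return -np.diff(np.log(errors)) / np.diff(np.log(Ns))

# synthetic errors with exact rate 1/2 (illustration only, not the paper's data)
Ns = np.array([16, 32, 64, 128, 256])
orders = observed_orders(Ns, Ns ** -0.5)   # each entry equals 0.5
```

Applied to the errors behind Fig. 3, each entry of `orders` would be the observed rate between successive values of N.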

Finally, as an example, in Fig. 4, X_t(ω, x), x ∈ [0, π], is plotted for t ∈ {0, 3/200, 0.2, 1}, for h = 0.01 and h = 0.1.

Acknowledgements

The authors from Tarbiat Modares University would like to thank the Department of Mathematics of the University of Augsburg for its support during the second author's visit and for providing the opportunity for joint research collaboration.


Fig. 4. The stochastic Burgers' equation X_t(ω, x), x ∈ [0, π], t ∈ {0, 3/200, 0.2, 1}, given by (5.5), for (a) h = 0.1 and (b) h = 0.01, for one random ω ∈ Ω.


Funding

Funded by the University of Augsburg and Tarbiat Modares University.

