CONTROLLABILITY AND QUALITATIVE PROPERTIES OF THE SOLUTIONS TO SPDES DRIVEN BY BOUNDARY LÉVY NOISE

ERIKA HAUSENBLAS, PAUL ANDRÉ RAZAFIMANDIMBY
Abstract. In the present paper we are interested in the qualitative properties of the Markovian semigroups $\mathcal{P}=(\mathcal{P}_t)_{t\ge 0}$ associated to the solutions of certain stochastic partial differential equations (SPDEs) with boundary noise. We assume that these problems can be written as an abstract stochastic PDE on a Hilbert space $H$ taking the following form:
$$du(t,x) = Au(t,x)\,dt + B\,\sigma(u(t,x))\,dL(t), \quad t>0; \qquad u(0,x)=x\in H. \tag{1}$$
Here $L$ is a real-valued Lévy process, $A\colon D(A)\subset H\to H$ is the infinitesimal generator of a strongly continuous semigroup, $\sigma\colon H\to\mathbb{R}$ is a Lipschitz continuous map bounded from below and above, and $B\colon\mathbb{R}\to H$ is a possibly unbounded operator. As typical examples of such stochastic evolution equations we mainly consider and treat the damped wave equation and the heat equation, both driven by boundary Lévy noise. Firstly, we show that if the deterministic system
$$du(t,x) = Au(t,x)\,dt + B\,v(t)\,dt, \quad t>0; \qquad u(0,x)=x\in H \tag{2}$$
is approximately controllable at time $T>0$ with control $v$, then, under some additional conditions on $B$, $A$ and $L$, the probability measure on $H$ induced by $u(t,x)$ at a given time $t>0$, $x\in H$, is positive on open subsets of $H$. Secondly, we investigate under which conditions on the Lévy process $L$ and on the operators $A$ and $B$ the solution to equation (1) is asymptotically strong Feller. It follows from our results that the wave equation with boundary Lévy noise has at most one invariant measure which is non-degenerate.
1. Introduction

To present the aim of this paper, let $H$ be a real separable Hilbert space and $u$ be the unique $H$-valued solution of the infinite dimensional system with Lévy noise, formally written as
$$du(t,x) = Au(t,x)\,dt + \int_{\mathbb{R}} B\,\sigma(u(t,x))\,z\,\tilde\eta(dz,dt), \quad t>0; \qquad u(0,x)=x\in H. \tag{3}$$
In this equation, $A\colon D(A)\subset H\to H$ is a linear operator generating a $C_0$-semigroup on $H$, and $B\colon\mathbb{R}\to H$ and $\sigma\colon H\to\mathbb{R}$ are mappings satisfying some conditions which we will specify later. The process $\tilde\eta\colon\mathcal{B}(\mathbb{R})\times\mathcal{B}(\mathbb{R}_+)\to\mathbb{N}_0\cup\{\infty\}$ is a compensated Poisson random measure over a probability space $\mathfrak{A}=(\Omega,\mathcal{F},(\mathcal{F}_t)_{t\ge0},\mathbb{P})$ with intensity measure $\nu$. Let $\mathcal{P}=(\mathcal{P}_t)_{t\ge0}$ be the Markovian semigroup induced on $H$ by the stochastic process $u$, i.e.
$$\mathcal{P}_t\varphi(x) := \mathbb{E}\,\varphi(u(t,x)), \quad x\in H,\ t>0,\ \varphi\in C_b(H). \tag{4}$$
Date: April 3, 2015.
1991 Mathematics Subject Classification. 60H07, 60H10, 60H15, 60J75.
Key words and phrases. SPDEs, Poisson random measures, support theorem, invariant measure, asymptotically strong Feller property.
A typical example of such an equation is a stochastic partial differential equation (SPDE) with boundary noise. The aim of this paper is to determine under which conditions on $A$, $B$, $\sigma$ and $\eta$ the Markovian semigroup defined in (4) is irreducible. We also discuss the asymptotic strong Feller property of this Markovian semigroup and its implications.

Regularity properties of the Markovian semigroup associated to a stochastic process play an important role in studying the long time behavior of the process, especially for the investigation of the uniqueness of the invariant measure. In fact, if the Markovian semigroup is strong Feller and irreducible, then it admits at most one invariant measure. To relax these conditions, Hairer and Mattingly introduced in [17] the so called asymptotic strong Feller property. In particular, they proved that for the uniqueness of the invariant measure it is sufficient to establish the existence of the invariant measure, some nondegeneracy property, and that the Markovian semigroup is asymptotically strong Feller.
Unlike the case of SPDEs driven by Wiener processes, there are very few works studying support theorems and the uniqueness of the invariant measure for SPDEs driven by a Lévy process. Some of the first results about the uniqueness of an invariant measure for Lévy driven SPDEs were established in the articles of Chojnowska-Michalik [8, 9]. Fournier [16] investigated SPDEs driven by space-time Lévy noise. Applebaum analyzed in [3] the analytic properties of the generalized Mehler semigroup induced by Lévy noise and in [2] the self-decomposability of a Lévy noise in Hilbert space. Further works are the two articles of Priola and Zabczyk [29, 30]. We also refer to [20], [31], [32] for some recent results and a review of progress in the study of the ergodicity of the Markovian semigroup associated to the solution of a Lévy driven SPDE. The proofs of the results in [30, 31, 32] rely on the cylindrical nature and $\alpha$-stability of the noise, hence their approach does not cover the case we are treating in this paper.
In the present work we mainly prove that if a certain notion of null controllability is verified for system (2), then the Markovian semigroup corresponding to equation (1) is irreducible and enjoys the asymptotic strong Feller property. We apply our result to stochastic evolution equations with Lévy boundary noise. For results related to SPDEs with white-noise boundary conditions we refer to [12], [22] and [7]. For stochastic evolution equations driven by Wiener noise a result similar to ours was established long ago. Indeed, the Markovian semigroup of an Ornstein-Uhlenbeck process is irreducible and strong Feller if (2) is null controllable. For this result we refer to the books of Da Prato and Zabczyk [12] and [14] and references therein. But their result tells us nothing about the properties of the Markovian semigroup when we consider an Ornstein-Uhlenbeck process driven by pure jump noise. Hence our work is an extension of some results in [12] and [14] in the sense that we consider multiplicative and pure jump noise.
The structure of the paper is as follows. In Section 2 we give most of the hypotheses used throughout the paper and prove an important relation between the irreducibility property and (approximate) controllability. In particular, we prove in Section 2 that any ball centered at $x\in H$ has positive measure with respect to the law of the solution to (1) if (2) is approximately controllable. Section 3 is devoted to the proof of the uniqueness of the invariant measure of the Markovian semigroup associated to the solution of (3). In fact, we establish that the Markovian semigroup satisfies the asymptotic strong Feller property if $A$ and $B$ generate a null controllable system. The asymptotic strong Feller property and the irreducibility of the aforementioned semigroup imply the uniqueness of the invariant measure. To illustrate our results, we apply them in Section 4 and Section 5 to a damped wave equation and a heat equation driven by boundary Lévy noise, respectively. The last part of the paper consists of some appendices collecting technical results about the change of measure. The proofs of our results are a combination of the change of measure formula given by Bismut, Gravereaux and Jacod [5] and Sato [33] (see also [18]) and the method used by Maslowski and Seidler [23].
Notation 1. Let $\mathbb{R}_+:=(0,\infty)$, $\mathbb{R}^+_0:=[0,\infty)$, $\mathbb{R}_-:=(-\infty,0)$, $\mathbb{N}_0:=\mathbb{N}\cup\{0\}$ and $\bar{\mathbb{N}}:=\mathbb{N}_0\cup\{\infty\}$. Let $(Z,\mathcal{Z})$ be a measurable space. By $M_+(Z)$ we denote the family of all positive measures on $(Z,\mathcal{Z})$, and by $\mathcal{M}_+(Z)$ the $\sigma$-field on $M_+(Z)$ generated by the functions $i_B\colon M_+(Z)\ni\mu\mapsto\mu(B)\in\mathbb{R}^+_0$, $B\in\mathcal{Z}$. By $M_I(Z)$ we denote the family of all $\sigma$-finite integer valued measures on $(Z,\mathcal{Z})$, and by $\mathcal{M}_I(Z)$ the $\sigma$-field on $M_I(Z)$ generated by the functions $i_B\colon M_I(Z)\ni\mu\mapsto\mu(B)\in\bar{\mathbb{N}}$, $B\in\mathcal{Z}$. By $M^+_\sigma(Z)$ we denote the set of all $\sigma$-finite and positive measures on $(Z,\mathcal{Z})$, and by $\mathcal{M}^+_\sigma(Z)$ the $\sigma$-field on $M^+_\sigma(Z)$ generated by the functions $i_B\colon M^+_\sigma(Z)\ni\mu\mapsto\mu(B)\in\mathbb{R}$, $B\in\mathcal{Z}$. We denote by $B(Z)$ the set of all Borel measurable, real-valued, bounded functions on $Z$.

For a Hilbert space $H$ we denote by $C_b(H)$ the space of all uniformly continuous and bounded mappings $\varphi\colon H\to\mathbb{R}$ endowed with the norm $|\varphi|_\infty:=\sup_{x\in H}|\varphi(x)|$.

The space of bounded linear maps from a Banach space $X$ into another Banach space $Y$ is denoted by $L(X,Y)$.

Throughout the paper $\lambda$ will denote the Lebesgue measure.
2. Irreducibility of the Markovian semigroup associated to equation (1)

In this section we will define the setting in which the results can be formulated. We begin with the definition of a time homogeneous (compensated) Poisson random measure.

Definition 2.1. Let $(Z,\mathcal{Z},\nu)$ be a measurable space with a $\sigma$-finite measure $\nu$, and let $(\Omega,\mathcal{F},\mathbb{F},\mathbb{P})$ be a filtered probability space with right continuous filtration $\mathbb{F}=(\mathcal{F}_t)_{t\ge0}$. A time homogeneous Poisson random measure $\eta$ on $(Z,\mathcal{Z})$ over $(\Omega,\mathcal{F},\mathbb{F},\mathbb{P})$ is a measurable function $\eta\colon(\Omega,\mathcal{F})\to(M_I(Z\times[0,\infty)),\mathcal{M}_I(Z\times[0,\infty)))$ such that
(i) $\eta(\emptyset\times I)=0$ a.s. for $I\in\mathcal{B}([0,\infty))$ and $\eta(A\times\emptyset)=0$ a.s. for $A\in\mathcal{Z}$;
(ii) for each $B\times I\in\mathcal{Z}\times\mathcal{B}([0,\infty))$, $\eta(B\times I):=i_{B\times I}\circ\eta\colon\Omega\to\bar{\mathbb{N}}$ is a Poisson random variable with parameter $\nu(B)\lambda(I)$ (if $\nu(B)\lambda(I)=\infty$, then obviously $\eta(B\times I)=\infty$ a.s.);
(iii) $\eta$ is independently scattered, i.e. if the sets $B_j\times I_j\in\mathcal{Z}\times\mathcal{B}([0,\infty))$, $j=1,\dots,n$, are pairwise disjoint, then the random variables $\eta(B_j\times I_j)$, $j=1,\dots,n$, are mutually independent;
(iv) for each $U\in\mathcal{Z}$, the $\bar{\mathbb{N}}$-valued process $(N(t,U))_{t>0}$ defined by
$$N(t,U):=\eta(U\times(0,t]), \quad t>0,$$
is $(\mathcal{F}_t)_{t\ge0}$-adapted and its increments are independent of the past, i.e. if $t>s\ge0$, then $N(t,U)-N(s,U)=\eta((s,t]\times U)$ is independent of $\mathcal{F}_s$.
The measure $\nu$ defined by
$$\nu\colon\mathcal{Z}\ni A\mapsto\mathbb{E}\,\eta(A\times(0,1])\in\bar{\mathbb{N}}$$
is called the intensity of $\eta$.

We denote by $\tilde\eta$ the compensated Poisson random measure defined by
$$\tilde\eta(U\times I)=\eta(U\times I)-\nu(U)\lambda(I), \quad U\times I\in\mathcal{Z}\times\mathcal{B}([0,\infty)).$$
If the intensity of a Poisson random measure is a Lévy measure (i.e. a $\sigma$-finite measure $\nu$ on $\mathbb{R}$ such that $\nu(\{0\})=0$ and $\int_{\mathbb{R}}(|z|^2\wedge1)\,\nu(dz)<\infty$), then one can construct from the Poisson random measure a Lévy process. Conversely, tracing the jumps, one can find a Poisson random measure associated to each Lévy process. For more details on this relationship we refer to [1, 6].
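The construction of a Lévy process from a Poisson random measure can be illustrated numerically once the Lévy measure is truncated away from the origin, so that its total mass is finite and the resulting process is compound Poisson. The following sketch (pure Python; the density $|z|^{-\alpha-1}$ on $\{\rho_0\le|z|\le1\}$ and all numerical values are illustrative assumptions, not data from the paper) samples the atoms of such a Poisson random measure and sums the jump sizes to obtain $L(t)=\int_0^t\int z\,\eta(dz,ds)$.

```python
import random

random.seed(0)

# Illustrative truncated Lévy measure: nu(dz) = |z|^(-ALPHA-1) dz on
# {RHO0 <= |z| <= 1}.  Truncating away from 0 makes the total mass finite,
# so the associated Lévy process is compound Poisson and can be simulated
# exactly.  (These parameter values are assumptions, not from the paper.)
ALPHA, RHO0 = 1.5, 0.1

def nu_mass():
    # total mass: nu({RHO0 <= |z| <= 1}) = 2 * int_RHO0^1 z^(-ALPHA-1) dz
    return 2.0 * (RHO0 ** -ALPHA - 1.0) / ALPHA

def sample_jump():
    # inverse-CDF sampling of the normalized jump-size distribution on
    # [RHO0, 1], with a random sign because the measure is symmetric
    half = nu_mass() / 2.0
    r = random.random() * half  # nu-mass between RHO0 and the magnitude
    mag = (RHO0 ** -ALPHA - ALPHA * r) ** (-1.0 / ALPHA)
    return mag if random.random() < 0.5 else -mag

def levy_path(t_max):
    # atoms of the Poisson random measure eta on {|z| >= RHO0} x [0, t_max]:
    # jump times form a Poisson process of rate nu_mass(), and L(t) sums the jumps
    jumps, t = [], 0.0
    while True:
        t += random.expovariate(nu_mass())
        if t > t_max:
            break
        jumps.append((t, sample_jump()))
    return jumps

path = levy_path(10.0)
L_T = sum(z for _, z in path)  # L(10) = sum of all jump sizes up to t = 10
```

Conversely, reading off the pairs (time, jump size) from a path of this process recovers the atoms of the underlying Poisson random measure, which is the "tracing the jumps" direction mentioned above.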
Let $\mathfrak{A}=(\Omega,\mathcal{F},\mathbb{F},\mathbb{P})$ be a complete probability space with right continuous filtration $\mathbb{F}=(\mathcal{F}_t)_{t\ge0}$, and let $\eta$ be a time homogeneous Poisson random measure on $\mathbb{R}$ over $\mathfrak{A}$ with intensity $\nu$ being a Lévy measure and with compensator $\gamma_\nu$ defined by
$$\gamma_\nu\colon\mathcal{B}(\mathbb{R})\times\mathcal{B}([0,\infty))\ni(A\times I)\mapsto\gamma_\nu(A\times I):=\nu(A)\lambda(I)\in\mathbb{R}^+_0.$$
Our assumptions are given below.

Hypothesis 1. We assume that the Lévy measure $\nu$ is symmetric and has a density $k$ with respect to the Lebesgue measure $\lambda$. Furthermore, we assume that for any $\rho>0$ the density $k$ is bounded on the interval $(\rho,\infty)$, and that there exist an index $\alpha\in(1,2)$, constants $K_1,K_2>0$ and $\rho_0>0$ such that
$$K_1|\rho|^{-\alpha-1}\le k(\rho)\le K_2|\rho|^{-\alpha-1}, \quad \text{for all } |\rho|\le\rho_0.$$

Hypothesis 2. We assume that
$$\int_{\{|z|\ge\rho_0\}}|z|^2\,\nu(dz)<\infty.$$

Remark 2.2. Note that Hypothesis 1 and Hypothesis 2 imply
$$\int_{\mathbb{R}}|z|^2\,\nu(dz)<\infty. \tag{5}$$

Hypothesis 3. Let $H_1$ be a separable Hilbert space such that the embedding $H\hookrightarrow H_1$ is continuous. We assume that $B\colon\mathbb{R}\to H_1$ is a linear operator. We suppose that $A$ is the infinitesimal generator of a $C_0$-semigroup $\{e^{tA},\,t\ge0\}$ on $H$ and that for any $T>0$ there exists a constant $K_T$ such that
$$\int_0^T\|e^{(T-s)A}B\|^2_{L(\mathbb{R},H)}\,ds<K_T. \tag{6}$$

Hypothesis 4. Let $\sigma\colon H\to\mathbb{R}$ be a Lipschitz continuous mapping which is bounded from below and above, i.e., there exist positive constants $C_\sigma$ and $\ell$ such that
$$C_\sigma<|\sigma(u)|\le\ell \quad \text{for all } u\in H.$$
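A quick numerical sanity check of Remark 2.2 (an illustration under the assumed model density $k(z)=|z|^{-\alpha-1}$, which satisfies the two-sided bound of Hypothesis 1 with $K_1=K_2=1$): near the origin the second moment integrates to $\rho_0^{2-\alpha}/(2-\alpha)$, finite precisely because $\alpha<2$.

```python
import math

# Illustrative check of Remark 2.2: with the model density k(z) = |z|^(-ALPHA-1)
# of Hypothesis 1 near zero, the small-jump contribution to int |z|^2 nu(dz)
# equals RHO0^(2-ALPHA)/(2-ALPHA).  (Parameter values are assumptions.)
ALPHA, RHO0 = 1.5, 1.0

def k(z):
    return abs(z) ** (-ALPHA - 1.0)

# midpoint Riemann sum for int_0^RHO0 z^2 k(z) dz (one half of the symmetric integral);
# the integrand z^(1 - ALPHA) is singular at 0 but integrable since ALPHA < 2
n = 200000
h = RHO0 / n
approx = sum(((i + 0.5) * h) ** 2 * k((i + 0.5) * h) * h for i in range(n))

closed_form = RHO0 ** (2.0 - ALPHA) / (2.0 - ALPHA)
```

The large-jump contribution is finite by Hypothesis 2 directly, so (5) follows by splitting the integral at $\rho_0$.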
By $u(t,x)$ we denote the solution of the following stochastic evolution equation
$$du(t,x)=Au(t,x)\,dt+\int_{\mathbb{R}}B\,\sigma(u(t,x))\,z\,\tilde\eta(dz,dt), \quad t>0; \qquad u(0,x)=x\in H. \tag{7}$$
Typical examples of such systems are SPDEs with boundary noise; they are presented in the following examples (for more details we refer to Section 4 and Section 5).
Example 2.3. We consider the vibration of a string of length $2\pi$ where one end is fixed and the other end is perturbed by a Lévy noise. To be more precise, let $T>0$, $\tilde\rho\in\mathbb{R}$, and consider the system
$$\begin{cases} u_{tt}(t,\xi)-u_{\xi\xi}(t,\xi)+\tilde\rho\,u_t(t,\xi)=0, & t\in(0,T),\ \xi\in(0,2\pi),\\ u(t,0)=0, & t\in(0,T),\\ u_\xi(t,2\pi)=\big(2+\sin(|u(t)|_{L^2(\mathcal{O})})\big)\,\dot L_t, & t\in(0,T),\\ u(0,\xi)=x_0(\xi),\quad u_t(0,\xi)=x_1(\xi), & \xi\in(0,2\pi), \end{cases} \tag{8}$$
where $\dot L$ is the Radon-Nikodym derivative of a real-valued Lévy process with intensity measure $\nu$, $x_0\in H^1_0(0,2\pi)$ and $x_1\in L^2(0,2\pi)$.
Example 2.4. We consider the temperature of a one-dimensional rod of length $2\pi$ exposed to a heat source with random fluctuations on one side. To model these random fluctuations a Lévy noise is added at the boundary $\xi=2\pi$, while the boundary $\xi=0$ is fixed to have zero temperature. To be more precise, let $T>0$ and consider the system
$$\begin{cases} u_t(t,\xi)-u_{\xi\xi}(t,\xi)=0, & t\in(0,T),\ \xi\in(0,2\pi),\\ u(t,0)=0, & t\in(0,T),\\ u_\xi(t,2\pi)=\dot L_t, & t\in(0,T),\\ u(0,\xi)=x_0(\xi), & \xi\in(0,2\pi). \end{cases} \tag{9}$$
Here $x_0\in L^2(0,2\pi)$ and $\dot L$ is the Radon-Nikodym derivative of a real-valued Lévy process with intensity measure $\nu$.
Example 2.3 and Example 2.4 are of different types; we show their well-posedness, i.e., the existence and uniqueness of their solutions, in Section 4 and Section 5. In these sections we also show how these examples can be written as abstract evolution equations of type (7). For the time being, we assume that they can be written in the form (7) and satisfy the hypotheses listed before. Moreover, we assume that each of them has a unique mild solution in the sense of the next definition.

Definition 2.5. By a solution of (7) we mean an $H$-valued (i.e. $u(t)$ is an $H$-valued random variable for all $t\ge0$) and progressively measurable process $u$ such that
$$\mathbb{P}\Big(\int_0^t\int_Z\big|e^{(t-s)A}B\,\sigma(u(s,x))\,z\big|^p\,\nu(dz)\,ds<\infty\Big)=1, \quad t\ge0,$$
and satisfying the following equation $\mathbb{P}$-a.s.
$$u(t,x)=e^{tA}x+\int_0^t\int_Z e^{(t-s)A}B\,\sigma(u(s,x))\,z\,\tilde\eta(dz,ds), \quad t\ge0. \tag{10}$$
Before continuing we introduce some definitions from control theory.

Definition 2.6.
(i) We say that the system
$$\dot u_c(t,x,v)=Au_c(t,x,v)+Bv(t), \quad t\ge0; \qquad u_c(0,x,v)=x, \tag{11}$$
is null controllable at time $T$ iff for any $x\in H$ there exists a control $v\in L^2(0,T;\mathbb{R})$ such that $u_c(T,x,v)=0$.
(ii) We say that the system (11) is exactly controllable at time $T$ in $x\in H$ iff for each $y\in H$ there exists a control $v\in L^2(0,T;\mathbb{R})$ such that $u_c(T,x,v)=y$.
(iii) We say that the system (11) is approximately controllable at time $T$ in $x\in H$ iff for each $y\in H$ and $\epsilon>0$ there exists a control $v\in L^2(0,T;\mathbb{R})$ such that $|u_c(T,x,v)-y|\le\epsilon$.
Remark 2.7.
• The system (11) associated to the wave equation with boundary control described in Example 2.3 is exactly controllable at time $T\ge2\pi$ (Zuazua [35], or [34, Proposition 10.2.3, p. 325]).
• The system (11) associated to the heat equation with Neumann boundary control described in Example 2.4 is approximately controllable at any time $T>0$ (Laroche, Martin and Rouchon [21], see also [4, Theorem 5.3, p. 337]). Moreover, it is null controllable for any $T>0$ (see [34, Example 11.2.5, p. 360]).
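In finite dimensions a steering control realizing the notions of Definition 2.6 can be written down explicitly through the controllability Gramian. The following sketch (pure Python; the $2\times2$ damped-oscillator matrices, the horizon $T$ and the grid sizes are illustrative assumptions, not data from the paper) builds the minimum-energy control $v(s)=B^\top e^{(T-s)A^\top}Q_T^{-1}(y-e^{TA}x)$, where $Q_T=\int_0^T e^{(T-s)A}BB^\top e^{(T-s)A^\top}\,ds$, and verifies numerically that it steers $x$ to the target $y=0$, i.e. null controllability in the sense of Definition 2.6-(i).

```python
import math

# Hypothetical 2x2 single-input system (a damped oscillator in first-order
# form); all numerical values are illustrative, not taken from the paper.
A = [[0.0, 1.0], [-1.0, -0.5]]
B = [0.0, 1.0]
T, N = 2.0, 4000
h = T / N

def expm(M, t, terms=40):
    # e^{tM} for a 2x2 matrix via its power series (converges rapidly here)
    E = [[1.0, 0.0], [0.0, 1.0]]
    P = [[1.0, 0.0], [0.0, 1.0]]
    for k in range(1, terms):
        P = [[sum(P[i][l] * M[l][j] * t / k for l in range(2)) for j in range(2)]
             for i in range(2)]
        E = [[E[i][j] + P[i][j] for j in range(2)] for i in range(2)]
    return E

def mat_vec(M, w):
    return [M[0][0] * w[0] + M[0][1] * w[1], M[1][0] * w[0] + M[1][1] * w[1]]

# W[i] = e^{(T-s_i)A} B at midpoints s_i; the Gramian Q_T is a midpoint Riemann sum
W = [mat_vec(expm(A, T - (i + 0.5) * h), B) for i in range(N)]
Q = [[h * sum(w[a] * w[b] for w in W) for b in range(2)] for a in range(2)]

def solve2(M, r):
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [(M[1][1] * r[0] - M[0][1] * r[1]) / det,
            (-M[1][0] * r[0] + M[0][0] * r[1]) / det]

# minimum-energy control steering x to y: v(s) = W(s) . Q_T^{-1}(y - e^{TA} x)
x, y = [1.0, -1.0], [0.0, 0.0]
eTAx = mat_vec(expm(A, T), x)
lam = solve2(Q, [y[0] - eTAx[0], y[1] - eTAx[1]])
v = [W[i][0] * lam[0] + W[i][1] * lam[1] for i in range(N)]

# second-order (midpoint RK2) integration of u' = Au + Bv checks u(T) ~ y
u = x[:]
for i in range(N):
    k1 = [sum(A[j][l] * u[l] for l in range(2)) + B[j] * v[i] for j in range(2)]
    um = [u[j] + 0.5 * h * k1[j] for j in range(2)]
    k2 = [sum(A[j][l] * um[l] for l in range(2)) + B[j] * v[i] for j in range(2)]
    u = [u[j] + h * k2[j] for j in range(2)]
```

The same map $v\mapsto\int_0^T e^{(T-s)A}Bv(s)\,ds$ appears below as the operator $\Phi_T$ in the proofs of the main theorems; there, boundedness of a steering operator replaces the explicit Gramian inverse, which need not exist in infinite dimensions.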
For $C>0$ and $y\in H$, we set
$$D_H(y,C):=\{z\in H:|z-y|\le C\},$$
and, for simplicity, we just write $D_H(C)$ for $D_H(0,C)$. Let $u$ be the solution of the stochastic evolution equation
$$du(t,x)=Au(t,x)\,dt+\int_{\mathbb{R}}B\,z\,\tilde\eta(dz,dt), \qquad u(0,x)=x\in H. \tag{12}$$
Then, we have the following theorems.

Theorem 2.8. Let us assume that Hypotheses 1 to 3 are satisfied. In addition, we assume that the system (11) is approximately controllable at time $T>0$. Let $u$ be the unique solution of Eq. (12). Then for any $x,y\in H$ and $\delta>0$ there exists a number $\kappa=\kappa(x,y,\delta)>0$ such that
$$\mathbb{P}\big(u(T,x)\in D_H(y,\delta)\big)\ge\kappa. \tag{13}$$
In case the system is exactly controllable, the result of the above theorem can be strengthened as follows.

Theorem 2.9. Let us assume that Hypotheses 1 to 3 are satisfied. In addition, we assume that the system (11) is exactly controllable at time $T>0$. Let $u$ be the unique solution of Eq. (12). Then for all $C>0$, all $x_0,y\in H$, and all $\delta>0$ there exists $\kappa=\kappa(x_0,C,y,\delta)>0$ such that for all $x\in D_H(x_0,C)$ we have
$$\mathbb{P}\big(u(T,x)\in D_H(y,\delta)\big)\ge\kappa.$$
One gets a similar result if the system (11) is only null controllable.

Theorem 2.10. Let us assume that Hypotheses 1 to 3 are satisfied. In addition, we assume that the system (11) is null controllable. Let $u$ be the unique solution of Eq. (12). Then for all $C>0$ and all $\delta>0$ there exists a number $\kappa=\kappa(\delta,C)>0$ such that for all $x\in D_H(C)$ we have
$$\mathbb{P}\big(u(T,x)\in D_H(\delta)\big)\ge\kappa.$$
Let $u$ be the solution of the stochastic evolution equation
$$du(t,x)=Au(t,x)\,dt+\int_Z B\,\sigma(u(t,x))\,z\,\tilde\eta(dz,dt), \qquad u(0,x)=x. \tag{14}$$
We will prove the following result later.

Theorem 2.11. Let us assume that Hypotheses 1 to 4 are satisfied. In addition, we assume that the system (11) is approximately controllable at time $T>0$. Let $u$ be the unique solution of Eq. (14). If $u$ is càdlàg in $H$, then for any $\delta>0$ and all $x,y\in H$ there exists a number $\kappa=\kappa(x,y,\delta)>0$ such that
$$\mathbb{P}\big(u(T,x)\in D_H(y,\delta)\big)\ge\kappa. \tag{15}$$
Again, in the case when the system is exactly controllable, the result of the above theorem can be strengthened as follows.

Theorem 2.12. Let us assume that Hypotheses 1 to 4 are satisfied. In addition, we assume that the system (11) is exactly controllable at time $T>0$. Let $u$ be a solution of Eq. (14). If $u$ is càdlàg in $H$, then for all $C>0$, all $x_0,y\in H$, and all $\delta>0$ there exists $\kappa=\kappa(x_0,C,y,\delta)>0$ such that for all $x\in D_H(x_0,C)$ we have
$$\mathbb{P}\big(u(T,x)\in D_H(y,\delta)\big)\ge\kappa.$$
If the system is only null controllable at time $T$, then again the disk $D_H(y,C)$ centered at any point $y\in H$ has to be replaced by a disk $D_H(C)$ centered at the origin. Since this claim is important for our analysis, we state and prove it in the following theorem.

Theorem 2.13. Let us assume that Hypotheses 1 to 4 are satisfied. In addition, we assume that the system (11) is null controllable at time $T>0$. Let $u$ be an $H$-càdlàg solution of Eq. (14). Then for all $C>0$ and all $\delta>0$ there exists a number $\kappa=\kappa(C,\delta)>0$ such that for all $x\in D_H(C)$ we have
$$\mathbb{P}\big(u(T,x)\in D_H(\delta)\big)\ge\kappa. \tag{16}$$
As mentioned in the introduction, we illustrate the applicability of these theorems with the wave and heat equations with Lévy boundary noise. In particular, we will give the detailed mathematical treatment of Example 2.3 and Example 2.4 in Section 4 and Section 5, respectively.
Proof of Theorem 2.8. For technical reasons we will switch to another representation of the Poisson random measure. Let $\bar{\mathfrak{A}}=(\bar\Omega,\bar{\mathcal{F}},\bar{\mathbb{F}},\bar{\mathbb{P}})$ be a filtered probability space with filtration $\bar{\mathbb{F}}=(\bar{\mathcal{F}}_t)_{t\ge0}$ and let $\mu$ be a Poisson random measure on $\mathbb{R}\times[0,T]$ over $\bar{\mathfrak{A}}$ having the Lebesgue measure $\lambda$ as its intensity measure. The compensator of $\mu$ is denoted by $\gamma$ and given by
$$\mathcal{B}(\mathbb{R})\times\mathcal{B}([0,\infty))\ni A\times I\mapsto\gamma(A\times I):=\lambda(A)\lambda(I).$$
Let $U\colon(0,\infty)\to\mathbb{R}_+$ denote the tail integral $U(r)=\int_r^\infty k(s)\,ds$. Observe that $U$ is a strictly decreasing and continuous function. Let $c\colon\mathbb{R}_+\to\mathbb{R}_+$ be its inverse, given by
$$c\colon\mathbb{R}_+\ni r\mapsto c(r):=\sup\{\kappa>0:U(\kappa)\ge r\} \quad \text{if } r>0. \tag{17}$$
Since $k$ is symmetric, $c$ can be extended to the negative numbers by putting $c(-z)=-c(z)$, $z\in\mathbb{R}_+$, and $c(0)=0$. Note that Hypothesis 1 implies that there exist an index $\alpha\in(1,2)$, constants $K_1,K_2>0$ and $r_0>0$ such that
$$K_1|z|^{-\frac{1}{\alpha}}\le|c(z)|\le K_2|z|^{-\frac{1}{\alpha}}, \quad \text{for all } |z|\ge r_0.$$
In what follows we regard $c$ as a function on $\mathbb{R}$.
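For the model density $k(z)=|z|^{-\alpha-1}$ (an assumption made here only for illustration; the paper requires just the two-sided bound of Hypothesis 1 near zero), the tail integral and its inverse are explicit, and the $|z|^{-1/\alpha}$ decay of $c$ quoted above can be checked directly:

```python
import math

# Illustrative model density k(s) = s^(-ALPHA-1) on (0, infinity); its tail
# integral and the inverse c used in the proof are available in closed form.
ALPHA = 1.5

def U(r):
    # tail integral U(r) = int_r^inf s^(-ALPHA-1) ds = r^(-ALPHA)/ALPHA, r > 0
    return r ** (-ALPHA) / ALPHA

def c(r):
    # inverse of the strictly decreasing tail integral: c(r) = (ALPHA*r)^(-1/ALPHA)
    return (ALPHA * r) ** (-1.0 / ALPHA)

roundtrip = [c(U(r)) for r in (0.1, 1.0, 10.0)]   # c inverts U exactly
decay_ratio = c(2.0) / c(1.0)                      # equals 2^(-1/ALPHA)
```

The decay exponent $-1/\alpha$ is exactly what makes the tail estimate $\int_{\{|z|>R\}}|c(z)|^2\,dz\le C\,R^{1-2/\alpha}$ (used below in (21) and (30)) finite and small for large $R$, since $2/\alpha>1$.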
Let
$$L(t):=\int_0^t\int_{\mathbb{R}}z\,\tilde\eta(dz,ds), \quad t\ge0,$$
and
$$L_c(t):=\int_0^t\int_{\mathbb{R}}c(z)\,\tilde\mu(dz,ds), \quad t\ge0.$$
By [1, Theorem 2.3.7] we have
$$\bar{\mathbb{E}}\exp\Big[iy\int_0^t\int_{\mathbb{R}}c(z)\,\tilde\mu(dz,ds)\Big]=\exp\Big(t\int_{\mathbb{R}}\big(e^{iyz}-1\big)\,\lambda_c(dz)\Big),$$
where $\lambda_c(B)=\lambda(c^{-1}(B))$ for each $B\in\mathcal{B}(\mathbb{R})$. Thanks to the definition of $c(\cdot)$ we can deduce that
$$\bar{\mathbb{E}}\exp\Big[iy\int_0^t\int_{\mathbb{R}}c(z)\,\tilde\mu(dz,ds)\Big]=\exp\Big(t\int_{\mathbb{R}}\big(e^{iyz}-1\big)\,k(z)\,dz\Big)=\mathbb{E}\exp\Big[iy\int_0^t\int_{\mathbb{R}}z\,\tilde\eta(dz,ds)\Big].$$
Thus, the processes $L=\{L(t):0\le t<\infty\}$ and $L_c=\{L_c(t):0\le t<\infty\}$ have the same law. Now, the stochastic evolution equation given in (12) reads as follows on $\bar{\mathfrak{A}}$:
$$du(t,x)=Au(t,x)\,dt+\int_{\mathbb{R}}B\,c(z)\,\tilde\mu(dz,dt), \qquad u(0,x)=x\in H. \tag{18}$$
Fix $y\in H$, $\delta>0$, $T>0$ and $x\in H$. In order to prove Theorem 2.8 we need a result from control theory. Given $v\in L^2([0,\infty);\mathbb{R})$, let $u_c$ be the solution to
$$du_c(t,x,v)=Au_c(t,x,v)\,dt+Bv(t)\,dt, \quad t\ge0; \qquad u_c(0,x,v)=x. \tag{19}$$
Since the system (19) is supposed to be approximately controllable, there exists $v\in L^2(0,T;\mathbb{R})$ such that
$$|u_c(T,x,v)-y|\le\frac{\delta}{3}. \tag{20}$$
Now, let $\tilde R>0$ be as in Corollary A.4-(2) and choose $R>\tilde R\vee r_0$ such that
$$\frac{3}{\delta^2}\,C\,T\,R^{1-\frac{2}{\alpha}}\le\frac12. \tag{21}$$
Here $C$ is a generic constant which does not depend on $\delta$, $T$ and $R$ (see (30)). Next we set
$$g_R=\int_{D_{\mathbb{R}}(R)}c(z)\,\lambda(dz). \tag{22}$$
Let $\theta^{(R)}\colon[0,T]\times\mathbb{R}\to\mathbb{R}$ be a transformation such that
$$v(s)+g_R=\int_{\mathbb{R}}\big[c(\theta^{(R)}(s,z))-c(z)\big]\,\lambda(dz), \quad s\in[0,T]. \tag{23}$$
Observe that for $z\in D_{\mathbb{R}}(R)$ we have $c(\theta^{(R)}(s,z))=c(z)$. The existence of such a transformation is given by Corollary A.3 and Corollary A.4. Indeed, Corollary A.3 implies that for any $R\ge r_0$ there exists $\vartheta_R\colon\mathbb{R}\times\mathbb{R}\setminus\{0\}\to\mathbb{R}$ such that
$$v(s)+g_R=\int_{\mathbb{R}}\big[c(\vartheta_R(v(s)+g_R,z))-c(z)\big]\,\lambda(dz), \quad s\in[0,T].$$
Setting
$$\theta^{(R)}(s,z):=\vartheta_R(v(s)+g_R,\,z),$$
it is easy to see that for any $s\in[0,T]$ and $z\in\mathbb{R}\setminus\{0\}$ the transformation $\theta^{(R)}$ satisfies (23). In addition, due to Corollary A.4, we can assume that $|\theta^{(R)}_z(s,z)-1|\le\frac12$ for all $z\in\mathbb{R}\setminus\{0\}$ and $s\in[0,T]$. For simplicity, we will write $\theta$ instead of $\theta^{(R)}$ throughout this proof.
Let $\mu_\theta$ be the random measure defined by
$$\mu_\theta\colon\bar\Omega\times\mathcal{B}(\mathbb{R})\otimes\mathcal{B}([0,T])\ni(\omega,A\times I)\mapsto\int_{\mathbb{R}}\int_I\mathbf{1}_A(\theta(s,z))\,\mu(\omega,dz,ds).$$

Remark 2.14. Note that, since $\mu$ is a (Poisson) random measure depending on $\omega$, $\mu_\theta$ also depends on $\omega$. Moreover, $\mu_\theta$ is not necessarily time homogeneous on $\bar{\mathfrak{A}}$, i.e. it is a random measure but not necessarily a Poisson random measure on $\bar{\mathfrak{A}}$.

Let $\mathbb{Q}_\theta$ be the probability measure on $\bar{\mathfrak{A}}$ such that $\mu_\theta$ has compensator $\gamma$. The existence of such a probability measure $\mathbb{Q}_\theta$ is ensured by Lemma B.1. Under $\mathbb{Q}_\theta$, the process $u^\theta_\mu$ defined by
$$du^\theta_\mu(t,x)=Au^\theta_\mu(t,x)\,dt+\int_{\mathbb{R}}B\,c(z)\,(\mu_\theta-\gamma)(dz,dt), \qquad u^\theta_\mu(0,x)=x, \tag{24}$$
has the same law as $u$. In particular, we have
$$\mathbb{E}_{\bar{\mathbb{P}}}\big[\mathbf{1}_{[0,\delta]}(|u(T,x)-y|)\big]=\mathbb{E}_{\mathbb{Q}_\theta}\big[\mathbf{1}_{[0,\delta]}(|u^\theta_\mu(T,x)-y|)\big].$$
On the other hand, we know that under $\bar{\mathbb{P}}$ the process $u^\theta_\mu$ solves the following stochastic evolution equation:
$$du^\theta_\mu(t,x)=Au^\theta_\mu(t,x)\,dt+\int_{\mathbb{R}}B[c(\theta(t,z))-c(z)]\,(\mu-\gamma)(dz,dt)+\int_{\mathbb{R}}B[c(\theta(t,z))-c(z)]\,\gamma(dz,dt)+\int_{\mathbb{R}}B\,c(z)\,(\mu-\gamma)(dz,dt), \qquad u^\theta_\mu(0,x)=x. \tag{25}$$
Hence we can write
$$\mathbb{P}(|u(T,x)-y|\le\delta)=\mathbb{Q}_\theta(|u^\theta_\mu(T,x)-y|\le\delta)=\mathbb{E}_{\mathbb{Q}_\theta}\big[\mathbf{1}_{|u^\theta_\mu(T,x)-y|\le\delta}\big]=\mathbb{E}_{\bar{\mathbb{P}}}\big[G_\theta(T)\,\mathbf{1}_{|u^\theta_\mu(T,x)-y|\le\delta}\big],$$
where $G_\theta(T)$ is such that $d\mathbb{Q}_\theta=G_\theta(T)\,d\bar{\mathbb{P}}$. The existence of $G_\theta(T)$ is again ensured by Lemma B.1. Now, let $X=\mathbf{1}_{|u^\theta_\mu(T,x)-y|\le\delta}$ and $Y=G_\theta(T)$. By Hölder's inequality we have
$$\Big(\mathbb{E}_{\bar{\mathbb{P}}}\big[(X\cdot Y)^{\frac12}\,Y^{-\frac12}\big]\Big)^2\le\mathbb{E}_{\bar{\mathbb{P}}}[X\cdot Y]\;\mathbb{E}_{\bar{\mathbb{P}}}[Y^{-1}].$$
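The Hölder (in fact Cauchy-Schwarz) step above yields the lower bound $\mathbb{E}[XY]\ge(\mathbb{E}[X^{1/2}])^2/\mathbb{E}[Y^{-1}]$, which holds on any probability space. A toy numerical check (pure Python; the sample size and the distributions of $X$ and $Y$ are illustrative assumptions) with $X$ an indicator and $Y$ a positive density, mimicking the roles of $\mathbf{1}_{|u^\theta_\mu(T,x)-y|\le\delta}$ and $G_\theta(T)$:

```python
import math
import random

# toy finite sample space checking E[XY] >= E[sqrt(X)]^2 / E[1/Y];
# the distributions below are illustrative, not derived from the paper
random.seed(1)
n = 1000
X = [1.0 if random.random() < 0.3 else 0.0 for _ in range(n)]  # an indicator
Y = [math.exp(random.gauss(0.0, 1.0)) for _ in range(n)]       # a positive density

def E(vals):
    return sum(vals) / n

lhs = E([a * b for a, b in zip(X, Y)])                               # E[X Y]
rhs = E([math.sqrt(a) for a in X]) ** 2 / E([1.0 / b for b in Y])    # E[sqrt X]^2 / E[1/Y]
```

Since $X^{1/2}=(XY)^{1/2}Y^{-1/2}$ pointwise, the inequality is exactly Cauchy-Schwarz applied to the pair $((XY)^{1/2},Y^{-1/2})$, so `lhs >= rhs` holds for every sample, not just on average.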
10 E. HAUSENBLAS AND P. RAZAFIMANDIMBY APRIL 3, 2015
Hence,
$$\mathbb{E}_{\bar{\mathbb{P}}}[X\cdot Y]\ge\frac{\big(\mathbb{E}_{\bar{\mathbb{P}}}[X^{\frac12}]\big)^2}{\mathbb{E}_{\bar{\mathbb{P}}}[Y^{-1}]},$$
from which we infer that
$$\mathbb{P}(|u(T,x)-y|\le\delta)\ge\frac{\Big(\mathbb{E}_{\bar{\mathbb{P}}}\big[\mathbf{1}_{|u^\theta_\mu(T,x)-y|\le\delta}\big]\Big)^2}{\mathbb{E}_{\bar{\mathbb{P}}}\big[|G^{-1}_\theta(T)|\big]}. \tag{26}$$
In what follows we will prove that the right hand side of the above inequality is bounded from below by a positive constant. To this aim, we will first give an upper estimate of $\mathbb{E}_{\bar{\mathbb{P}}}|H(T)|=\mathbb{E}_{\bar{\mathbb{P}}}|G^{-1}_\theta(T)|$, and then give a lower estimate of $\mathbb{E}_{\bar{\mathbb{P}}}\big[\mathbf{1}_{|u^\theta_\mu(T,x)-y|\le\delta}\big]$.
By Remark B.3 the process $H$ defined by $H(t):=G^{-1}_\theta(t)$, $t\ge0$, solves the stochastic differential equation
$$dH(t)=-H(t)\int_{\mathbb{R}}\frac{\theta_z(t,z)-1}{\theta_z(t,z)}\,(\mu-\gamma)(dz,dt)+H(t)\int_{\mathbb{R}}\frac{[\theta_z(t,z)-1]^2}{\theta_z(t,z)}\,\gamma(dz,dt), \qquad H(0)=1,$$
where $\theta_z$ denotes the partial derivative of $\theta$ with respect to $z$. By Corollary A.4-(2) we have $\frac12\le\theta_z\le\frac32$ for the choice of $R$ as above. Since $H$ is of finite variation we see that
$$\mathbb{E}_{\bar{\mathbb{P}}}|H(t)|\le1+2\int_0^t\int_{\mathbb{R}}\mathbb{E}_{\bar{\mathbb{P}}}\big[|H(s)|\,|\theta_z(s,z)-1|\big]\,dz\,ds+2\int_0^t\int_{\mathbb{R}}\mathbb{E}_{\bar{\mathbb{P}}}\big[H(s)\,|\theta_z(s,z)-1|^2\big]\,dz\,ds$$
$$\le1+2\int_0^t\int_{\mathbb{R}}\mathbb{E}_{\bar{\mathbb{P}}}\big[|H(s)|\,|\theta_z(s,z)-1|\big]\,dz\,ds+3\int_0^t\int_{\mathbb{R}}\mathbb{E}_{\bar{\mathbb{P}}}\big[H(s)\,|\theta_z(s,z)-1|\big]\,dz\,ds.$$
Since, by Corollary A.4-(1), there exists $C(R)>0$ such that
$$\int_{-\infty}^\infty|\theta_z(s,z)-1|\,dz\le C(R)\,|v(s)|^2,$$
we have
$$\mathbb{E}_{\bar{\mathbb{P}}}|H(t)|\le1+2C(R)\int_0^t\mathbb{E}_{\bar{\mathbb{P}}}|H(s)|\,|v(s)|^2\,ds+3C(R)\int_0^t\mathbb{E}_{\bar{\mathbb{P}}}|H(s)|\,|v(s)|^2\,ds. \tag{27}$$
Note that the function $v$ is given by the control problem (19) and is not a random function. Now, Gronwall's inequality implies
$$\mathbb{E}_{\bar{\mathbb{P}}}\big|G^{-1}_\theta(T)\big|\le C\cdot\exp\Big(C(R)\int_0^T|v(s)|^2\,ds\Big). \tag{28}$$
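The Gronwall step leading from (27) to (28) can be sketched as follows (a derivation sketch; $C_1(R)$ is shorthand introduced here for the sum of the two constants appearing in (27)):

```latex
% With a(t) := \mathbb{E}_{\bar{\mathbb{P}}}|H(t)|, inequality (27) reads
%   a(t) \le 1 + C_1(R)\int_0^t a(s)\,|v(s)|^2\,ds .
% Gronwall's lemma with the integrable weight s \mapsto |v(s)|^2 \in L^1(0,T)
% then yields
a(t) \;\le\; 1 + C_1(R)\int_0^t a(s)\,|v(s)|^2\,ds
\;\Longrightarrow\;
\mathbb{E}_{\bar{\mathbb{P}}}\big|G_\theta^{-1}(T)\big| \;=\; a(T)
\;\le\; \exp\!\Big(C_1(R)\int_0^T |v(s)|^2\,ds\Big),
```

which is (28) with generic constants. The determinism of $v$ is what allows $|v(s)|^2$ to be pulled out of the expectation in (27).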
Next, by inequality (20) the event $\{|u_c(T,x,v)-y|\le\frac{\delta}{3}\}$ has full probability, so we deduce that
$$\mathbb{E}_{\bar{\mathbb{P}}}\big[\mathbf{1}_{|u^\theta_\mu(T,x)-y|\le\delta}\big]\ge\mathbb{E}_{\bar{\mathbb{P}}}\big[\mathbf{1}_{|u_c(T,x,v)-y|\le\frac{\delta}{3}}\,\mathbf{1}_{|u_c(T,x,v)-u^\theta_\mu(T,x)|\le\frac{\delta}{3}}\big]=\mathbb{E}_{\bar{\mathbb{P}}}\big[\mathbf{1}_{|u_c(T,x,v)-u^\theta_\mu(T,x)|\le\frac{\delta}{3}}\big]. \tag{29}$$
By setting $\Delta(T)=u^\theta_\mu(T,x)-u_c(T,x,v)$ we have
$$\mathbb{E}_{\bar{\mathbb{P}}}\big[\mathbf{1}_{|u^\theta_\mu(T,x)-y|\le\delta}\big]\ge\mathbb{E}_{\bar{\mathbb{P}}}\big[\mathbf{1}_{|\Delta(T)|\le\frac{\delta}{3}}\big].$$
It is clear that
$$\Delta(T)=\int_0^T\!\!\int_{\mathbb{R}}e^{(T-s)A}B[c(\theta(s,z))-c(z)]\,(\mu-\gamma)(dz,ds)+\int_0^T\!\!\int_{\mathbb{R}}e^{(T-s)A}B\,c(z)\,(\mu-\gamma)(dz,ds)$$
$$+\int_0^T\!\!\int_{\mathbb{R}}e^{(T-s)A}B[c(\theta(s,z))-c(z)]\,\gamma(dz,ds)-\int_0^T e^{(T-s)A}Bv(s)\,ds$$
$$=\int_0^T\!\!\int_{\mathbb{R}}e^{(T-s)A}B[c(\theta(s,z))-c(z)]\,\gamma(dz,ds)-\int_0^T e^{(T-s)A}Bv(s)\,ds+\int_0^T\!\!\int_{\mathbb{R}}e^{(T-s)A}B\,c(\theta(s,z))\,(\mu-\gamma)(dz,ds).$$
From the last line and (23) we deduce that
$$\Delta(T)=\int_0^T\!\!\int_{\mathbb{R}}e^{(T-s)A}B\,c(\theta(s,z))\,(\mu-\gamma)(dz,ds)+\int_0^T e^{(T-s)A}B\,g_R\,ds.$$
Now we proceed by giving a lower estimate of $\bar{\mathbb{P}}\big(|\Delta(T)|\le\frac{\delta}{3}\big)$. To this aim, we condition on the value of $\mu(D_{\mathbb{R}}(R)\times[0,T])$ and get
$$\bar{\mathbb{P}}\Big(|\Delta(T)|\le\frac{\delta}{3}\Big)=\bar{\mathbb{P}}\big(\mu(D_{\mathbb{R}}(R)\times[0,T])=0\big)$$
$$\times\bar{\mathbb{P}}\bigg(\Big|\int_0^T\!\!\int_{\mathbb{R}\setminus D_{\mathbb{R}}(R)}e^{(T-s)A}Bc(\theta(s,z))\,(\mu-\gamma)(dz,ds)+\int_0^T\!\!\int_{D_{\mathbb{R}}(R)}e^{(T-s)A}Bc(\theta(s,z))\,(\mu-\gamma)(dz,ds)+\int_0^T e^{(T-s)A}B\,g_R\,ds\Big|\le\frac{\delta}{3}\ \Big|\ \mu(D_{\mathbb{R}}(R)\times[0,T])=0\bigg)$$
$$+\bar{\mathbb{P}}\big(\mu(D_{\mathbb{R}}(R)\times[0,T])>0\big)$$
$$\times\bar{\mathbb{P}}\bigg(\Big|\int_0^T\!\!\int_{\mathbb{R}}e^{(T-s)A}Bc(\theta(s,z))\,(\mu-\gamma)(dz,ds)+\int_0^T e^{(T-s)A}B\,g_R\,ds\Big|\le\frac{\delta}{3}\ \Big|\ \mu(D_{\mathbb{R}}(R)\times[0,T])>0\bigg).$$
Since $c(\theta(s,z))=c(z)$ on $D_{\mathbb{R}}(R)\times[0,T]$, we have, on the event $\{\mu(D_{\mathbb{R}}(R)\times[0,T])=0\}$,
$$\int_0^T\!\!\int_{D_{\mathbb{R}}(R)}e^{(T-s)A}Bc(\theta(s,z))\,(\mu-\gamma)(dz,ds)=-\int_0^T\!\!\int_{D_{\mathbb{R}}(R)}e^{(T-s)A}Bc(\theta(s,z))\,\lambda(dz)\,ds=-\int_0^T e^{(T-s)A}B\,g_R\,ds.$$
Thus,
$$\bar{\mathbb{P}}\Big(|\Delta(T)|\le\frac{\delta}{3}\Big)\ge\bar{\mathbb{P}}\bigg(\Big|\int_0^T\!\!\int_{\mathbb{R}\setminus D_{\mathbb{R}}(R)}e^{(T-s)A}Bc(\theta(s,z))\,(\mu-\gamma)(dz,ds)\Big|\le\frac{\delta}{3}\ \Big|\ \mu(D_{\mathbb{R}}(R)\times[0,T])=0\bigg)\times\bar{\mathbb{P}}\big(\mu(D_{\mathbb{R}}(R)\times[0,T])=0\big).$$
Since the random variables $\mu(D_{\mathbb{R}}(R)\times[0,T])$ and $\mu(A\times[0,T])$, $A\in\mathcal{B}(\mathbb{R}\setminus D_{\mathbb{R}}(R))$, are independent, it follows that
$$\bar{\mathbb{P}}\Big(|\Delta(T)|\le\frac{\delta}{3}\Big)\ge\bar{\mathbb{P}}\bigg(\Big|\int_0^T\!\!\int_{\mathbb{R}\setminus D_{\mathbb{R}}(R)}e^{(T-s)A}Bc(\theta(s,z))\,(\mu-\gamma)(dz,ds)\Big|\le\frac{\delta}{3}\bigg)\times\bar{\mathbb{P}}\big(\mu(D_{\mathbb{R}}(R)\times[0,T])=0\big).$$
By the Chebyshev inequality we know that
$$\bar{\mathbb{P}}\bigg(\Big|\int_0^T\!\!\int_{\mathbb{R}\setminus D_{\mathbb{R}}(R)}e^{(T-s)A}Bc(\theta(s,z))\,(\mu-\gamma)(dz,ds)\Big|\ge\frac{\delta}{3}\bigg)\le\frac{3}{\delta^2}\,\mathbb{E}_{\bar{\mathbb{P}}}\Big|\int_0^T\!\!\int_{\mathbb{R}\setminus D_{\mathbb{R}}(R)}e^{(T-s)A}Bc(\theta(s,z))\,(\mu-\gamma)(dz,ds)\Big|^2.$$
It follows from the Burkholder inequality and the inequality $|c(\theta(t,z))|\le|c(z)|$ (see the construction of $\theta$) that
$$\mathbb{E}_{\bar{\mathbb{P}}}\Big|\int_0^T\!\!\int_{\mathbb{R}\setminus D_{\mathbb{R}}(R)}e^{(T-s)A}Bc(\theta(s,z))\,(\mu-\gamma)(dz,ds)\Big|^2\le\int_0^T\!\!\int_{\mathbb{R}\setminus D_{\mathbb{R}}(R)}\|e^{(T-s)A}B\|^2_{L(\mathbb{R},H)}\,|c(\theta(s,z))|^2\,\lambda(dz)\,ds$$
$$\le\int_0^T\!\!\int_{\mathbb{R}\setminus D_{\mathbb{R}}(R)}\|e^{(T-s)A}B\|^2_{L(\mathbb{R},H)}\,|c(z)|^2\,\lambda(dz)\,ds\le C\,T\,R^{1-\frac{2}{\alpha}}.$$
Therefore, collecting all together we obtain that
$$\mathbb{E}_{\bar{\mathbb{P}}}\big[\mathbf{1}_{|u_c(T,x,v)-u^\theta_\mu(T,x)|\le\frac{\delta}{3}}\big]\ge\frac{1}{C(R)}\Big(1-\frac{3}{\delta^2}\,C\,T\,R^{1-\frac{2}{\alpha}}\Big).$$
Since $R$ is chosen in such a way that
$$\frac{3}{\delta^2}\,C\,T\,R^{1-\frac{2}{\alpha}}\le\frac12, \tag{30}$$
and using the fact that
$$\bar{\mathbb{P}}\big(\mu(D_{\mathbb{R}}(R)\times[0,T])=0\big)=e^{-\lambda(D_{\mathbb{R}}(R))T},$$
we get
$$\mathbb{E}_{\bar{\mathbb{P}}}\big[\mathbf{1}_{|u^\theta_\mu(T,x)-y|\le\delta}\big]\ge\mathbb{E}_{\bar{\mathbb{P}}}\big[\mathbf{1}_{|\Delta(T)|\le\frac{\delta}{3}}\big]\ge\bar{\mathbb{P}}\big(\mu(D_{\mathbb{R}}(R)\times[0,T])=0\big)$$
$$\times\bigg(1-\bar{\mathbb{P}}\Big(\Big|\int_0^T\!\!\int_{\mathbb{R}\setminus D_{\mathbb{R}}(R)}e^{(T-s)A}Bc(\theta(s,z))\,(\mu-\gamma)(dz,ds)\Big|\ge\frac{\delta}{3}\ \Big|\ \mu(D_{\mathbb{R}}(R)\times[0,T])=0\Big)\bigg)$$
$$\ge e^{-\lambda(D_{\mathbb{R}}(R))T-C(R)}\Big(1-\frac{3}{\delta^2}\,C\,T\,R^{1-\frac{2}{\alpha}}\Big)\ge\frac12\,e^{-\lambda(D_{\mathbb{R}}(R))T-C(R)}.$$
Hence, we have shown that
$$\bar{\mathbb{P}}\big(|u(T,x)-y|\le\delta\big)\ge\frac14\,e^{-2\lambda(D_{\mathbb{R}}(R))T-C(R)}\,\exp\big(-|v|^2_{L^2([0,T];\mathbb{R})}\big), \tag{31}$$
which completes the proof of Theorem 2.8. □
Proof of Theorem 2.9. Let $x\in D_H(x_0,C)$ and let $u_c$ be the solution to
$$\dot u_c(t,x,v)=Au_c(t,x,v)+Bv(t), \quad t\ge0; \qquad u_c(0,x,v)=x. \tag{32}$$
In order to prove Theorem 2.9, we have to show that the right hand side of inequality (31) can be estimated from below uniformly for all $x\in D_H(x_0,C)$. That is, we have to check that for any $y\in H$ there exists a constant $K>0$ such that for any $x\in D_H(x_0,C)$ there is a control $v_{x,y}\in L^2(0,T;\mathbb{R})$ with $u_c(T,x,v_{x,y})=y$ and
$$|v_{x,y}|_{L^2(0,T;\mathbb{R})}\le K, \quad x\in D_H(x_0,C).$$
To be more precise, let us define the linear mapping $\Phi_T\colon L^2(0,T;\mathbb{R})\to H$ by
$$\Phi_T(v)=\int_0^T e^{(T-s)A}Bv(s)\,ds.$$
Now, the exact controllability of system (32) is equivalent to the fact that $\operatorname{Ran}(\Phi_T)=H$ (see [34, Definition 11.1.1]). By [34, Proposition 12.1.3] it follows that $\Phi_T$ admits a bounded inverse. In particular, there exists a constant $c>0$ such that
$$|\Phi_T(v)|^2_H\ge c\,|v|^2_{L^2(0,T;\mathbb{R})}, \quad v\in L^2(0,T;\mathbb{R}).$$
Since $u_c(T,x_1,v_{x_1,y})=u_c(T,x_2,v_{x_2,y})=y$ and $u_c(T,x_i,v_{x_i,y})=e^{TA}x_i+\int_0^T e^{(T-s)A}Bv_{x_i,y}(s)\,ds$, we have
$$c\,|v_{x_1,y}-v_{x_2,y}|^2_{L^2([0,T];\mathbb{R})}\le\Big|\int_0^T e^{(T-s)A}B\big(v_{x_1,y}(s)-v_{x_2,y}(s)\big)\,ds\Big|^2_H$$
$$=\Big|\int_0^T e^{(T-s)A}Bv_{x_1,y}(s)\,ds-\int_0^T e^{(T-s)A}Bv_{x_2,y}(s)\,ds\Big|^2_H=\big|e^{TA}x_1-e^{TA}x_2\big|^2_H\le\tilde C\,|x_1-x_2|^2_H.$$
Hence, for all $x\in D_H(x_0,C)$ there exists $v_{x,y}\in D_{L^2([0,T];\mathbb{R})}\big(v_{x_0,y},\sqrt{\tilde C/c}\,C\big)$ such that $u_c(T,x,v_{x,y})=y$. Since, by the choice of $R$ (see (21)), $R$ depends only on $\delta$, we see by estimate (31) that $\kappa$ can be chosen such that it depends only on $\delta$ and $C$ (for fixed $x_0$ and $y$). The constants $\tilde C$ and $c$ are given by the system. This concludes the proof of Theorem 2.9. □
Proof of Theorem 2.10. Let $x\in H$ and let $u_c$ be the solution to
$$\dot u_c(t,x,v)=Au_c(t,x,v)+Bv(t), \quad t\ge0; \qquad u_c(0,x,v)=x. \tag{33}$$
Again, in order to prove Theorem 2.10, we have to show that the right hand side of inequality (31) can be estimated from below for all $x\in D_H(C)$. That is, we have to check that there exists a constant $K>0$ such that for any $x\in D_H(C)$ there is a control $v_x\in L^2(0,T;\mathbb{R})$ with $u_c(T,x,v_x)=0$ and
$$|v_x|_{L^2(0,T;\mathbb{R})}\le K, \quad x\in D_H(C).$$
However, due to Definition 11.1.1 of [34, p. 356], the pair $(A,B)$ is null controllable at time $T$ iff $\operatorname{Ran}e^{TA}\subset\operatorname{Ran}\Phi_T$, where
$$\Phi_T\colon L^2(0,T;\mathbb{R})\ni v\mapsto\int_0^T e^{(T-s)A}Bv(s)\,ds\in H.$$
In addition, by the closed graph theorem (see e.g. [34, Proposition 12.1.2, p. 385]), it follows that there exists an operator $L\in L(H,L^2(0,T;\mathbb{R}))$ such that $e^{TA}=\Phi_T L$. In particular, $v_x=-Lx$ is a control which steers $x$ to $0$ in system (33). Since $L\in L(H,L^2(0,T;\mathbb{R}))$, there exists a constant $K>0$ such that $\|v_x\|_{L^2(0,T;\mathbb{R})}\le K|x|_H$. Since for all $x\in D_H(C)$ we have $|x|\le2C$, we infer that
$$|v_x|_{L^2(0,T;\mathbb{R})}\le2CK,$$
from which and inequality (31) we derive that
$$\bar{\mathbb{P}}\big(|u(T,x)|\le\delta\big)\ge\frac14\,e^{-2\lambda(D_{\mathbb{R}}(R))T-C(R)}\,\exp\big(-(2CK)^2\big).$$
This completes the proof of Theorem 2.10. □
Proof of Theorem 2.11. The beginning of the proof uses arguments similar to those in the proof of Theorem 2.8. Here, we only list the points where the proofs differ. Let us consider the following SPDE:
$$du(t,x)=Au(t,x)\,dt+\int_{\mathbb{R}}B\,\sigma(u(t,x))\,c(z)\,\tilde\mu(dz,dt), \qquad u(0,x)=x\in H. \tag{34}$$
Let $\tilde R>0$ be as in Corollary A.4-(2). We start by choosing a number $R>\tilde R\vee r_0$ such that
$$\frac{3}{\delta^2}\,C\,T\,R^{1-\frac{2}{\alpha}}\,C_\sigma(1+K)\le\frac12, \tag{35}$$
where
$$K:=\bigg(\int_{\mathbb{R}\setminus D_{\mathbb{R}}(R)}|c(z)|^2\,\lambda(dz)\bigg)\int_0^T\big\|e^{(T-s)A}B\big\|^2_{L(\mathbb{R},H)}\,ds.$$
By the assumptions on $B$, $\nu$ and $\sigma$, $K$ is finite. The next difference is that one has to find a predictable transformation $\theta^{(R)}\colon\Omega\times[0,T]\times\mathbb{R}\to\mathbb{R}$ such that
$$\frac{v(s)}{\sigma(u(s,x))}+g_R=\int_{\mathbb{R}\setminus B_{\mathbb{R}}(\rho)}\big[c(\theta^{(R)}(s,z))-c(z)\big]\,\lambda(dz), \quad s\in[0,T].$$
Again, the existence of such a transformation is given by Corollary A.3. In fact, letting
$$G(s):=\frac{v(s)}{\sigma(u(s,x))}+g_R,$$
we see from Corollary A.3 that there exists $\vartheta_R\colon\mathbb{R}\times\mathbb{R}\setminus\{0\}\to\mathbb{R}$ such that
$$G(s)=\int_{\mathbb{R}\setminus B_{\mathbb{R}}(\rho)}\big[c(\vartheta_R(G(s),z))-c(z)\big]\,\lambda(dz), \quad s\in[0,T],$$
so we take $\theta^{(R)}$ such that for any $s\in[0,T]$ and $z\in\mathbb{R}\setminus\{0\}$
$$\theta^{(R)}(s,z):=\vartheta_R\Big(\frac{v(s)}{\sigma(u(s,x))}+g_R,\;z\Big).$$
Again, we will write $\theta$ instead of $\theta^{(R)}$ for the rest of the proof.
Let $\mu_\theta$ be the random measure defined by
$$\mu_\theta\colon\bar\Omega\times\mathcal{B}(\mathbb{R})\otimes\mathcal{B}([0,T])\ni(\omega,A\times I)\mapsto\int_{\mathbb{R}}\int_I\mathbf{1}_A(\theta(s,z))\,\mu(\omega,dz,ds),$$
and let $\mathbb{Q}_\theta$ be a probability measure on $\bar{\mathfrak{A}}$ such that $\mu_\theta$ has compensator $\gamma$. The process $u^\theta_\mu$ defined by
$$du^\theta_\mu(t,x)=Au^\theta_\mu(t,x)\,dt+\int_{\mathbb{R}}B\,c(z)\,\sigma(u(t,x))\,(\mu_\theta-\gamma)(dz,dt), \qquad u^\theta_\mu(0,x)=x, \tag{36}$$
has under $\mathbb{Q}_\theta$ the same law as $u$. In particular, we have
$$\mathbb{E}_{\mathbb{P}}\big[\mathbf{1}_{[0,\delta]}(|u(T,x_0)-y|)\big]=\mathbb{E}_{\mathbb{Q}_\theta}\big[\mathbf{1}_{[0,\delta]}(|u^\theta_\mu(T,x_0)-y|)\big].$$
On the other hand, we know that under $\bar{\mathbb{P}}$ the process $u^\theta_\mu$ solves
$$du^\theta_\mu(t,x)=Au^\theta_\mu(t,x)\,dt+\int_{\mathbb{R}}B\,\sigma(u^\theta_\mu(t,x))\,[c(\theta(t,z))-c(z)]\,(\mu-\gamma)(dz,dt)+\int_{\mathbb{R}}B\,\sigma(u^\theta_\mu(t,x))\,[c(\theta(t,z))-c(z)]\,\gamma(dz,dt)+\int_{\mathbb{R}}B\,\sigma(u^\theta_\mu(t,x))\,c(z)\,(\mu-\gamma)(dz,dt), \qquad u^\theta_\mu(0,x)=x. \tag{37}$$
Hence, as before, we can write
$$\mathbb{P}(|u(T,x)-y|\le\delta)=\mathbb{Q}_\theta(|u^\theta_\mu(T,x)-y|\le\delta)=\mathbb{E}_{\mathbb{Q}_\theta}\big[\mathbf{1}_{|u^\theta_\mu(T,x)-y|\le\delta}\big]=\mathbb{E}_{\bar{\mathbb{P}}}\big[G_\theta(T)\,\mathbf{1}_{|u^\theta_\mu(T,x)-y|\le\delta}\big].$$
Similarly to (26) we obtain
$$\bar{\mathbb{P}}(|u(T,x)-y|\le\delta)\ge\frac{\Big(\mathbb{E}_{\bar{\mathbb{P}}}\big[\mathbf{1}_{|u^\theta_\mu(T,x)-y|\le\delta}\big]\Big)^2}{\mathbb{E}_{\bar{\mathbb{P}}}\big[|G^{-1}_\theta(T)|\big]}.$$
As in the proof of Theorem 2.8 we want to find a positive lower bound for the right hand side of the above inequality. First note that, thanks to Hypothesis 4, we can show by arguing as in the proof of inequality (28) that the denominator $\mathbb{E}_{\bar{\mathbb{P}}}|G^{-1}_\theta(T)|$ satisfies an estimate which is very similar to (28). In fact, to establish (27) we used that $v$ is deterministic; here, using the fact that $\sigma$ is bounded from below and above, estimate (28) can be shown similarly. To estimate the numerator we closely follow the lines of the proof of Theorem 2.8. To be more precise, we obtain first
$$\mathbb{E}_{\bar{\mathbb{P}}}\big[\mathbf{1}_{|u^\theta_\mu(T,x)-y|\le\delta}\big]\ge\mathbb{E}_{\bar{\mathbb{P}}}\big[\mathbf{1}_{|u_c(T,x,v)-y|\le\frac{\delta}{3}}\,\mathbf{1}_{|u_c(T,x,v)-u^\theta_\mu(T,x)|\le\frac{\delta}{3}}\big]=\mathbb{E}_{\bar{\mathbb{P}}}\big[\mathbf{1}_{|u_c(T,x,v)-u^\theta_\mu(T,x)|\le\frac{\delta}{3}}\big]. \tag{38}$$
By rewriting the difference $\Delta(T)=u^\theta_\mu(T,x)-u_c(T,x,v)$ as
$$\Delta(T)=\int_0^T\!\!\int_{\mathbb{R}}e^{(T-s)A}B\,\sigma(u(s,x))\,c(\theta(s,z))\,(\mu-\gamma)(dz,ds)+\int_0^T e^{(T-s)A}B\,\sigma(u(s,x))\,g_R\,ds,$$
we infer that
$$\mathbb{E}_{\bar{\mathbb{P}}}\big[\mathbf{1}_{|u^\theta_\mu(T,x)-y|\le\delta}\big]\ge\mathbb{E}_{\bar{\mathbb{P}}}\big[\mathbf{1}_{|\Delta(T)|\le\frac{\delta}{3}}\big].$$
Let us deﬁne
∆(T) := ZT
0ZR\DR(R)
e(Ts)A(u(s, x))c(θ(s, z))(µγ)(dz, ds)
,
and
I(T) := ¯
P ∆(T)>δ
3µ(DR(R)×[0, T ]) = 0!
Arguing as in the proof of Theorem 2.8 we also obtain
\[
\bar{\mathbb{P}}\Big( |\Delta(T)| \le \frac{\delta}{3} \Big) \ge \bar{\mathbb{P}}\big( \mu(D_R(R)\times[0,T]) = 0 \big) \times \bar{\mathbb{P}}\Big( \Delta(T) \le \frac{\delta}{3} \;\Big|\; \mu(D_R(R)\times[0,T]) = 0 \Big)
\ge \bar{\mathbb{P}}\big( \mu(D_R(R)\times[0,T]) = 0 \big) \times [1 - I(T)].
\]
Now we need to prove that, due to the choice of $R$, we have $I(T)\le\frac12$. To do so we apply the Chebyshev inequality to $I(T)$ and use the fact that the random variables $\mu(D_R(R)\times[0,T])$ and $\mu(A\times[0,T])$, $A\in\mathcal{B}(\mathbb{R}\setminus D_R(R))$, are independent. Thus we obtain
\[
I(T) \le \frac{3^2}{\delta^2} \times \mathbb{E}_{\bar{\mathbb{P}}}\, \Big| \int_0^T \int_{\mathbb{R}\setminus D_R(R)} e^{(T-s)A} B\sigma(u(s,x))\, c(\theta(s,z))\, (\mu-\gamma)(dz,ds) \Big|^2.
\]
The Burkholder inequality gives
\[
I(T) \le \frac{3^2}{\delta^2} \times \mathbb{E}_{\bar{\mathbb{P}}} \Big[ \int_0^T \int_{\mathbb{R}\setminus D_R(R)} \big| e^{(T-s)A} B\sigma(u(s,x))\, c(z) \big|^2\, \lambda(dz)\, ds \Big].
\]
Since $c(\theta(t,z)) \le c(z)$ and $\sigma$ is bounded (see Hypothesis 4), we get
\[
\ldots \le \frac{3^2\, C}{\delta^2} \times T R^{1-\frac{2}{\alpha}}\, C_\sigma \Big( 1 + \mathbb{E}_{\bar{\mathbb{P}}} \Big[ \int_0^T \int_{\mathbb{R}\setminus D_R(R)} \| e^{(T-s)A} B \|^2_{L(\mathbb{R},H)}\, |c(z)|^2\, \lambda(dz)\, ds \Big] \Big)
\]
\[
\le \frac{3^2\, C}{\delta^2} \times T R^{1-\frac{2}{\alpha}}\, C_\sigma \Big( 1 + \int_{\mathbb{R}\setminus D_R(R)} |c(z)|^2\, \lambda(dz) \int_0^T \| e^{(T-s)A} B \|^2_{L(\mathbb{R},H)}\, ds \Big).
\]
Therefore, collecting all the estimates together,
\[
\mathbb{E}_{\bar{\mathbb{P}}}\big[ 1_{|u_c(T,x,v)-u^\theta_\mu(T,x)|\le\frac{\delta}{3}} \big] \ge C(R) \Big( 1 - \frac{3^2}{\delta^2}\, C\, T R^{1-\frac{2}{\alpha}}\, C_\sigma (1+K) \Big).
\]
Since $R$ satisfies inequality (35) we get
\[
\mathbb{E}_{\bar{\mathbb{P}}}\big[ 1_{|u^\theta_\mu(T,x)-y|\le\delta} \big] \ge \frac12\, e^{-\lambda(D_R(R))T}\, C(R).
\]
Hence, we have shown that
\[
\mathbb{P}(|u(T,x)-y|\le\delta) \ge \frac12\, e^{-\lambda(D_R(R))T}\, C(R)\, \exp\big( -|v|^2_{L^2(0,T;\mathbb{R})} \big),
\tag{39}
\]
which concludes the proof of Theorem 2.11.
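The Chebyshev step used above, bounding the probability that a compensated small-jump integral exceeds $\delta/3$ by its second moment, can be illustrated numerically. The following sketch uses a toy compensated compound Poisson variable in place of the stochastic integral; the intensity, jump law and threshold are illustrative choices, not quantities from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the compensated small-jump integral:
# X = sum of Poisson-many mean-zero jumps, so E[X] = 0 and
# P(|X| > a) <= E[X^2] / a^2 by the Chebyshev inequality.
lam, T = 2.0, 1.0          # jump intensity and time horizon (illustrative)
n_samples = 20000

def sample_compensated(rng):
    n = rng.poisson(lam * T)
    jumps = rng.uniform(-1.0, 1.0, size=n)  # mean-zero jump sizes
    return jumps.sum()

X = np.array([sample_compensated(rng) for _ in range(n_samples)])
second_moment = lam * T / 3.0      # E[X^2] = lam*T*E[xi^2], E[xi^2] = 1/3
a = 2.0                            # threshold (plays the role of delta/3)
chebyshev_bound = second_moment / a ** 2
empirical = float((np.abs(X) > a).mean())
print(empirical, chebyshev_bound)
```

As expected, the empirical tail frequency sits well below the second-moment bound, which is the only property the proof needs.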
Proof of Theorem 2.13. Let $x\in D_H(C)$ and let $u_c$ be the solution to
\[
\dot u_c(t,x,v) = A u_c(t,x,v) + Bv(t), \quad t\ge 0, \qquad u_c(0,x,v) = x.
\tag{40}
\]
In order to prove Theorem 2.13 we use the same arguments as in the proof of Theorem 2.9. First, we choose $R$ in such a way that inequality (35) holds for all initial conditions $x\in D_H(C)$. Secondly, we show that the right-hand side of inequality (39) can be bounded from below uniformly over $x\in D_H(C)$. That is, we have to show that there exists a constant $C^\star>0$ such that for any $x\in D_H(C)$ there exists a control $v_x\in L^2(0,T;\mathbb{R})$ with $u_c(T,x,v_x)=0$ and
\[
\|v_x\|_{L^2(0,T;\mathbb{R})} \le C^\star.
\]
However, this constant is again provided by the continuity properties of system (40). Now one can proceed as in the proof of Theorem 2.8.
3. Uniqueness of the invariant measure and the asymptotic strong Feller property
Let $u$ be the solution of the following stochastic evolution equation:
\[
\begin{cases}
du(t,x) = A u(t,x)\, dt + \int_{\mathbb{R}} B\sigma(u(t,x))\, z\, \tilde\eta(dz,dt), \\
u(0,x) = x\in H.
\end{cases}
\tag{41}
\]
If (41) admits a unique $H$-valued cadlag solution, then the Markovian semigroup $(\mathcal{P}_t)_{t\ge0}$ defined by
\[
\mathcal{P}_t\phi(x) := \mathbb{E}\,\phi(u(t,x)), \qquad \phi\in C_b(H),\ t\ge0,
\]
is a stochastically continuous Feller semigroup on $C_b(H)$. That is, $(\mathcal{P}_t)_{t\ge0}$ satisfies (see [11]):
(1) $\mathcal{P}_t\mathcal{P}_s = \mathcal{P}_{t+s}$;
(2) for all $\phi\in C_b(H)$ and for all $x\in H$ we have $\lim_{t\to0}\mathcal{P}_t\phi(x) = \phi(x)$.
Since Item (1) is clear we only check Item (2). For this purpose let $\epsilon>0$ and $\phi\in C_b(H)$ with $\|\phi\|_\infty = 1$. Item (2) follows from the fact that $\lim_{t\to0}\mathbb{E}\phi(u(t,x)) = \phi(x)$; equivalently, for all $\epsilon>0$ there exists $\delta>0$ such that $|\mathbb{E}\phi(u(t,x)) - \phi(x)|\le\epsilon$ for all $0\le t<\delta$. Since $\phi$ is uniformly continuous on all compact subsets $K$ of $H$ and the law of $u$ is tight in $\mathbb{D}([0,T];H)$, there exists a compact set $K_\epsilon$ such that $\mathbb{P}\big( u(t)\notin K_\epsilon \text{ for some } 0\le t\le T \big) \le \frac{\epsilon}{6}$ (compare [15, Remark 7.3]), and there exists a $\delta_1>0$ such that $|\phi(x)-\phi(y)|\le\frac{\epsilon}{3}$ for all $x,y\in K_\epsilon$ with $|x-y|_H\le\delta_1$. Then, for $t\le\delta := \frac{\epsilon}{6}\,\delta_1^2$, the Chebyshev inequality yields $\mathbb{P}(|u(t,x)-x|_H\ge\delta_1)\le\frac{\epsilon}{6}$. Hence, since $\|\phi\|_\infty = 1$,
\[
\mathbb{E}|\phi(u(t,x)) - \phi(x)| \le \mathbb{E}\big[ |\phi(u(t,x)) - \phi(x)|\, 1_{u(t,x)\notin K_\epsilon} \big] + \mathbb{E}\big[ |\phi(u(t,x)) - \phi(x)|\, 1_{u(t,x)\in K_\epsilon}\, 1_{\{|u(t,x)-x|\ge\delta_1\}} \big]
\]
\[
+ \mathbb{E}\big[ |\phi(u(t,x)) - \phi(x)|\, 1_{u(t,x)\in K_\epsilon}\, 1_{\{|u(t,x)-x|<\delta_1\}} \big] \le \frac{\epsilon}{3} + \frac{\epsilon}{3} + \frac{\epsilon}{3} = \epsilon,
\]
from which it follows that the Markovian semigroup $(\mathcal{P}_t)_{t\ge0}$ on $C_b(H)$ is stochastically continuous.
Since in this section we are interested in the existence of an invariant measure, we assume, in addition to Hypothesis 1-Hypothesis 4, that the semigroup generated by $A$ is of contraction type. To be more precise, we suppose that we have the following set of conditions.

Hypothesis 5. We assume that $A$ is the infinitesimal generator of a contraction type $C_0$-semigroup $\{e^{tA}, t\ge0\}$ on $H$, i.e., there exist $\omega>0$ and a constant $M>0$ such that for any $t\ge0$
\[
\| e^{tA} \|_{L(H,H)} \le M e^{-\omega t}.
\]
In addition, we suppose that there exists a Hilbert space $H_1$ such that the embedding $H_1\hookrightarrow H$ is compact and $A$ generates a contraction type $C_0$-semigroup $\{e^{tA}, t\ge0\}$ on $H_1$. Moreover, we assume that there exists a constant $K_B$ such that
\[
\int_0^\infty \| e^{sA} B \|^2_{L(\mathbb{R},H_1)}\, ds < K_B.
\tag{42}
\]
Under Hypothesis 1-Hypothesis 4 and the additional Hypothesis 5, the existence of an invariant measure can be shown by an application of the Krylov-Bogoliubov theorem (see [14, Theorem 3.1.1]). First, we define for $T>0$ and $x\in H$ the following probability measure on $(H,\mathcal{B}(H))$:
\[
\mathcal{B}(H)\ni\Gamma \mapsto R_T(x,\Gamma) := \frac{1}{T} \int_0^T \mathcal{P}_t(x,\Gamma)\, dt.
\tag{43}
\]
In addition, for any $\rho\in\mathcal{M}_1(H)$, let $R^\ast_T\rho$ be defined by
\[
\mathcal{B}(H)\ni\Gamma \mapsto \int_H R_T(x,\Gamma)\, \rho(dx).
\]
Corollary 3.1.2 in [14] says that if, for some probability measure $\rho$ on $(H,\mathcal{B}(H))$ and some sequence $T_n\uparrow\infty$, the sequence $\{R^\ast_{T_n}\rho : n\in\mathbb{N}\}$ is tight and, in addition, the Markovian semigroup $(\mathcal{P}_t)_{t\ge0}$ is also a Feller semigroup, then there exists an invariant measure for $(\mathcal{P}_t)_{t\ge0}$. This means that for the existence of an invariant measure it is sufficient to show that for all $\epsilon>0$ and all $x\in H$ there exist a compactly embedded subspace $E\hookrightarrow H$ and an $R>0$ such that for all $T>0$
\[
\frac{1}{T} \int_0^T \mathbb{P}\big( |u(t,x)|_E \ge R \big)\, dt < \epsilon.
\tag{44}
\]
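The tightness criterion (44) has a transparent finite-dimensional analogue: for a stable linear equation driven by jump noise, the time-averaged occupation measures $R_T(x,\cdot)$ keep essentially all of their mass inside a fixed ball. The sketch below simulates such a toy equation; the dynamics, jump law and radius are illustrative assumptions, not the equation of the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Euler scheme for a toy one-dimensional analogue:
# dX = -X dt + dL, with L a compound Poisson process (illustrative choice).
T = 200.0
n_steps = 20000
dt = T / n_steps
lam = 1.0                  # jump intensity
x = 5.0                    # initial condition away from equilibrium
path = np.empty(n_steps)
for i in range(n_steps):
    jump = rng.normal(0.0, 1.0) if rng.random() < lam * dt else 0.0
    x = x - x * dt + jump
    path[i] = x

# Empirical analogue of (44): fraction of time the path spends
# outside the ball of radius R.
R = 5.0
time_avg_outside = float(np.mean(np.abs(path) > R))
print(time_avg_outside)
```

The dissipative drift dominates the occasional jumps, so the time average in (44) can be made arbitrarily small by enlarging $R$, which is exactly what the Krylov-Bogoliubov argument exploits.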
Observe that, if $A$ generates a contraction type $C_0$-semigroup $(e^{tA})_{t\ge0}$ on $E$, then
\[
\int_0^\infty \big\| e^{tA} \big\|^2_{L(E,E)}\, dt < \infty.
\]
By Hypothesis 5 we can take $E = H_1$. Thanks to the boundedness of $\sigma$ (see Hypothesis 4) and Eq. (42) of Hypothesis 5, we can show by a direct calculation that the following estimate holds:
\[
\sup_{0\le t\le T} \mathbb{E}\, |u(t,x)|^2_{H_1} \le C, \qquad T\ge0.
\tag{45}
\]
Then the Markovian semigroup $(\mathcal{P}_t)_{t\ge0}$ admits an invariant measure. In fact, the Chebyshev inequality implies that for any $R>0$
\[
\frac{1}{T} \int_0^T \mathbb{P}\big( |u(t,x)|_{H_1} \ge R \big)\, dt \le \frac{1}{T} \int_0^T \frac{ \mathbb{E}|u(t,x)|^2_{H_1} }{ R^2 }\, dt \le \frac{C}{R^2},
\]
from which we easily deduce inequality (44) by taking $R > \big( \tfrac{C}{\epsilon} \big)^{\frac12}$.
By [17, Corollary 3.17] we know that if the semigroup is asymptotically strong Feller (for the definition of the asymptotic strong Feller property we refer to [17]) and there exists a point $x\in H$ such that $x\in\mathrm{supp}(\varphi)$ whenever $\varphi$ is an invariant measure of the Markovian semigroup $(\mathcal{P}_t)_{t\ge0}$, then the semigroup $(\mathcal{P}_t)_{t\ge0}$ admits at most one invariant measure.
Assume for the time being that the semigroup $(\mathcal{P}_t)_{t\ge0}$ is asymptotically strong Feller. Then it remains to prove that there exists an $x\in H$ with $x\in\mathrm{supp}(\varphi)$. The two following properties imply that $0\in\mathrm{supp}(\varphi)$⁴ whenever $\varphi$ is an invariant measure.
• There exists a constant $C>0$ such that
\[
\inf_{\{\rho \text{ is an invariant measure}\}} \rho\big( \bar D_H(C) \big) > 0.
\tag{46}
\]
• For all $\delta>0$ and all $x\in D_H(C)$ there exist a time $T_\delta>0$ and some $\kappa = \kappa(C,\delta)>0$, depending only on $C$ and $\delta$, such that
\[
\mathbb{P}\big( u(T_\delta,x)\in D_H(\delta) \big) \ge \kappa.
\tag{47}
\]
Indeed, since $\varphi$ is invariant we have
\[
\varphi\big( D_H(\delta) \big) \ge \kappa \cdot \inf_{\{\rho \text{ is an invariant measure}\}} \rho\big( \bar D_H(C) \big).
\]
Now, estimates (46) and (47) imply $0\in\mathrm{supp}(\varphi)$. Inequality (47) follows from Theorem 2.13. Estimate (46) follows from the fact that for any invariant measure $\rho$ of the semigroup $(\mathcal{P}_t)_{t\ge0}$ there exists a constant $C>0$ such that
\[
\int_H |u|^2_H\, \rho(du) \le C.
\tag{48}
\]
Estimate (48) follows along the same lines as [24, Lemma B.1], using the a priori estimate (45). Now, an application of the Chebyshev inequality leads to inequality (46).
It remains to investigate under which conditions the semigroup satisfies the asymptotic strong Feller property. But before continuing further we introduce an additional concept of controllability.
⁴Note that $x\in\mathrm{supp}(\varphi)$ iff for all $\delta>0$, $\varphi(D_H(x,\delta))>0$.
Definition 3.1. We say that the system
\[
\dot u_c(t,x,v) = A u_c(t,x,v) + Bv(t), \quad t\ge0, \qquad u_c(0,x,v) = x,
\]
is null controllable with vanishing energy (see [26, 28]) if it is null controllable and for any $x\in H$ there exist a sequence of times $\{t_n\ge0 : n\in\mathbb{N}\}$ and a sequence of controls $\{v_n : n\in\mathbb{N}\}\subset L^2([0,t_n];\mathbb{R})$ such that $u_c(t_n,x,v_n) = 0$ for any $n\in\mathbb{N}$ and
\[
\lim_{n\to\infty} \int_0^{t_n} |v_n(s)|^2_{\mathbb{R}}\, ds = 0.
\]
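Null controllability with vanishing energy can be checked by hand for a scalar stable system $\dot x = ax + v$ with $a<0$: the minimal-energy control steering $x_0$ to $0$ at time $T$ has energy $(e^{aT}x_0)^2/W(T)$, where $W(T)=\int_0^T e^{2as}\,ds$ is the controllability Gramian, and this quantity tends to $0$ as $T\to\infty$. A sketch under these assumptions (the value $a=-1$ and all numerical parameters are illustrative):

```python
import numpy as np

a, x0 = -1.0, 1.0   # stable scalar system dx/dt = a*x + v (illustrative values)

def gramian(T):
    # Controllability Gramian W(T) = int_0^T e^{2 a s} ds
    return (np.exp(2 * a * T) - 1.0) / (2 * a)

def min_energy(T):
    # Minimal L^2 energy of a control steering x0 to 0 at time T
    return (np.exp(a * T) * x0) ** 2 / gramian(T)

def steer(T, steps=20000):
    # Euler check: the minimal-energy control really drives the state to ~0
    W, dt, x = gramian(T), T / steps, x0
    for i in range(steps):
        s = i * dt
        v = -np.exp(a * (T - s)) / W * np.exp(a * T) * x0
        x += (a * x + v) * dt
    return x

energies = [min_energy(T) for T in (1.0, 3.0, 5.0, 10.0)]
print(energies)        # decreasing towards 0: "vanishing energy"
print(steer(5.0))      # approximately 0
```

The sequence $t_n = n$ with the corresponding minimal-energy controls thus witnesses the definition for this toy system.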
Now we are ready to present our result on the asymptotic strong Feller property.
Theorem 3.2. We assume that Hypothesis 1 to Hypothesis 5 are satisﬁed. If the system (11)
is null controllable with vanishing energy, and (41) admits an H–valued cadlag solution, then
the Markovian semigroup of system (41) is asymptotically strong Feller.
Combining Theorem 3.2 with the existence of an invariant measure established above, one can prove the following corollary.
Corollary 3.3. Assume that all the assumptions of Theorem 3.2 are satisﬁed. Then, the Mar-
kovian semigroup (Pt)t0generated by the solution to system (41) admits a unique invariant
measure.
A similar result also holds for system (12) when the solution is not cadlag. In that case the Markovian semigroup need not be Feller, and therefore the Krylov-Bogoliubov theorem is not applicable; nevertheless, one can prove a result similar to Theorem 3.2.
Theorem 3.4. We assume that Hypothesis 1 to Hypothesis 4 are satisﬁed. If the system (11)
is null controllable with vanishing energy, then the Markovian semigroup of system (12) is
asymptotically strong Feller.
Corollary 3.5. Assume that all assumptions of Theorem 3.4 are satisﬁed. Then, the Markov-
ian semigroup (Pt)t0generated by the solution to system (12), admits at most one invariant
measure.
Proof of Theorem 3.2: Again, we will switch for technical reasons to another representation of the Poisson random measure. Let $\mathfrak{A} = (\bar\Omega, \bar{\mathcal{F}}, \bar{\mathbb{F}}, \bar{\mathbb{P}})$ be a filtered probability space with filtration $\bar{\mathbb{F}} = (\bar{\mathcal{F}}_t)_{t\ge0}$ and let $\mu$ be a Poisson random measure on $\mathbb{R}$ over $\mathfrak{A}$ having intensity measure $\lambda$. The compensator of $\mu$ is denoted by $\gamma$ and given by
\[
\mathcal{B}(\mathbb{R})\times\mathcal{B}([0,\infty)) \ni A\times I \mapsto \gamma(A\times I) := \lambda(A)\,\lambda(I).
\]
Let
\[
c : \mathbb{R}^+ \ni r \mapsto \sup\Big\{ \rho>0 : \int_\rho^\infty k(s)\, ds \ge r \Big\} \quad\text{if } r>0.
\tag{49}
\]
Now, the stochastic evolution equation given in (7) reads as follows:
\[
\begin{cases}
du(t,x) = A u(t,x)\, dt + \int_{\mathbb{R}} B\sigma(u(t,x))\, c(z)\, \tilde\mu(dz,dt), \\
u(0,x) = x \in H.
\end{cases}
\tag{50}
\]
We split the proof into several steps.
Step I: Fix $x\in H$. In order to show the asymptotic strong Feller property of $(\mathcal{P}_t)_{t\ge0}$ at $x$, we have to show that there exist an increasing sequence $\{t_n : n\in\mathbb{N}\}$ and a totally separating sequence of pseudometrics $\{d_n : n\in\mathbb{N}\}$⁵ such that⁶
\[
\lim_{\epsilon\to0^+} \limsup_{n\to\infty} \sup_{y\in D(x,\epsilon)} \| \mathcal{P}_{t_n}(x,\cdot) - \mathcal{P}_{t_n}(y,\cdot) \|_{d_n} = 0.
\tag{51}
\]
Let $\{a_n : n\in\mathbb{N}\}$ be a sequence of positive real numbers converging to zero and let
\[
d_n(y,z) := 1 \wedge \big( |z-y|_H / a_n \big), \qquad z,y\in H,\; n\in\mathbb{N}.
\]
Fix $h\in H$ with $|h|=1$. Since system (11) is linear and null controllable with vanishing energy, we can find a sequence of times $\{t_n : n\in\mathbb{N}\}$ and controls $\{v_n : n\in\mathbb{N}\}$ such that $a_n^2 \ge \int_0^{t_n} |v_n(s)|^2\, ds$, $n\in\mathbb{N}$, and the solution $u_c$ to
\[
du_c(t,x,v_n) = A u_c(t,x,v_n)\, dt + B v_n(t)\, dt, \quad 0\le t\le t_n, \qquad u_c(0,x,v_n) = x,
\]
satisfies $u_c(t_n,x,v_n) = u_c(t_n,x+h,0)$. For simplicity, put $y = x+\epsilon h$ and $v^n_\epsilon := \epsilon\cdot v_n$. Then it follows by linearity that $u_c(t_n,x,\epsilon v_n) = u_c(t_n,x+\epsilon h,0) = u_c(t_n,y,0)$. In order to estimate $\|\mathcal{P}_{t_n}(x,\cdot) - \mathcal{P}_{t_n}(y,\cdot)\|_{d_n}$ in terms of $\epsilon$ and $n$, we define the following sequence of continuous functions. Given $\phi\in C_b(H)$, there exists a sequence $\{\phi_n : n\in\mathbb{N}\}$, $\phi_n\in C_b(H)$, such that $\phi_n\to\phi$ pointwise, $\|\phi_n\|_\infty \le \|\phi\|_\infty$ and $\|\phi_n\|_{d_n}\le 1$ for all $n\in\mathbb{N}$. Now the following identity holds:
\[
\| \mathcal{P}_{t_n}(x,\cdot) - \mathcal{P}_{t_n}(y,\cdot) \|_{d_n} = \mathbb{E}_{\bar{\mathbb{P}}}[\phi_n(u(t_n,x+\epsilon h))] - \mathbb{E}_{\bar{\mathbb{P}}}[\phi_n(u(t_n,x))].
\]
Hence, we have to show that
\[
\lim_{\epsilon\to0^+} \limsup_{n\to\infty} \sup_{y\in D(x,\epsilon)} \Big| \mathbb{E}_{\bar{\mathbb{P}}}[\phi_n(u(t_n,x+\epsilon h))] - \mathbb{E}_{\bar{\mathbb{P}}}[\phi_n(u(t_n,x))] \Big| = 0.
\tag{52}
\]
This will be the subject of the second step of the proof.
Step II: Let us introduce a transformation $\vartheta^n_\epsilon : [0,\infty)\times\mathbb{R}\to\mathbb{R}$ such that
\[
\int_{\mathbb{R}} \big( c(z) - c(\vartheta^n_\epsilon(s,z)) \big)\, \lambda(dz) = \frac{v^n_\epsilon(s)}{\sigma(u(s,x))} \quad\text{for all } s\in[0,t_n].
\]
Note that Hypothesis 1 implies that there exist an index $\alpha\in(1,2)$, constants $K_1,K_2>0$ and $r_0>0$ such that
\[
K_1 |z|^{-\frac{1}{\alpha}} \le c(z) \le K_2 |z|^{-\frac{1}{\alpha}} \quad\text{for all } |z|\ge r_0.
\]
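The two-sided bound on $c$ is a generalized-inverse computation: if the Lévy density behaves like $k(s) = \alpha s^{-\alpha-1}$, its tail is $\int_\rho^\infty k(s)\,ds = \rho^{-\alpha}$, and the definition of $c$ in (49) then gives $c(r) = r^{-1/\alpha}$. A quick numerical check under this illustrative choice of density (not the general density of the paper):

```python
import math

alpha = 1.5   # stability-like index in (1, 2), illustrative choice

def tail(rho):
    # Tail of the illustrative Levy density k(s) = alpha * s^(-alpha-1):
    # int_rho^infty k(s) ds = rho^(-alpha)
    return rho ** (-alpha)

def c(r):
    # Generalized inverse as in (49): c(r) = sup{rho > 0 : tail(rho) >= r},
    # computed by geometric bisection (tail is strictly decreasing)
    lo, hi = 1e-9, 1e9
    for _ in range(200):
        mid = math.sqrt(lo * hi)
        if tail(mid) >= r:
            lo = mid
        else:
            hi = mid
    return lo

for r in (0.5, 2.0, 10.0):
    print(c(r), r ** (-1.0 / alpha))   # the two columns agree
```

For densities that are merely comparable to $s^{-\alpha-1}$ for large $s$, the same computation yields the two-sided bound with constants $K_1, K_2$.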
⁵An increasing sequence $\{d_n : n\in\mathbb{N}\}$ of pseudometrics is called a totally separating system of pseudometrics for $X$ if $\lim_{n\to\infty} d_n(z,y) = 1$ for all $z,y\in X$, $z\ne y$.
⁶Let $d$ be a pseudometric on $X$; we denote by $L(X,d)$ the space of $d$-Lipschitz functions from $X$ into $\mathbb{R}$. That is, the function $\phi : X\to\mathbb{R}$ is an element of $L(X,d)$ if
\[
\|\phi\|_d := \sup_{z,y\in X,\; z\ne y} \frac{|\phi(z)-\phi(y)|}{d(z,y)} < \infty.
\]
For a pseudometric $d$ on $X$ we define the distance between two probability measures $\mathcal{P}_1$ and $\mathcal{P}_2$ with respect to $d$ by
\[
\| \mathcal{P}_1 - \mathcal{P}_2 \|_d := \sup_{\phi\in L(X,d),\, \|\phi\|_d = 1} \int_X \phi(x)\, (\mathcal{P}_1 - \mathcal{P}_2)(dx).
\]
Now, let $\tilde R>0$ be as in Corollary A.4-(2) and put $R = \tilde R \vee r_0$. In fact, by Corollary A.3 such a transformation $\vartheta^n_\epsilon$ exists and is given by
\[
\vartheta^n_\epsilon : [0,\infty)\times\mathbb{R} \ni (s,z) \mapsto \vartheta_R\big( v^n_{\epsilon,\sigma}(s), z \big),
\tag{53}
\]
where
\[
v^n_{\epsilon,\sigma}(s) = \frac{v^n_\epsilon(s)}{\sigma(u(s,x))}.
\]
Let $\mu^n_\epsilon$ be the random measure defined by
\[
\mathcal{B}(\mathbb{R})\times\mathcal{B}([0,t]) \ni A\times I \mapsto \mu^n_\epsilon(A\times I) = \int_I \int_{\mathbb{R}} 1_A(\vartheta^n_\epsilon(s,z))\, \mu(dz,ds).
\tag{54}
\]
Let $Q_{\epsilon,n}$ be the probability measure on $\mathfrak{A}$ such that the intensity measure of $\mu^n_\epsilon$ is $\lambda$. Let $u^n_\epsilon$ be the solution to
\[
\begin{cases}
du^n_\epsilon(t,x) = A u^n_\epsilon(t,x)\, dt + \int_{\mathbb{R}} B\sigma(u(t,x))\, [c(\vartheta^n_\epsilon(t,z)) - c(z)]\, \mu(dz,dt) + \int_{\mathbb{R}} B\sigma(u(t,x))\, c(z)\, (\mu-\gamma)(dz,dt), \\
u^n_\epsilon(0,x) = x,
\end{cases}
\]
and let $u^c_{\mu,n,\epsilon}$ be the solution to
\[
\begin{cases}
du^c_{\mu,n,\epsilon}(t,x,v^n_{\epsilon,\sigma}) = A u^c_{\mu,n,\epsilon}(t,x,v^n_\epsilon)\, dt + B v^n_{\epsilon,\sigma}(t)\, dt + \int_{\mathbb{R}} B\sigma(u(t,x))\, c(z)\, (\mu-\gamma)(dz,dt), \\
u^c_{\mu,n,\epsilon}(0,x,v^n_\epsilon) = x.
\end{cases}
\]
Observe that, by the choice of the transformation $\vartheta^n_\epsilon$, under $Q_{\epsilon,n}$ the random variable $u^n_\epsilon(t_n,x)$ is identical in law to $u(t_n,x)$. In particular, we have
\[
\mathbb{E}_{Q^{\epsilon,n}_{t_n}}[\phi_n(u^n_\epsilon(t_n,x))] = \mathbb{E}_{\bar{\mathbb{P}}}[\phi_n(u(t_n,x))].
\]
From the choice of the control and the linearity of $A$ we also have $u^c_{\mu,n,\epsilon}(t_n,x,v^n_\epsilon) = u(t_n,x+\epsilon h)$. For $t\ge0$ let $Q^{\epsilon,n}_t$ be the restriction of $Q_{\epsilon,n}$ to $\bar{\mathcal{F}}_t$ and $\bar{\mathbb{P}}_t$ the restriction of $\bar{\mathbb{P}}$ to $\bar{\mathcal{F}}_t$. Now we are ready to estimate $\|\mathcal{P}_{t_n}(x,\cdot) - \mathcal{P}_{t_n}(y,\cdot)\|_{d_n}$. First, we write
\[
\| \mathcal{P}_{t_n}(x,\cdot) - \mathcal{P}_{t_n}(y,\cdot) \|_{d_n} = \mathbb{E}_{\bar{\mathbb{P}}}[\phi_n(u(t_n,x+\epsilon h))] - \mathbb{E}_{\bar{\mathbb{P}}}[\phi_n(u(t_n,x))],
\tag{55}
\]
and
\[
\mathbb{E}_{\bar{\mathbb{P}}}[\phi_n(u(t_n,x+\epsilon h))] - \mathbb{E}_{\bar{\mathbb{P}}}[\phi_n(u(t_n,x))]
\]
\[
= \mathbb{E}_{\bar{\mathbb{P}}}\big[ \phi_n\big( u^c_{\mu,n,\epsilon}(t_n,x,v^n_\epsilon) \big) \big] - \mathbb{E}_{\bar{\mathbb{P}}}[\phi_n(u^n_\epsilon(t_n,x))] + \mathbb{E}_{\bar{\mathbb{P}}}[\phi_n(u^n_\epsilon(t_n,x))] - \mathbb{E}_{\bar{\mathbb{P}}}[\phi_n(u(t_n,x))]
\]
\[
= \mathbb{E}_{\bar{\mathbb{P}}}\big[ \phi_n\big( u^c_{\mu,n,\epsilon}(t_n,x,v^n_\epsilon) \big) - \phi_n(u^n_\epsilon(t_n,x)) \big] + \mathbb{E}_{\bar{\mathbb{P}}}\Big[ \Big( 1 - \frac{dQ^{\epsilon,n}_{t_n}}{d\bar{\mathbb{P}}_{t_n}} \Big) \phi_n(u^n_\epsilon(t_n,x)) \Big] + \mathbb{E}_{Q_{\epsilon,n}}[\phi_n(u^n_\epsilon(t_n,x))] - \mathbb{E}_{\bar{\mathbb{P}}}[\phi_n(u(t_n,x))]
\]
\[
= \mathbb{E}_{\bar{\mathbb{P}}}\big[ \phi_n\big( u^c_{\mu,n,\epsilon}(t_n,x,v^n_\epsilon) \big) - \phi_n(u^n_\epsilon(t_n,x)) \big] + \mathbb{E}_{\bar{\mathbb{P}}}\Big[ \Big( 1 - \frac{dQ^{\epsilon,n}_{t_n}}{d\bar{\mathbb{P}}_{t_n}} \Big) \phi_n(u^n_\epsilon(t_n,x)) \Big].
\]
We have the following inequality:
\[
\Big| \mathbb{E}_{\bar{\mathbb{P}}}[\phi_n(u(t_n,x+\epsilon h))] - \mathbb{E}_{\bar{\mathbb{P}}}[\phi_n(u(t_n,x))] \Big|
\le \frac{1}{a_n}\, \mathbb{E}_{\bar{\mathbb{P}}}\big| u^c_{\mu,n,\epsilon}(t_n,x,v^n_\epsilon) - u^n_\epsilon(t_n,x) \big| + \|\phi_n\|_\infty\, \mathbb{E}_{\bar{\mathbb{P}}}\Big| 1 - \frac{dQ^{\epsilon,n}_{t_n}}{d\bar{\mathbb{P}}_{t_n}} \Big|
=: \frac{1}{a_n}\, I^\epsilon_1 + \|\phi_n\|_\infty\, I^\epsilon_2.
\]
By the construction of $u^n_\epsilon(t,x)$ and $u^c_{\mu,n,\epsilon}(t,x,v^n_\epsilon)$ we see that
\[
u^n_\epsilon(t_n,x) - u^c_{\mu,n,\epsilon}(t_n,x,v^n_\epsilon) = \int_0^{t_n} \int_{\mathbb{R}} e^{(t_n-s)A} B\sigma(u(s,x))\, \big[ c(z) - c(\vartheta^n_\epsilon(s,z)) \big]\, (\gamma-\mu)(dz,ds).
\]
Applying Hypothesis 4 we obtain
\[
\mathbb{E}_{\bar{\mathbb{P}}}\big| u^n_\epsilon(t_n,x) - u^c_{\mu,n,\epsilon}(t_n,x,v^n_\epsilon) \big| \le \mathbb{E}_{\bar{\mathbb{P}}} \int_0^{t_n} \Big| \int_{\mathbb{R}} e^{(t_n-s)A} B\sigma(u(s,x))\, \big[ c(z) - c(\vartheta^n_\epsilon(s,z)) \big]\, \lambda(dz) \Big|\, ds
\le \frac{1}{C_\sigma}\, \mathbb{E}_{\bar{\mathbb{P}}} \int_0^{t_n} \| e^{(t_n-s)A} B \|_{L(\mathbb{R},H)}\, |v^n_\epsilon(s)|\, ds.
\]
Applying Hypothesis 3 we derive that
\[
|I^\epsilon_1| \le \mathbb{E}_{\bar{\mathbb{P}}}\big| u^n_\epsilon(t_n,x) - u^c_{\mu,n,\epsilon}(t_n,x,v^n_\epsilon) \big| \le \frac{C}{C_\sigma} \Big( \int_0^{t_n} \| e^{(t_n-s)A} B \|^2_{L(\mathbb{R},H)}\, ds \Big)^{\frac12} \Big( \mathbb{E}_{\bar{\mathbb{P}}} \int_0^{t_n} |v^n_\epsilon(s)|^2\, ds \Big)^{\frac12}
\]
\[
\le \frac{C\, K_B^{\frac12}}{C_\sigma} \Big( \int_0^{t_n} |v^n_\epsilon(s)|^2\, ds \Big)^{\frac12} \le \frac{C(\gamma,\rho,B)}{C_\sigma} \Big( \int_0^{t_n} |v^n_\epsilon(s)|^2_{\mathbb{R}}\, ds \Big)^{\frac12}.
\]
To estimate the second term $I^\epsilon_2$ we apply [18, Theorem 1] to get an exact representation of the Radon-Nikodym derivative. In particular, we obtain
\[
I^\epsilon_2 \le \Big| \mathbb{E}_{\bar{\mathbb{P}}}\Big[ \Big( 1 - \frac{dQ^{\epsilon,n}_{t_n}}{d\bar{\mathbb{P}}_{t_n}} \Big)\, \phi(u^n_\epsilon(t_n,x)) \Big] \Big| \le \Big| \mathbb{E}_{\bar{\mathbb{P}}}\big[ \big( 1 - \mathcal{G}^n_\epsilon(t_n) \big)\, \phi(u^n_\epsilon(t_n,x)) \big] \Big|,
\]
where $\mathcal{G}^n_\epsilon$ is defined by (see Lemma B.1 and (A.3))
\[
\begin{cases}
d\mathcal{G}^n_\epsilon(t) = \mathcal{G}^n_\epsilon(t-) \int_{\mathbb{R}^+} \big[ \varsigma_z\big( \kappa^+(|v^n_{\epsilon,\sigma}(s)|), z \big)\, \mathrm{sgn}(v^n_{\epsilon,\sigma}(s)) \big]\, (\mu-\gamma)(dz,ds), \\
\mathcal{G}^n_\epsilon(0) = 1.
\end{cases}
\tag{56}
\]
The Hölder inequality gives
\[
I^\epsilon_2 \le \mathbb{E}_{\bar{\mathbb{P}}}\big| 1 - \mathcal{G}^n_\epsilon(t_n) \big|\, \|\phi\|_\infty.
\]
First we will estimate $\mathbb{E}\sup_{0\le s\le t_n}|\mathcal{G}^n_\epsilon(s)|$. By the choice of $\vartheta$, the process $\mathcal{G}^n_\epsilon$ is of finite variation; thus we easily derive that for $0\le t\le t_n$
\[
\mathbb{E}_{\bar{\mathbb{P}}} \sup_{0\le s\le t} |\mathcal{G}^n_\epsilon(s)| \le 1 + \mathbb{E}_{\bar{\mathbb{P}}} \int_0^t \int_{\mathbb{R}} |\mathcal{G}^n_\epsilon(s)|\, \big| 1 - \vartheta_z(v^n_{\epsilon,\sigma}(s)) \big|\, dz\, ds \le 1 + C\, \mathbb{E}_{\bar{\mathbb{P}}} \int_0^t |\mathcal{G}^n_\epsilon(s)| \int_{\mathbb{R}} \big| 1 - \vartheta_z(v^n_{\epsilon,\sigma}(s)) \big|\, dz\, ds.
\tag{57}
\]
From Corollary A.4 it follows that
\[
\int_{\mathbb{R}} \big| 1 - \vartheta_z(v^n_{\epsilon,\sigma}(s)) \big|\, dz \le \frac{C}{C_\sigma}\, |v^n_\epsilon(s)|^2.
\]
Substituting this last estimate into (57) we obtain
\[
\mathbb{E}_{\bar{\mathbb{P}}} \sup_{0\le s\le t} |\mathcal{G}^n_\epsilon(s)| \le 1 + \frac{C}{C_\sigma}\, \mathbb{E}_{\bar{\mathbb{P}}} \int_0^t |\mathcal{G}^n_\epsilon(s)|\, |v^n_\epsilon(s)|^2\, ds
\tag{58}
\]
\[
\le 1 + \frac{C}{C_\sigma}\, \mathbb{E}_{\bar{\mathbb{P}}}\Big[ \sup_{0\le s\le t} |\mathcal{G}^n_\epsilon(s)| \Big] \int_0^t |v^n_\epsilon(s)|^2\, ds.
\tag{59}
\]
Since $\int_0^{t_n} |v^n_\epsilon(s)|^2\, ds \le a_n^2$ and $a_n\to0$, there exists an $n_0\in\mathbb{N}$ such that $C a_n^2 < 1/2$. Therefore, for $n\ge n_0$ we obtain
\[
\mathbb{E}_{\bar{\mathbb{P}}} \sup_{0\le s\le t} |\mathcal{G}^n_\epsilon(s)| \le 2.
\]
Arguing as above we obtain that
\[
\mathbb{E}_{\bar{\mathbb{P}}}\big| \mathcal{G}^n_\epsilon(t_n) - 1 \big| \le \frac{C(R)}{C_\sigma}\, \mathbb{E}_{\bar{\mathbb{P}}} \int_0^{t_n} |\mathcal{G}^n_\epsilon(s)|\, |v^n_\epsilon(s)|^2\, ds \le \frac{C}{C_\sigma}\, \mathbb{E}_{\bar{\mathbb{P}}}\Big[ \sup_{0\le s\le t_n} |\mathcal{G}^n_\epsilon(s)| \Big] \int_0^{t_n} |v^n_\epsilon(s)|^2\, ds \le \frac{C}{C_\sigma}\, 2\,\epsilon^2 a_n^2.
\]
Going back to equation (55), it follows that there exist constants $C_1, C_2 > 0$ and $n_0\in\mathbb{N}$ such that for all $n\ge n_0$
\[
\Big| \mathbb{E}_{\bar{\mathbb{P}}}[\phi_n(u(t_n,x+\epsilon h))] - \mathbb{E}_{\bar{\mathbb{P}}}[\phi_n(u(t_n,x))] \Big| \le \frac{C_1}{a_n} \Big( \int_0^{t_n} |v^n_\epsilon(s)|^2\, ds \Big)^{\frac12} + C_2\, \|\phi\|_\infty\, 2\,\epsilon^2 a_n^2.
\]
Hence, since $\int_0^{t_n} |v^n_\epsilon(s)|^2\, ds = \epsilon^2 \int_0^{t_n} |v_n(s)|^2\, ds \le \epsilon^2 a_n^2$, we have
\[
\Big| \mathbb{E}_{\bar{\mathbb{P}}}[\phi_n(u(t_n,x+\epsilon h))] - \mathbb{E}_{\bar{\mathbb{P}}}[\phi_n(u(t_n,x))] \Big| \le \frac{C_1\,\epsilon\, a_n}{a_n} + C_2\,\epsilon^2 a_n^2.
\]
Taking the limit $n\to\infty$ we get
\[
\limsup_{n\to\infty} \sup_{y\in D(x,\epsilon)} \| \mathcal{P}_{t_n}(x,\cdot) - \mathcal{P}_{t_n}(y,\cdot) \|_{d_n} \le C\epsilon.
\]
We finish the proof of Theorem 3.2 by letting $\epsilon\to0$ in the last estimate.
Proof of Theorem 3.4: The proof follows along the lines of the proof of Theorem 3.2; one only has to put $\sigma = 1$. Since one does not need the process $[0,\infty)\ni t\mapsto u(t)$ to be cadlag, but only the process $[0,\infty)\ni t\mapsto \sigma(u(t))$, the proof of Theorem 3.2 carries over unchanged.
4. An Example - the damped wave equation with boundary noise
As mentioned in the introduction, we consider as an example an elastic string, fixed at one end and perturbed by a Lévy noise at the other end. Mathematically, the system can be formulated as a damped wave equation with boundary Lévy noise.
Throughout this section we suppose that we are given a filtered probability space $(\Omega,\mathcal{F},\mathbb{F},\mathbb{P})$ such that the filtration $\mathbb{F} = (\mathcal{F}_t)_{t\ge0}$ satisfies the usual conditions. On this probability space we assume that we are given a real-valued Lévy process $L$. Let $T>0$ and $\tilde\rho\in\mathbb{R}$. We consider the system
\[
\begin{cases}
u_{tt}(t,\xi) - \Lambda u(t,\xi) + \tilde\rho\, u_t(t,\xi) = 0, & t\in(0,T),\ \xi\in(0,2\pi), \\
u(t,0) = 0, & t\in(0,T), \\
u_\xi(t,2\pi) = \sigma(u(t))\, \dot L_t, & t\in(0,T), \\
u(0,\xi) = x_0(\xi), \quad u_t(0,\xi) = x_1(\xi),
\end{cases}
\tag{60}
\]
where $\Lambda = \Delta$ is the Laplacian and $\dot L$ denotes the formal time derivative of a real-valued Lévy process with characteristic measure $\nu$ satisfying Hypothesis 1 and Hypothesis 2. Let $x_0\in H^1_0(0,2\pi)$ and $x_1\in L^2(0,2\pi)$. Here we have set $\sigma(u(t)) = 2 + \sin\big( |u(t,\cdot)|_{L^2(0,2\pi)} \big)$ for any $t\in(0,T)$, so that $\sigma$ is Lipschitz continuous and bounded from below and above.
Equation (60) can be reformulated as an evolution equation of first order. In fact, let $H_0 = L^2([0,2\pi])$ and $D(\Lambda) = \{ u\in H_0 : \Lambda u\in H_0,\ u(0) = 0,\ u_\xi(2\pi) = 0 \}$. Observe that for $\theta\ge0$ we have
\[
D\big( (-\Lambda)^{\frac{\theta}{2}} \big) =
\begin{cases}
H^{\theta,2}(0,2\pi) & \text{for } \theta < \frac12, \\
\{ u\in H^{\theta,2}(0,2\pi) : u(0) = 0 \} & \text{for } \frac12 \le \theta < \frac32, \\
\{ u\in H^{\theta,2}(0,2\pi) : u(0) = 0,\ u_\xi(2\pi) = 0 \} & \text{for } \frac32 \le \theta.
\end{cases}
\]
Finally, to reformulate Equation (60) as an evolution equation of first order, let us define the Hilbert space $\mathcal{H} = D\big( (-\Lambda)^{\frac12} \big)\times H_0$ equipped with the scalar product
\[
\langle w, z\rangle_{\mathcal{H}} = \big\langle (-\Lambda)^{\frac12} w_1, (-\Lambda)^{\frac12} z_1 \big\rangle + \langle w_2, z_2\rangle, \qquad w = \begin{pmatrix} w_1 \\ w_2 \end{pmatrix} \in\mathcal{H} \text{ and } z = \begin{pmatrix} z_1 \\ z_2 \end{pmatrix} \in\mathcal{H},
\]
where $\langle\cdot,\cdot\rangle$ denotes the scalar product on $L^2([0,2\pi])$. Define an operator $A$ with domain $D(A) = D(\Lambda)\times D\big( (-\Lambda)^{\frac12} \big) \to \mathcal{H}$ by
\[
A \begin{pmatrix} z_1 \\ z_2 \end{pmatrix} = \begin{pmatrix} 0 & I \\ \Lambda & 0 \end{pmatrix} \begin{pmatrix} z_1 \\ z_2 \end{pmatrix},
\tag{61}
\]
and $B_{\tilde\rho} : \mathcal{H}\to\mathcal{H}$ by
\[
B_{\tilde\rho} \begin{pmatrix} z_1 \\ z_2 \end{pmatrix} = \begin{pmatrix} 0 \\ -\tilde\rho\, z_2 \end{pmatrix}.
\]
It is not difficult to prove that for $\tilde\rho>0$ the operator $A + B_{\tilde\rho}$ generates a $C_0$-semigroup $(S(t))_{t\ge0}$ of contraction type on $\mathcal{H}$. First, $-\Lambda$ has eigenfunctions $e_n(\xi) = \sin((2n-1)\xi/4)$, $n\in\mathbb{N}$, with eigenvalues $\lambda_n = (2n-1)^2/16$. Now, if $\{\lambda_n : n\in\mathbb{N}\}$ are the eigenvalues and $\{\phi_n : n\in\mathbb{N}\}$ the eigenfunctions of $-\Lambda$, then $\{\mu_n : n\in\mathbb{Z}\}$ with
\[
\mu_n = \frac{ -\tilde\rho + \sqrt{\tilde\rho^2 - 4\lambda_n} }{2}, \qquad \mu_{-n} = \bar\mu_n, \quad n\in\mathbb{N},
\]
are the eigenvalues and
\[
\psi_n = \frac{1}{\sqrt2} \begin{pmatrix} \frac{1}{\mu_n}\,\phi_n \\ \phi_n \end{pmatrix}, \qquad n\in\mathbb{Z},
\]
are the eigenfunctions of $A + B_{\tilde\rho}$ (see [34, Proposition]). The semigroup $S = \{S(t), t\ge0\}$ can be written as
\[
S(t) \begin{pmatrix} f \\ g \end{pmatrix} = \frac12 \sum_{n\in\mathbb{Z}} e^{\mu_n t} \Big( \mu_n^{-1} \Big\langle \frac{df}{d\xi}, \frac{d\phi_n}{d\xi} \Big\rangle_{L^2([0,2\pi])} + \langle g, \phi_n\rangle_{L^2([0,2\pi])} \Big)\, \psi_n, \qquad \begin{pmatrix} f \\ g \end{pmatrix} \in\mathcal{H}.
\]
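On each eigenspace of $-\Lambda$ the operator $A + B_{\tilde\rho}$ acts (with the sign convention that the damping enters in the velocity component, as above) as the $2\times2$ matrix $\begin{pmatrix} 0 & 1 \\ -\lambda_n & -\tilde\rho \end{pmatrix}$, whose eigenvalues are exactly $\mu_n = (-\tilde\rho\pm\sqrt{\tilde\rho^2-4\lambda_n})/2$, with real part $-\tilde\rho/2$ for every underdamped mode. A quick numerical confirmation ($\tilde\rho = 0.4$ is an illustrative value for which all modes are underdamped):

```python
import numpy as np

rho = 0.4   # damping parameter (illustrative; every mode is underdamped here)
mus, formulas = [], []
for n in range(1, 6):
    lam_n = (2 * n - 1) ** 2 / 16.0              # eigenvalue of -Laplacian
    M = np.array([[0.0, 1.0], [-lam_n, -rho]])   # modal 2x2 block of A + B_rho
    mus.append(np.sort_complex(np.linalg.eigvals(M).astype(complex)))
    disc = complex(rho ** 2 - 4 * lam_n)
    formulas.append(np.sort_complex(np.array([(-rho + np.sqrt(disc)) / 2,
                                              (-rho - np.sqrt(disc)) / 2])))

print(mus[0], formulas[0])
# spectral abscissa: every mode decays at the uniform rate rho / 2
print(max(mu.real.max() for mu in mus))
```

The uniform decay rate $\tilde\rho/2$ of all modes is what makes $S$ a semigroup of contraction type for $\tilde\rho>0$.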
To rewrite (60) as a stochastic evolution equation on the Hilbert space $\mathcal{H}$ we need to find a way of transforming the nonhomogeneous boundary condition in (60) into a homogeneous one. Therefore, we introduce an operator $D_{B,\gamma}$ as follows. For every $a\in\mathbb{R}$, $D_{B,\gamma}a = v$ is the solution to the problem
\[
\begin{cases}
v''(\xi) = \gamma v(\xi), & \xi\in(0,2\pi), \\
v_\xi(2\pi) = a, \\
v(0) = 0.
\end{cases}
\]
By a short calculation it follows that, given $a\in\mathbb{R}$,
\[
v(\xi) = (D_{B,0}\, a)(\xi) = a\,\xi, \qquad \xi\in[0,2\pi].
\]
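Indeed, for $\gamma = 0$ the boundary value problem above has the unique solution $v(\xi) = a\xi$: it is affine, vanishes at $0$ and has slope $a$ at $2\pi$. A finite-difference sanity check (the value $a = 2.0$ and the grid are arbitrary choices):

```python
import numpy as np

a = 2.0                                   # arbitrary boundary value
xi = np.linspace(0.0, 2 * np.pi, 2001)
v = a * xi                                # candidate for D_{B,0} a
h = xi[1] - xi[0]

second_deriv = (v[2:] - 2 * v[1:-1] + v[:-2]) / h ** 2  # interior v''
max_second = float(np.max(np.abs(second_deriv)))
neumann = float((v[-1] - v[-2]) / h)      # one-sided v'(2*pi)

print(max_second)   # ~ 0   (v'' = 0 in the interior)
print(v[0])         # 0     (Dirichlet condition at 0)
print(neumann)      # ~ a   (Neumann condition at 2*pi)
```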
Moreover, for all $\theta<\frac32$ the mapping $D_{B,0} : \mathbb{R}\to D\big((-\Lambda)^{\frac{\theta}{2}}\big)$ is bounded. Also, observe that for any $\theta<\frac14$ the operator
\[
A \begin{pmatrix} 0 \\ D_{B,0} \end{pmatrix} : \mathbb{R}\to D\big( (-\Lambda)^{\frac{\theta+1}{2}} \big)\times D\big( (-\Lambda)^{\frac{\theta}{2}} \big)
\]
is bounded. Following the approach in [22] and [27] we see that (60) can be transformed into the following problem:
\[
\begin{cases}
dX(t,X_0) = (A + B_{\tilde\rho})\, X(t,X_0)\, dt + A \begin{pmatrix} 0 \\ D_{B,0}\, \sigma(\Pi_1(X(t,X_0))) \end{pmatrix} dL(t), \\
X(0,X_0) = X_0,
\end{cases}
\tag{62}
\]
where $X = (u,\dot u)^T$ and $X_0 = (x_0,x_1)^T$. Here $\Pi_1 X = u$ denotes the projection from $\mathcal{H}$ onto $D\big( (-\Lambda)^{\frac12} \big)$. From now on we will work with (62). Since $D\big( (-\Lambda)^{\frac{\theta+1}{2}} \big)\times D\big( (-\Lambda)^{\frac{\theta}{2}} \big) \hookrightarrow \mathcal{H}$ continuously, one can show, by mimicking the proof of [27, Theorem 15.7.2], that problem (62) is well posed. Moreover, the solution is cadlag in $\mathcal{H}$ and is a Markov-Feller process. In particular, the family of operators $(\mathcal{P}_t)_{t\ge0}$ defined by
\[
\mathcal{P}_t \phi(x) := \mathbb{E}\, \phi(X(t,x)), \qquad \phi\in C_b(\mathcal{H}),\ t\ge0,
\tag{63}
\]
is a semigroup on $C_b(\mathcal{H})$. By means of Remark 2.7 and Theorem 2.9 the following results can be obtained.
Theorem 4.1. Assume that the Lévy measure $\nu$ of the Lévy noise $L$ satisfies Hypothesis 1 and Hypothesis 2. Then there exists a time $T>0$ such that for any $C>0$ and $\delta>0$ there exists a $\kappa = \kappa(C,\delta) > 0$ with
\[
\big( \mathcal{P}^\star_T \delta_x \big)\big( D_{\mathcal{H}}(y,\delta) \big) = \mathbb{P}\big( |X(T,x) - y|_{\mathcal{H}} \le \delta \big) \ge \kappa, \qquad x, y\in D_{\mathcal{H}}(C).
\]
Proof of Theorem 4.1: System (60) is exactly controllable at any time $T>2\pi$ (see [10, Section 2.4] or [34, Example 11.2.6]). Hence, the system satisfies the assumptions of Theorem 2.9, which implies Theorem 4.1.
The following Corollary is a consequence of Theorem 4.1 and Theorem 3.2.
Corollary 4.2. Assume that the Lévy measure $\nu$ of the Lévy noise $L$ satisfies Hypothesis 1 and Hypothesis 2. If $\tilde\rho>0$, then the Markovian semigroup $(\mathcal{P}_t)_{t\ge0}$ defined by (63) has at most one invariant measure $\mu$. Moreover, the invariant measure is nondegenerate; in particular, for any $x\in\mathcal{H}$ and $\delta>0$,
\[
\mu\big( D_{\mathcal{H}}(x,\delta) \big) > 0.
\]
We now verify some facts which are essential for the results of the previous sections to be applicable to our example. First, in [28, Section 6.3] it is shown that the system
\[
\begin{cases}
u_{tt}(t,\xi) - \Lambda u(t,\xi) + \tilde\rho\, u_t(t,\xi) = 0, & t\in(0,T),\ \xi\in(0,2\pi), \\
u(t,0) = 0, & t\in(0,T), \\
u_\xi(t,2\pi) = v(t), & t\in(0,T), \\
u(0,\xi) = u_0(\xi), \quad u_t(0,\xi) = u_1(\xi),
\end{cases}
\tag{64}
\]
with control $v\in L^2([0,\infty);\mathbb{R})$ and $\tilde\rho\ge0$, is null controllable with vanishing energy. Moreover, it is shown (see Remark 2.7) that (64) is exactly controllable for $T\ge2\pi$.
Now we are ready to prove the existence and uniqueness of the invariant measure.
Proof of Corollary 4.2: It was proved in [10, Section 2.4] (see also [34, Example 11.2.6]) that (60) is exactly controllable at any time $T>2\pi$; since exact controllability implies null controllability, the system is in particular null controllable. It remains to prove that it is null controllable with vanishing energy. For this purpose we mainly follow the idea in [28]. Let us write $\mathcal{H}$ as the direct sum $\mathcal{H} = \mathcal{H}_s \oplus \mathcal{H}_u$, where $\mathcal{H}_u = \{0\}$ and $\mathcal{H}_s = \mathcal{H}$. Therefore we see that [28, Hypothesis 1.1] is satisfied in our case. Moreover, since $A$ is the infinitesimal generator of a strongly continuous semigroup of contractions, we can deduce from [25, Chapter 1, Corollary 3.6] that the spectrum $\sigma(A)$ is contained in $\{\lambda\in\mathbb{C} : \mathrm{Re}(\lambda)\le0\}$. This fact implies that $s(A) := \sup\{\mathrm{Re}(\lambda) : \lambda\in\sigma(A)\} \le 0$. Therefore we can deduce from [28, Theorem 1.1] that (64) is null controllable with vanishing energy.
In the next step we prove the existence of an invariant measure by an application of the Krylov-Bogoliubov theorem. Let $0<\theta<\frac14$ and
\[
\mathcal{H}_1 = D\big( (-\Lambda)^{\frac{\theta+1}{2}} \big)\times D\big( (-\Lambda)^{\frac{\theta}{2}} \big).
\]
First, note that the embedding $\mathcal{H}_1\hookrightarrow\mathcal{H}$ is compact. Now, to establish the existence of the invariant measure, we have to show inequality (45) with $\mathcal{H}_1$ defined above. By standard arguments (see [19]) we get
\[
\mathbb{E}\, \Big| \int_0^t \int_{\mathbb{R}} S(t-s)\, A \begin{pmatrix} 0 \\ D_{B,0}\,\sigma(\Pi_1(X(s))) \end{pmatrix} z\, \tilde\eta(dz,ds) \Big|^2_{\mathcal{H}_1} \le \mathbb{E} \int_0^t \int_{\mathbb{R}} \Big| S(t-s)\, A \begin{pmatrix} 0 \\ D_{B,0}\,\sigma(\Pi_1(X(s))) \end{pmatrix} z \Big|^2_{\mathcal{H}_1}\, \nu(dz)\, ds.
\]
Since
\[
A \begin{pmatrix} 0 \\ D_{B,0} \end{pmatrix} : \mathbb{R}\to D\big( (-\Lambda)^{\frac{\theta+1}{2}} \big)\times D\big( (-\Lambda)^{\frac{\theta}{2}} \big)
\]
is bounded and $S$ is a semigroup of contraction type, we get
\[
\ldots \le C \int_0^t \int_{\mathbb{R}} e^{-\tilde\rho(t-s)} \Big| A \begin{pmatrix} 0 \\ D_{B,0}\,\sigma(\Pi_1(X(s))) \end{pmatrix} z \Big|^2_{\mathcal{H}_1}\, \nu(dz)\, ds \le C \int_0^t \int_{\mathbb{R}} e^{-\tilde\rho(t-s)}\, |z|^2\, \nu(dz)\, ds.
\]
Due to the fact that $\sigma$ is bounded, estimate (45) is satisfied. Thus, the damped wave equation with boundary Lévy noise (60) has an invariant measure (compare also with [14, p. 181]).
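For instance, the uniformity in $t$ behind estimate (45) rests on the elementary bound
\[
\sup_{t\ge0} \int_0^t e^{-\tilde\rho(t-s)}\, ds = \sup_{t\ge0} \frac{1 - e^{-\tilde\rho t}}{\tilde\rho} \le \frac{1}{\tilde\rho},
\]
so the constant $C$ in (45) can be taken proportional to $\tilde\rho^{-1}\int_{\mathbb{R}} |z|^2\,\nu(dz)$, assuming, as in the display above, that $\int_{\mathbb{R}} |z|^2\,\nu(dz) < \infty$.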
It remains to show the uniqueness of the invariant measure. Owing to Remark 2.7 and Theorem 3.2, the semigroup $(\mathcal{P}_t)_{t\ge0}$ is asymptotically strong Feller. By [17, Corollary 3.17] we know that if the semigroup is asymptotically strong Feller and there exists a point $x\in\mathcal{H}$ such that $x\in\mathrm{supp}(\rho)$ whenever $\rho$ is an invariant measure of $(\mathcal{P}_t)_{t\ge0}$, then the Markovian semigroup $(\mathcal{P}_t)_{t\ge0}$ admits at most one invariant measure. Therefore, we have to show that there exists a point $x\in\mathcal{H}$ such that for any invariant measure $\rho$, $x\in\mathrm{supp}(\rho)$, i.e. for all $\delta>0$, $\rho(D_{\mathcal{H}}(x,\delta))>0$. Since null controllability implies approximate null controllability, Theorem 2.8 can be applied to our example. In particular, there exists a time $T>0$ such that for all $C>0$ and $\delta>0$ there exists a $\kappa>0$ with
\[
\mathbb{P}\big( u(T,x)\in D_{\mathcal{H}}(\delta) \big) \ge \kappa, \qquad x\in\bar D_{\mathcal{H}}(C).
\tag{65}
\]
It remains to show (46); in particular, we should check that there exists a constant $C>0$ such that
\[
\inf_{\{\varphi \text{ is an invariant measure}\}} \varphi\big( \bar D_{\mathcal{H}}(C) \big) > 0.
\tag{66}
\]
That $0\in\mathrm{supp}(\rho)$ then follows from the following observations. First, since $\rho$ is an invariant measure we have
\[
\rho\big( D_{\mathcal{H}}(\delta) \big) \ge \kappa\, \rho\big( \bar D_{\mathcal{H}}(C) \big) \ge \kappa \cdot \inf_{\{\varphi \text{ is an invariant measure}\}} \varphi\big( \bar D_{\mathcal{H}}(C) \big).
\]
Now, the estimates (65) and (66) imply that for all $\delta > 0$, $\rho($