The local backward heat problem
Thi Minh Nhat VO
Université d'Orléans, Laboratoire MAPMO, CNRS UMR 7349, Fédération Denis Poisson, FR CNRS 2964, Bâtiment de Mathématiques, B.P. 6759, 45067 Orléans Cedex 2, France. Email address: vtmnhat@gmail.com.
April 19, 2017
Abstract
In this paper, we study the local backward problem for a linear heat equation with time-dependent coefficients under the Dirichlet boundary condition. Precisely, we recover the initial data from an observation on a subdomain at some later time. Thanks to the "optimal filtering" method of Seidman, we can solve the global backward problem, which determines the solution at the initial time from the known data on the whole domain. Then, by using a result of controllability at one point of time, we connect the local and the global backward problems.
Keywords. inverse problem, global backward, local backward, controllability, observation estimate, heat equation.
1 Introduction and main result
1.1 Our motivation
Inverse and ill-posed problems (see [I], [P], [K]) lie at the heart of scientific inquiry and technological development. They play a significant role in engineering applications, as well as in several practical areas such as image processing, mathematical finance and physics, and, more recently, in modeling in the life sciences. During the last ten years or so, there have been remarkable developments both in the mathematical theory and in the applications of inverse problems. In particular, for various industrial purposes, for example in the steel, glass and polymer forming industries and in nuclear power stations, the "backward heat problem", which recovers the temperature of the heating system from an observation at some later time, occupies an important position. On the other hand, from the mathematical point of view, it is well known to be an ill-posed problem in the sense of Hadamard (see [H]), due to the irreversibility of time. That is, there may exist no solution for the given final data, and even if a solution exists, small perturbations of the observation data may be dramatically amplified in the solution. Hence there is a strong interest in constructing suitable regularization methods. This topic has been studied extensively and many methods have been proposed, such as Tikhonov regularization [F], [M], [TS], [ZM], [MFH], Lavrentiev regularization [NT], [JSG], the truncation method [NTT], [KT], [ZFM], the filter method [S], [TKLT], [QW], the quasi-boundary value method [DB], [KT], [QTTT] and other methods [AE], [LL1], [LL2], [HX], [TQKT], ... In [S], Seidman uses a so-called "optimal filtering" method in order to recover the solution at time $t>0$ with an optimal result. Improving his result in order to reconstruct the solution at time $0$ is an interesting issue. Furthermore, the question of how the problem can be solved when the observation is restricted to a subregion inside the domain is also attractive.
1.2 Our problem
Let $\Omega$ be an open bounded domain in $\mathbb{R}^n$ ($n\ge1$) with a boundary $\partial\Omega$ of class $C^2$, and let $T$ be a fixed positive constant. Let $p\in C^1([0,+\infty))$ be such that $0<p_1\le p(t)\le p_2$ for all $t\in[0,+\infty)$, where $p_1$ and $p_2$ are positive real numbers. Let $\omega$ be a nonempty open subdomain of $\Omega$. We consider a linear heat equation with time-dependent coefficients, under the Dirichlet boundary condition, with state $u\in C^1((0,T);H^1_0(\Omega))$:
$$\begin{cases}\partial_tu-p(t)\Delta u=0&\text{in }\Omega\times(0,T),\\ u=0&\text{on }\partial\Omega\times(0,T).\end{cases}\qquad(1.1)$$
Our target is to reconstruct the initial data $u(\cdot,0)$ when a local measurement of $u(\cdot,T)$ on the subdomain $\omega$ is available. In practice, the data at time $T$ are measured by physical instruments; therefore, only a perturbed datum $f$ can be obtained. Let $\delta>0$ denote the noise level, with the following property:
$$\|u(\cdot,T)-f\|_{L^2(\omega)}\le\delta.\qquad(1.2)$$
Moreover, in order to ensure the convergence of the regularized approximation to the initial data $u(\cdot,0)$, an a priori assumption on the exact solution is required:
$$u(\cdot,0)\in H^1_0(\Omega).\qquad(1.3)$$
We will determine an approximate output $g$ of the unknown exact solution $u(\cdot,0)$ such that the error $e(\delta)$ in
$$\|u(\cdot,0)-g\|_{L^2(\Omega)}\le e(\delta)\qquad(1.4)$$
tends to $0$ as $\delta$ tends to $0$.
1.3 Relevant works
We now review how this kind of problem has been treated in the literature. In fact, there are many papers on the global backward problem, but works on the local one are scarce. There is a sizeable literature on the special case $p\equiv1$, with various methods. From now on, $\delta$ denotes the noise level.
1. In 1996 (see [S]), Seidman considers the heat equation of the form
$$\partial_tu-\nabla\cdot(a\nabla u)+qu=0\ \text{ in }\Omega\times(0,T)\quad\text{with }u=0\ \text{ on }\partial\Omega\times(0,T),\qquad(1.5)$$
where $a$ and $q$ belong to $L^\infty(\Omega)$. He succeeds in reconstructing the solution at a fixed time $t\in(0,T)$ from an observation $f$ satisfying $\|u(\cdot,T)-f\|_{L^2(\Omega)}\le\delta$, under the assumption $u(\cdot,0)\in L^2(\Omega)$. His strategy is to use a "filter" with respect to the spectral decomposition of the operator $A:u\mapsto-\nabla\cdot(a\nabla u)+qu$, defined as
$$F(t)e_i=\min\left\{1,\ e^{-\lambda_i(T-t)}\left(\frac{\|u(\cdot,0)\|_{L^2(\Omega)}}{\delta}\right)^{1-\frac tT}\right\}e_i\qquad(1.6)$$
where $\{\lambda_i\}_{i\ge1}$ and $\{e_i\}_{i\ge1}$ denote the eigenvalues and the corresponding eigenfunctions of the operator $A$, respectively. He then obtains the optimal result
$$\|u(\cdot,t)-g_t\|_{L^2(\Omega)}\le\delta^{\frac tT}\,\|u(\cdot,0)\|^{1-\frac tT}_{L^2(\Omega)}.\qquad(1.7)$$
The regularized solution $g_t$ at time $t$ is constructed as
$$g_t:=\sum_{i=1}^{\infty}e^{\lambda_i(T-t)}\left(\int_\Omega f(x)e_i(x)\,dx\right)F(t)e_i.\qquad(1.8)$$
2. By generalizing the result of Seidman, Tautenhahn and Schröter in [TS] provide a definition of the term "optimal method", in the sense that the error between the exact solution and the approximation produced by an optimal method cannot be greater than the best possible worst-case error (see Definition 1.1, page 478). Their interest is in finding the optimal results of different regularization methods for solving backward heat equations. In this sense, the result of Seidman is optimal.
3. In 2007, Trong et al. (see [TQKT]) improve the quasi-boundary value method to regularize the 1D backward heat equation. They succeed in recovering the initial data with the following error:
$$\|u(\cdot,0)-g\|_{L^2(\Omega)}\le\left(\frac{8C^4T}{\ln\frac{\|u(\cdot,0)\|_{L^2(\Omega)}}{\delta}}\right)^{\frac14}\qquad(1.9)$$
where $C$ is a positive constant depending on $\|u(\cdot,0)\|_{H^1_0(\Omega)}$ (see also [AP]).
The case $p\not\equiv\text{constant}$ has been considered only recently; we mention the following works:
1. In 2013, Tuan et al. in [TQTT] consider the 1D backward heat equation with time-dependent coefficients. They use a so-called "modified method" to get the result below:
$$\|u(\cdot,t)-g_t\|_{L^2(\Omega)}\le\left(1+\|u(\cdot,0)\|_{L^2(\Omega)}\right)\left(\frac{\delta}{\|u(\cdot,0)\|_{L^2(\Omega)}}\right)^{\frac{p_1^2t}{p_2^2T}}.\qquad(1.10)$$
2. In 2014, Zhang, Fu and Ma (see [ZFM]) also study the 1D backward heat equation with time-dependent coefficients, but use the truncation method. They can recover the solution at time $t\in\left[T\left(1-\frac{p_1}{p_2}\right);T\right]$ satisfying
$$\|u(\cdot,t)-g_t\|_{L^2(\Omega)}\le\|u(\cdot,0)\|^{1-\frac tT}_{L^2(\Omega)}\big((\tau+1)\delta\big)^{\frac tT}+\|u(\cdot,0)\|_{L^2(\Omega)}\,\tau^{1-\frac{p_2(T-t)}{p_1T}}\,\delta^{\frac{(p_2-p_1)T+p_2t}{p_1T}}\qquad(1.11)$$
for some constant $\tau>1$.
3. In 2016, Khanh and Tuan (in [KT]) solve an initial inverse problem for an inhomogeneous heat equation by using a high-frequency truncation method. Under the assumption that $u(\cdot,0)\in H^1_0(\Omega)$, they recover the initial data with the following error:
$$\|u(\cdot,0)-g\|_{L^2(\Omega)}\le\delta\left(\frac{\|u(\cdot,0)\|_{L^2(\Omega)}}{\delta}\right)^{\frac12}+\frac{\sqrt{2p_2T}\,\|u(\cdot,0)\|_{H^1_0(\Omega)}}{\sqrt{\ln\frac{\|u(\cdot,0)\|_{L^2(\Omega)}}{\delta}}}.\qquad(1.12)$$
For the local inverse problem, we mention the following works:
1. In 1995, Yamamoto in [Y] proposes a reconstruction formula for the spatial dependence of the source term in a wave equation $\partial_{tt}u-\Delta u=f(x)\sigma(t)$, assuming $\sigma(t)$ known, from local measurements, using exact controllability.
2. In 2009, Li, Yamamoto and Zou in [LYZ] study the conditional stability of inverse problems where the data are observed on a subregion along a time period which may start at some point, possibly far away from the initial time.
3. In 2011, García and Takahashi (see [GT]) present some abstract results on a general connection between null-controllability and several inverse problems for a class of parabolic equations.
4. In 2013, García, Osses and Tapia in [GOT] succeed in determining the heat source from a single internal measurement of the solution, thanks to a family of null control functions.
1.4 Our method of solving the global backward problem (GBP)
and the local backward problem (LBP)
Firstly, we deal with the global backward problem (GBP), which recovers the initial data from the observation on the whole domain $\Omega$ at some later time $\tau>0$. Here, we assume that there exists a solution of (1.1) satisfying the a priori condition (1.3), and that $\bar f$ is the known data on $\Omega$ at time $\tau$, with $\|u(\cdot,\tau)-\bar f\|_{L^2(\Omega)}\le\delta$ for some $\delta>0$. We will determine a function $g$ which approximates the initial data. Our idea for constructing such a function $g$ comes from the "optimal filtering" method of Seidman (see [S]). First, we define a continuous operator depending on a regularization parameter $\alpha$:
$$\mathcal R_\alpha:L^2(\Omega)\to L^2(\Omega),\qquad\varphi\mapsto\sum_{i=1}^{\infty}\min\left\{e^{\lambda_i\int_0^\tau p(s)ds};\alpha\right\}\left(\int_\Omega\varphi(x)e_i(x)\,dx\right)e_i.$$
Then the function $\mathcal R_\alpha\bar f$ will be close to the exact solution $u(\cdot,0)$ in $L^2(\Omega)$, where $\alpha$ is the minimizer of the problem $\min_{\alpha>0}\|u(\cdot,0)-\mathcal R_\alpha\bar f\|_{L^2(\Omega)}$.
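To fix ideas, here is a small numerical sketch of this filtered spectral inversion. It is ours, not from the paper: it works on the interval $\Omega=(0,\pi)$, where the Dirichlet eigenpairs are explicit ($\lambda_i=i^2$, $e_i(x)=\sqrt{2/\pi}\sin(ix)$, recalled in Section 1.5 below), and the function and parameter names are our own.

```python
import numpy as np
from scipy.integrate import quad, trapezoid

# Illustrative sketch (not from the paper) of the operator R_alpha on Omega = (0, pi):
# each spectral coefficient of the data is multiplied by min(exp(lambda_i * int_0^tau p), alpha),
# i.e. the exact backward amplification factor capped at the regularization level alpha.
def R_alpha(f_vals, x, tau, p, alpha, n_modes=50):
    int_p, _ = quad(p, 0.0, tau)                      # int_0^tau p(s) ds
    g = np.zeros_like(x)
    for i in range(1, n_modes + 1):
        e_i = np.sqrt(2.0/np.pi) * np.sin(i*x)        # Dirichlet eigenfunction on (0, pi)
        coeff = trapezoid(f_vals*e_i, x)              # <f, e_i> in L^2(0, pi)
        amp = np.exp(min(i**2*int_p, np.log(alpha)))  # = min(e^{lambda_i int p}, alpha), overflow-safe
        g += amp*coeff*e_i
    return g
```

Applied to noisy final data, the cap $\alpha$ balances noise amplification against the bias on the high-frequency modes; the choice of $\alpha$ is discussed in Remark 1.1 and in Section 2.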
Secondly, for the local backward problem (LBP), whose observation is measured on a subdomain, we need a controllability tool to link it with the (GBP). Precisely, we use the existence of a sequence of control functions which transfers information from the given data on the subdomain $\omega$ to the whole domain: for each $i=1,2,\ldots$ and any $\varepsilon>0$, there exists $h_i\in L^2(\omega)$ such that the solution of
$$\begin{cases}\partial_t\varphi_i-p(t)\Delta\varphi_i=0&\text{in }\Omega\times(0,2T)\setminus\{T\},\\ \varphi_i=0&\text{on }\partial\Omega\times(0,2T),\\ \varphi_i(\cdot,0)=e_i&\text{in }\Omega,\\ \varphi_i(\cdot,T)=\varphi_i(\cdot,T^-)+\chi_\omega h_i&\text{in }\Omega,\end{cases}\qquad(1.13)$$
satisfies $\|\varphi_i(\cdot,2T)\|_{L^2(\Omega)}\le\varepsilon$. Here, $\chi_\omega$ denotes the characteristic function of the region $\omega$ and $\varphi_i(\cdot,T^-)$ denotes the left limit of the function $\varphi_i$ at time $T$. Multiplying $\partial_t\varphi_i-p(t)\Delta\varphi_i=0$ by $u(\cdot,2T-t)$ and using some computation techniques, we can build an approximation $\bar f$ of the solution at time $\tau=3T$, namely
$$\|u(\cdot,3T)-\bar f\|_{L^2(\Omega)}\le\mathcal E(\delta).$$
Here, $\bar f$ is computed from the known data $h_i$ and $f$, and $\mathcal E(\delta)$ is a function of $\delta$ such that $\mathcal E(\delta)\to0$ when $\delta\to0$. Lastly, applying the result for the (GBP) with the information at time $3T$ on the whole domain, the initial data of (1.1) is reconstructed.
1.5 Spectral theory
As a direct consequence of the spectral theorem for compact, self-adjoint operators (see Theorem 9.16, page 225, [HN]), there exists a sequence of positive real eigenvalues of the operator $-\Delta$ with Dirichlet boundary condition, denoted by $\{\lambda_i\}_{i=1,2,\ldots}$, where
$$0<\lambda_1\le\lambda_2\le\lambda_3\le\ldots,\qquad\lambda_i\to\infty\ \text{ as }i\to\infty.\qquad(1.14)$$
Moreover, there exists an orthonormal basis $\{e_i\}_{i=1,2,\ldots}$ of $L^2(\Omega)$, where $e_i\in H^1_0(\Omega)$ is an eigenfunction corresponding to $\lambda_i$:
$$\begin{cases}-\Delta e_i=\lambda_ie_i&\text{in }\Omega,\\ e_i=0&\text{on }\partial\Omega.\end{cases}\qquad(1.15)$$
When $u_0\in L^2(\Omega)$ and $u_0=\sum_{i=1}^{\infty}a_ie_i$ with $a_i=\int_\Omega u_0(x)e_i(x)\,dx$ and $\sum_{i=1}^{\infty}|a_i|^2<\infty$, then
$$u(\cdot,t)=\sum_{i=1}^{\infty}a_ie^{-\lambda_i\int_0^tp(s)ds}e_i\qquad(1.16)$$
is the unique solution of
$$\begin{cases}\partial_tu-p(t)\Delta u=0&\text{in }\Omega\times(0,T),\\ u=0&\text{on }\partial\Omega\times(0,T),\\ u(\cdot,0)=u_0&\text{in }\Omega.\end{cases}\qquad(1.17)$$
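For completeness, here is a short term-by-term check (ours, not in the original text) that the series (1.16) indeed solves (1.17), assuming the coefficients decay fast enough to justify differentiation under the sum:
$$\partial_tu(\cdot,t)=\sum_{i=1}^{\infty}a_i\big(-\lambda_ip(t)\big)e^{-\lambda_i\int_0^tp(s)ds}e_i=p(t)\sum_{i=1}^{\infty}a_ie^{-\lambda_i\int_0^tp(s)ds}\Delta e_i=p(t)\Delta u(\cdot,t),$$
since $-\Delta e_i=\lambda_ie_i$; moreover $u(\cdot,0)=\sum_ia_ie_i=u_0$ and $u=0$ on $\partial\Omega$ because each $e_i$ vanishes there.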
1.6 Main result
Theorem 1.1. Let $u$ be the solution of (1.1) with the a priori bound (1.3). Let $f\in L^2(\omega)$ and $0<\delta<\|u(\cdot,0)\|_{L^2(\Omega)}$ satisfy
$$\|u(\cdot,T)-f\|_{L^2(\omega)}\le\delta.\qquad(1.18)$$
Then there exist a function $g\in L^2(\Omega)$ and a constant $C=C(\Omega,\omega,p)>0$ such that the following estimate holds:
$$\|u(\cdot,0)-g\|_{L^2(\Omega)}\le\frac{Ce^{\frac CT}\sqrt T\,\|u(\cdot,0)\|_{H^1_0(\Omega)}}{\sqrt{\ln\frac{\|u(\cdot,0)\|_{L^2(\Omega)}}{\delta}}}.\qquad(1.19)$$
Remark 1.1. 1. When $\delta<De^{-D\left(T+\frac1T\right)}\|u(\cdot,0)\|_{L^2(\Omega)}$ for some positive constant $D=D(\Omega,\omega,p)$, the approximate solution of the initial data satisfying (1.19) is constructed as
$$g:=\sum_{i=1}^{\infty}\min\left\{e^{\lambda_i\int_0^{3T}p(s)ds},\alpha\right\}e^{-\lambda_i\int_{2T}^{3T}p(s)ds}\left(\int_\omega h_i(x)f(x)\,dx\right)e_i\qquad(1.20)$$
where $\{h_i\}_{i\ge1}$ is a sequence of control functions (see Section 4) and $\alpha$ is the regularization parameter given by
$$\alpha=A\left(B^{-1}\left(\frac{\sqrt{3p_2T}\,\|u(\cdot,0)\|_{H^1_0(\Omega)}}{K_1e^{\frac{K_1}{T}}\|u(\cdot,0)\|^{1-k_1}_{L^2(\Omega)}\,\delta^{k_1}}\right)\right)\qquad(1.21)$$
with
(i)
$$A:[0,+\infty)\to\left[\frac{\sqrt e}{2},+\infty\right),\qquad x\mapsto\frac{e^x}{1+2x},\qquad(1.22)$$
(ii)
$$B:(0,+\infty)\to(0,+\infty),\qquad x\mapsto\sqrt x\,e^x.\qquad(1.23)$$
The existence of the function $B^{-1}$ is due to the bijectivity of $B$ on $(0,+\infty)$ (a small numerical sketch for evaluating $A\circ B^{-1}$ is given after this remark),
(iii) $K_1=K_1(\Omega,\omega,p)>1$ and $k_1=k_1(\Omega,\omega,p)\in(0,1)$. All these constants can be explicitly computed when $\Omega$ is convex or star-shaped with respect to some $x_0\in\Omega$.
2. The estimate (1.19) is connected with the following well-known estimate:
$$\|u(\cdot,0)\|_{L^2(\Omega)}\le\frac{C\sqrt{1+T+\frac1T}\,\|u(\cdot,0)\|_{H^1_0(\Omega)}}{\sqrt{\ln\frac{\|u(\cdot,0)\|_{L^2(\Omega)}}{\|u(\cdot,T)\|_{L^2(\omega)}}}}\qquad(1.24)$$
for some positive constant $C=C(\Omega,\omega,p)$ (see the Appendix for the proof).
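Here is the small numerical sketch announced above (ours, not from the paper) for evaluating the composition $A\circ B^{-1}$ that appears in (1.21) and again in (2.4). With the reading $A(x)=e^x/(1+2x)$ and $B(x)=\sqrt x\,e^x$ used above, $B^{-1}$ can be expressed through the principal branch of the Lambert $W$ function, since $\sqrt x\,e^x=y$ is equivalent to $2xe^{2x}=2y^2$.

```python
import numpy as np
from scipy.special import lambertw

# Illustrative helper (ours, not from the paper) for the parameter alpha = A(B^{-1}(y)).
def A(x):
    return np.exp(x) / (1.0 + 2.0*x)           # A(x) = e^x / (1 + 2x), see (1.22)

def B_inv(y):
    # B(x) = sqrt(x) e^x = y  <=>  2x e^{2x} = 2 y^2  <=>  x = W(2 y^2) / 2
    return 0.5 * np.real(lambertw(2.0 * y**2))

def regularization_parameter(y):
    """alpha = A(B^{-1}(y)); in (2.4), y = sqrt(p2*T)*||u0||_{H^1_0} / delta."""
    return A(B_inv(y))

print(regularization_parameter(1e3))            # alpha grows as the noise level shrinks
```

In the local problem, the same composition is applied with the larger argument of (1.21), in which $\delta$ is replaced by the quantity $K_1e^{K_1/T}\|u(\cdot,0)\|^{1-k_1}_{L^2(\Omega)}\delta^{k_1}$ coming from the observation estimate.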
1.7 Outline
Section 2 gives a result for the (GBP) (see Theorem 2.1), where the known data are observed on the whole domain. In Section 3, we construct an observation estimate at one point of time for parabolic equations with time-dependent coefficients (see Theorem 3.1 and Theorem 3.2). This is an important preliminary for the approximate controllability result (see Theorem 4.1), which is studied in Section 4. Lastly, combining the controllability result and the global backward result, we obtain the proof of Theorem 1.1, presented in Section 5.
2 Global backward problem
First of all, we consider the special case $\omega=\Omega$. In [S], Seidman succeeds in recovering the solution at time $t>0$ by an optimal filtering method, under the a priori condition $u(\cdot,0)\in L^2(\Omega)$. Here we use his method to recover the initial data at time $0$, but with the stronger assumption $u(\cdot,0)\in H^1_0(\Omega)$.
Theorem 2.1. Let $u$ be the solution of (1.1) satisfying the a priori condition (1.3). Let $\bar f\in L^2(\Omega)$ and $\delta>0$ have the following property:
$$\|u(\cdot,T)-\bar f\|_{L^2(\Omega)}\le\delta.\qquad(2.1)$$
There exists a function $g\in L^2(\Omega)$ such that for any $\zeta>\frac{\delta^2}{2\lambda_1p_2T\|u(\cdot,0)\|^2_{L^2(\Omega)}}$, the following estimate holds:
$$\|u(\cdot,0)-g\|_{L^2(\Omega)}\le\frac{\sqrt{(1+\zeta)p_2T}\,\|u(\cdot,0)\|_{H^1_0(\Omega)}}{\sqrt{\ln\frac{\sqrt{2\zeta\lambda_1p_2T}\,\|u(\cdot,0)\|_{L^2(\Omega)}}{\delta}}}.\qquad(2.2)$$
Remark 2.1. 1. When $\delta<\|u(\cdot,0)\|_{L^2(\Omega)}e^{-\lambda_1p_2T}$, the approximate solution of the initial data satisfying (2.2) is constructed as
$$g:=\sum_{i=1}^{\infty}\min\left\{e^{\lambda_i\int_0^Tp(s)ds},\bar\alpha\right\}\left(\int_\Omega\bar f(x)e_i(x)\,dx\right)e_i.\qquad(2.3)$$
Here, the regularization parameter $\bar\alpha$ is given by
$$\bar\alpha:=A\left(B^{-1}\left(\frac{\sqrt{p_2T}\,\|u(\cdot,0)\|_{H^1_0(\Omega)}}{\delta}\right)\right)\qquad(2.4)$$
with $A$ and $B$ being defined in (1.22) and (1.23), respectively.
2. When $\delta<\|u(\cdot,0)\|_{L^2(\Omega)}$, we can choose $\zeta=\frac{1}{2\lambda_1p_2T}$ in order to get
$$\|u(\cdot,0)-g\|_{L^2(\Omega)}\le\frac{\sqrt{p_2T+\frac{1}{2\lambda_1}}\,\|u(\cdot,0)\|_{H^1_0(\Omega)}}{\sqrt{\ln\frac{\|u(\cdot,0)\|_{L^2(\Omega)}}{\delta}}}.\qquad(2.5)$$
This is connected with the following well-known estimate:
$$\|u(\cdot,0)\|_{L^2(\Omega)}\le\frac{\sqrt{p_2T}\,\|u(\cdot,0)\|_{H^1_0(\Omega)}}{\sqrt{\ln\frac{\|u(\cdot,0)\|_{L^2(\Omega)}}{\|u(\cdot,T)\|_{L^2(\Omega)}}}}.\qquad(2.6)$$
Proof of Theorem 2.1
Proof. For the case $\delta\ge\|u(\cdot,0)\|_{L^2(\Omega)}e^{-\lambda_1p_2T}$, the estimate (2.2) holds with $g=0$. Indeed, combining this with the fact that $\sqrt{2\zeta\lambda_1p_2T}\le e^{\zeta\lambda_1p_2T}$ for all $\zeta>0$, we get
$$\frac{\sqrt{2\zeta\lambda_1p_2T}\,\|u(\cdot,0)\|_{L^2(\Omega)}}{\delta}\le e^{(1+\zeta)\lambda_1p_2T}.\qquad(2.7)$$
It implies that
$$\frac{1}{\sqrt{\lambda_1}}\le\frac{\sqrt{(1+\zeta)p_2T}}{\sqrt{\ln\frac{\sqrt{2\zeta\lambda_1p_2T}\,\|u(\cdot,0)\|_{L^2(\Omega)}}{\delta}}}.$$
Hence
$$\|u(\cdot,0)\|_{L^2(\Omega)}\le\frac{\|u(\cdot,0)\|_{H^1_0(\Omega)}}{\sqrt{\lambda_1}}\le\frac{\sqrt{(1+\zeta)p_2T}\,\|u(\cdot,0)\|_{H^1_0(\Omega)}}{\sqrt{\ln\frac{\sqrt{2\zeta\lambda_1p_2T}\,\|u(\cdot,0)\|_{L^2(\Omega)}}{\delta}}}.\qquad(2.8)$$
The main case of interest is
$$\delta<\|u(\cdot,0)\|_{L^2(\Omega)}e^{-\lambda_1p_2T}.\qquad(2.9)$$
In this case, we determine the regularized solution at time $0$ as follows. First of all, Step 1 provides the construction of a continuous operator $\mathcal R_\beta$ depending on a parameter $\beta$, which will be chosen later; the regularized solution $g$ is defined by applying this operator to the known data $\bar f$. Secondly, in Step 2, we compute the error between the exact solution and the approximate solution defined in Step 1. Lastly, by minimizing the error of Step 2 with respect to $\beta$, we obtain the final result in Step 3.
Step 1: Construct the regularized solution.
Let us define a continuous operator $\mathcal R_\beta$ depending on a positive parameter $\beta$, which will be chosen later:
$$\mathcal R_\beta:L^2(\Omega)\to L^2(\Omega),\qquad f\mapsto\sum_{i=1}^{\infty}\min\left\{e^{\lambda_i\int_0^Tp(s)ds};\beta\right\}\left(\int_\Omega f(x)e_i(x)\,dx\right)e_i.$$
Put $g:=\mathcal R_\beta\bar f$. We will prove that the function $g$ defined in this way approximates the exact solution $u(\cdot,0)$ for a suitable choice of $\beta$.
Step 2: Compute the error $\|u(\cdot,0)-g\|_{L^2(\Omega)}$.
Put $g_T:=\mathcal R_\beta u(\cdot,T)$. We compute the error by using the triangle inequality
$$\|u(\cdot,0)-g\|_{L^2(\Omega)}\le\|u(\cdot,0)-g_T\|_{L^2(\Omega)}+\|g_T-g\|_{L^2(\Omega)}.\qquad(2.10)$$
On one hand, we have
$$\|g-g_T\|_{L^2(\Omega)}=\left\|\sum_{i=1}^{\infty}\min\left\{e^{\lambda_i\int_0^Tp(s)ds};\beta\right\}\int_\Omega\big(\bar f(x)-u(x,T)\big)e_i(x)\,dx\;e_i\right\|_{L^2(\Omega)}\le\beta\,\|\bar f-u(\cdot,T)\|_{L^2(\Omega)}\le\beta\delta.\qquad(2.11)$$
On the other hand,
$$\begin{aligned}
\|u(\cdot,0)-g_T\|_{L^2(\Omega)}
&=\left\|\sum_{i=1}^{\infty}\int_\Omega u(x,0)e_i(x)dx\,e_i-\sum_{i=1}^{\infty}\min\left\{e^{\lambda_i\int_0^Tp(s)ds};\beta\right\}e^{-\lambda_i\int_0^Tp(s)ds}\int_\Omega u(x,0)e_i(x)dx\,e_i\right\|_{L^2(\Omega)}\\
&=\left\|\sum_{i=1}^{\infty}\left(1-\min\left\{e^{\lambda_i\int_0^Tp(s)ds};\beta\right\}e^{-\lambda_i\int_0^Tp(s)ds}\right)\int_\Omega u(x,0)e_i(x)dx\,e_i\right\|_{L^2(\Omega)}\\
&=\left\|\sum_{e^{\lambda_i\int_0^Tp(s)ds}\ge\beta}\left(1-\beta e^{-\lambda_i\int_0^Tp(s)ds}\right)\int_\Omega u(x,0)e_i(x)dx\,e_i\right\|_{L^2(\Omega)}\\
&=\left\|\sum_{e^{\lambda_i\int_0^Tp(s)ds}\ge\beta}\frac{1-\beta e^{-\lambda_i\int_0^Tp(s)ds}}{\sqrt{\lambda_i}}\,\sqrt{\lambda_i}\int_\Omega u(x,0)e_i(x)dx\,e_i\right\|_{L^2(\Omega)}\\
&\le\sup_{\lambda\ge\lambda_1}\frac{1-\beta e^{-\lambda\int_0^Tp(s)ds}}{\sqrt\lambda}\,\|u(\cdot,0)\|_{H^1_0(\Omega)}\\
&\le\sup_{\lambda\ge\lambda_1}\frac{1-\beta e^{-\lambda p_2T}}{\sqrt{\lambda p_2T}}\,\sqrt{p_2T}\,\|u(\cdot,0)\|_{H^1_0(\Omega)}.\qquad(2.12)
\end{aligned}$$
Now, we solve the problem of finding $\sup_{\lambda\ge\lambda_1}\frac{1-\beta e^{-\lambda p_2T}}{\sqrt{\lambda p_2T}}$.
Define
$$F:[\lambda_1,+\infty)\to(0,+\infty),\qquad\lambda\mapsto\frac{1-\beta e^{-\lambda p_2T}}{\sqrt{\lambda p_2T}}.$$
Obviously, $F$ is differentiable and
$$F'(\lambda)=\frac{\beta p_2Te^{-\lambda p_2T}(1+2\lambda p_2T)-p_2T}{2\lambda p_2T\sqrt{\lambda p_2T}}.\qquad(2.13)$$
The equation $F'(\lambda)=0$ is equivalent to
$$\beta=\frac{e^{\lambda p_2T}}{1+2\lambda p_2T}.\qquad(2.14)$$
We will choose $\beta$ such that equation (2.14) has a unique solution $\bar\lambda\ge\lambda_1$. Let us recall the function $A$ defined in (1.22):
$$A:[0,+\infty)\to\left[\frac{\sqrt e}{2},+\infty\right),\qquad x\mapsto\frac{e^x}{1+2x}.$$
Note that equation (2.14) has a unique solution $\bar\lambda\ge\lambda_1$ if and only if
$$\beta>A(\lambda_1p_2T).\qquad(2.15)$$
Suppose that condition (2.15) is satisfied; then there exists a unique $\bar\lambda\ge\lambda_1$ such that $F'(\bar\lambda)=0$ and $\beta=A(\bar\lambda p_2T)$. We can write $\bar\lambda p_2T=A^{-1}(\beta)$. On the other hand, the fact that $F'(\lambda_1)>0$ leads to the conclusion that $F$ is strictly increasing on $(\lambda_1,\bar\lambda)$ and strictly decreasing on $(\bar\lambda,+\infty)$. Consequently, $F$ attains its supremum at $\bar\lambda$, i.e.
$$F(\bar\lambda)=\sup_{\lambda\ge\lambda_1}F(\lambda).\qquad(2.16)$$
Step 3: Minimize the error with a suitable choice of $\beta$.
Combining the two steps above, we get
$$\|u(\cdot,0)-g\|_{L^2(\Omega)}\le\beta\delta+\frac{1-\beta e^{-\bar\lambda p_2T}}{\sqrt{\bar\lambda p_2T}}\,\sqrt{p_2T}\,\|u(\cdot,0)\|_{H^1_0(\Omega)}=\Theta\,\delta e^{A^{-1}(\beta)}+(1-\Theta)\,\frac{\sqrt{p_2T}\,\|u(\cdot,0)\|_{H^1_0(\Omega)}}{\sqrt{A^{-1}(\beta)}}\qquad(2.17)$$
where $\Theta=\beta e^{-\bar\lambda p_2T}$. Note that
$$\Theta\mathcal A+(1-\Theta)\mathcal B\ge\min\{\mathcal A,\mathcal B\}\qquad\forall\,\mathcal A,\mathcal B>0,\ \forall\,\Theta\in(0,1),$$
with equality if and only if $\mathcal A=\mathcal B$. Hence, in order to minimize the right-hand side of (2.17), we choose $\beta$ such that
$$\delta e^{A^{-1}(\beta)}=\frac{\sqrt{p_2T}\,\|u(\cdot,0)\|_{H^1_0(\Omega)}}{\sqrt{A^{-1}(\beta)}}.\qquad(2.18)$$
The choice $\beta=\bar\alpha:=A\left(B^{-1}\left(\frac{\sqrt{p_2T}\,\|u(\cdot,0)\|_{H^1_0(\Omega)}}{\delta}\right)\right)$ satisfies condition (2.15) (due to the assumption (2.9) on the smallness of $\delta$) and equation (2.18). Therefore, we get the following estimate:
$$\|u(\cdot,0)-g\|_{L^2(\Omega)}\le\frac{\sqrt{p_2T}\,\|u(\cdot,0)\|_{H^1_0(\Omega)}}{\sqrt{A^{-1}(\bar\alpha)}}=\frac{\sqrt{p_2T}\,\|u(\cdot,0)\|_{H^1_0(\Omega)}}{\sqrt{B^{-1}\left(\frac{\sqrt{p_2T}\,\|u(\cdot,0)\|_{H^1_0(\Omega)}}{\delta}\right)}}.\qquad(2.19)$$
Due to the definition of the function $B$ (see (1.23)), (2.19) becomes
$$\frac{\sqrt{p_2T}\,\|u(\cdot,0)\|_{H^1_0(\Omega)}}{\delta}\le\frac{\sqrt{p_2T}\,\|u(\cdot,0)\|_{H^1_0(\Omega)}}{\|u(\cdot,0)-g\|_{L^2(\Omega)}}\,e^{\frac{p_2T\|u(\cdot,0)\|^2_{H^1_0(\Omega)}}{\|u(\cdot,0)-g\|^2_{L^2(\Omega)}}}.\qquad(2.20)$$
Using the fact that $\sqrt{2\zeta}\,x\le e^{\zeta x^2}$ for all $\zeta>0$ and all $x>0$, one obtains
$$\frac{\sqrt{p_2T}\,\|u(\cdot,0)\|_{H^1_0(\Omega)}}{\delta}\le\frac{1}{\sqrt{2\zeta}}\,e^{\frac{(1+\zeta)p_2T\|u(\cdot,0)\|^2_{H^1_0(\Omega)}}{\|u(\cdot,0)-g\|^2_{L^2(\Omega)}}}.\qquad(2.21)$$
It is equivalent to
$$\frac{(1+\zeta)p_2T\|u(\cdot,0)\|^2_{H^1_0(\Omega)}}{\|u(\cdot,0)-g\|^2_{L^2(\Omega)}}\ge\ln\left(\frac{\sqrt{2\zeta p_2T}\,\|u(\cdot,0)\|_{H^1_0(\Omega)}}{\delta}\right).\qquad(2.22)$$
Since $\|u(\cdot,0)\|_{H^1_0(\Omega)}\ge\sqrt{\lambda_1}\,\|u(\cdot,0)\|_{L^2(\Omega)}$, the right-hand side of (2.22) is bounded from below by $\ln\frac{\sqrt{2\zeta\lambda_1p_2T}\,\|u(\cdot,0)\|_{L^2(\Omega)}}{\delta}$, which is positive when $\zeta>\frac{\delta^2}{2\lambda_1p_2T\|u(\cdot,0)\|^2_{L^2(\Omega)}}$. For such $\zeta$, it follows that
$$\|u(\cdot,0)-g\|_{L^2(\Omega)}\le\frac{\sqrt{(1+\zeta)p_2T}\,\|u(\cdot,0)\|_{H^1_0(\Omega)}}{\sqrt{\ln\frac{\sqrt{2\zeta\lambda_1p_2T}\,\|u(\cdot,0)\|_{L^2(\Omega)}}{\delta}}}.\qquad(2.23)$$
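To see the whole construction at work, here is a small self-contained numerical illustration (ours, not from the paper). It uses $\Omega=(0,\pi)$ with the explicit Dirichlet eigenpairs, a constant coefficient $p\equiv p_0$ (so $p_1=p_2=p_0$), forward-solves via (1.16), perturbs the final data, applies $\mathcal R_{\bar\alpha}$ with $\bar\alpha$ from (2.4) (computing $B^{-1}$ through the Lambert $W$ function as in the sketch after Remark 1.1), and compares the reconstruction error with the bound (2.5).

```python
import numpy as np
from scipy.special import lambertw

# Numerical illustration of Theorem 2.1 (ours): Omega = (0, pi), lambda_i = i^2,
# working directly with spectral coefficients, p(t) = p0 constant.
np.random.seed(0)
n_modes, p0, T = 40, 1.0, 0.5
lam = np.arange(1, n_modes + 1, dtype=float)**2
a = 1.0 / (1.0 + lam)                          # coefficients of u(.,0); decay => u(.,0) in H^1_0

u0_L2 = np.sqrt(np.sum(a**2))                  # ||u(.,0)||_{L^2(Omega)}
u0_H1 = np.sqrt(np.sum(lam*a**2))              # ||grad u(.,0)||_{L^2}, used as the H^1_0 norm
uT = a*np.exp(-lam*p0*T)                       # coefficients of u(.,T), formula (1.16)

delta = 1e-4
noise = np.random.uniform(-1.0, 1.0, n_modes)
fbar = uT + delta*noise/np.linalg.norm(noise)  # noisy data with ||fbar - u(.,T)|| = delta

# Regularization parameter (2.4): alpha = A(B^{-1}(sqrt(p2 T)||u0||_{H^1_0}/delta)),
# with A(x) = e^x/(1+2x), B(x) = sqrt(x) e^x, hence B^{-1}(y) = W(2 y^2)/2.
y = np.sqrt(p0*T)*u0_H1/delta
xbar = 0.5*np.real(lambertw(2.0*y**2))
alpha = np.exp(xbar)/(1.0 + 2.0*xbar)

amp = np.exp(np.minimum(lam*p0*T, np.log(alpha)))   # min(e^{lambda_i p0 T}, alpha), overflow-safe
g = amp*fbar                                        # coefficients of g, formula (2.3)

error = np.linalg.norm(a - g)
bound = np.sqrt(p0*T + 0.5/lam[0])*u0_H1/np.sqrt(np.log(u0_L2/delta))   # right-hand side of (2.5)
print(f"reconstruction error = {error:.3e}, bound (2.5) = {bound:.3e}")
```

With these values the reconstruction error stays below the logarithmic bound, as the theorem predicts; shrinking $\delta$ improves the error only logarithmically, which reflects the severe ill-posedness of the backward problem.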
For the local case $\omega\subsetneq\Omega$, the existence of control functions acting on the subdomain at some point of time is required in order to link with the global result. This controllability problem is closely connected with an observability problem, which is studied in the next section.
3 Observability at one point of time
The issue of constructing an observation estimate has been widely studied. It can be solved by a global Carleman inequality, as presented in [FI]; by using the estimate of Lebeau and Robbiano (see [LR]); or by transmutation (see [EZ]). Recently, Phung et al. provided a different method based on properties of the heat kernel with a parametrix of order $0$. In [PW1] and [PW2], the authors work on a linear equation of the form
$$\partial_tv-\Delta v+av+b\cdot\nabla v=0\quad\text{in }\Omega\times(0,T).$$
Here, $a\in L^\infty((0,T),L^q(\Omega))$ with $q\ge2$ if $n=1$ and $q>n$ if $n\ge2$, $b\in L^\infty(\Omega\times(0,T))^n$, and $\Omega$ must be convex. Then, by using some geometrical techniques, Phung et al. improved their previous results by working on a general domain (i.e. $\Omega$ convex or not). For a linear equation of the form
$$\partial_tv-\Delta v+av=0\quad\text{in }\Omega\times(0,T),$$
where $a\in L^\infty(\Omega\times(0,T))$, see [PWZ]. For parabolic equations with space-time coefficients,
$$\partial_tv-\nabla\cdot(A\nabla v)+av+b\cdot\nabla v=0\quad\text{in }\Omega\times(0,T),$$
where $a\in L^\infty(\Omega\times(0,T))$, $b\in L^\infty(\Omega\times(0,T))^n$ and $A$ is an $n\times n$ symmetric positive-definite matrix with $C^2(\bar\Omega\times[0,T])$ coefficients, see [BP]. Here, we also deal with the problem of establishing an observation estimate for a general domain, but for a linear heat equation with time-dependent coefficient
$$\partial_tv-p(t)\Delta v=0\quad\text{in }\Omega\times(0,T),$$
where $p\in C^1([0,T])$. In this section, we establish two observation estimates in two different geometrical settings: the general case (Theorem 3.1) and the special case (Theorem 3.2) in which $\Omega$ is convex or star-shaped with respect to some $x_0\in\Omega$ such that $B(x_0,r):=\{x\,;\,|x-x_0|<r\}\subset\omega$, $0<r<R:=\max_{x\in\bar\Omega}|x-x_0|$. For the special case, we carefully track the constants, which can be explicitly computed. First of all, we state the observation result for a general domain $\Omega$.
Theorem 3.1. There exist constants $K=K(\Omega,\omega,p)>0$ and $\mu=\mu(\Omega,\omega,p)\in(0,1)$ such that the solution of
$$\begin{cases}\partial_tv-p(t)\Delta v=0&\text{in }\Omega\times(0,T),\\ v=0&\text{on }\partial\Omega\times(0,T),\\ v(\cdot,0)\in L^2(\Omega),\end{cases}\qquad(3.1)$$
satisfies
$$\|v(\cdot,T)\|_{L^2(\Omega)}\le Ke^{\frac KT}\,\|v(\cdot,T)\|^{\mu}_{L^2(\omega)}\,\|v(\cdot,0)\|^{1-\mu}_{L^2(\Omega)}.\qquad(3.2)$$
Corollary 3.1. For any $\varepsilon>0$, there exist positive constants $c_1$ and $c_2$ depending on $\Omega$, $\omega$ and $p$ such that the following estimate holds:
$$\|v(\cdot,T)\|^2_{L^2(\Omega)}\le c_1e^{\frac{c_1}T}\,\frac{1}{\varepsilon^{c_2}}\,\|v(\cdot,T)\|^2_{L^2(\omega)}+\varepsilon\,\|v(\cdot,0)\|^2_{L^2(\Omega)}.\qquad(3.3)$$
Proof of Corollary 3.1
Proof of Corollary 3.1
Proof. It follows from (3.2) in Theorem 3.1 that
$$\|v(\cdot,T)\|^2_{L^2(\Omega)}\le K^2e^{\frac{2K}T}\,\|v(\cdot,T)\|^{2\mu}_{L^2(\omega)}\,\|v(\cdot,0)\|^{2(1-\mu)}_{L^2(\Omega)}.$$
Applying Young's inequality $ab\le\frac{a^m}m+\frac{b^q}q$ with
$$a=\left(K^{\frac1\mu}e^{\frac K{\mu T}}\|v(\cdot,T)\|_{L^2(\omega)}\left(\frac1\varepsilon\right)^{\frac{1-\mu}{2\mu}}(1-\mu)^{\frac{1-\mu}{2\mu}}\right)^{2\mu},\qquad b=\left(\left(\frac{\varepsilon}{1-\mu}\right)^{\frac12}\|v(\cdot,0)\|_{L^2(\Omega)}\right)^{2(1-\mu)},$$
$$m=\frac1\mu\qquad\text{and}\qquad q=\frac1{1-\mu},$$
we get
$$\|v(\cdot,T)\|^2_{L^2(\Omega)}\le\mu K^{\frac2\mu}e^{\frac{2K}{\mu T}}\left(\frac1\varepsilon\right)^{\frac{1-\mu}\mu}(1-\mu)^{\frac{1-\mu}\mu}\|v(\cdot,T)\|^2_{L^2(\omega)}+\varepsilon\,\|v(\cdot,0)\|^2_{L^2(\Omega)}.$$
Therefore, we obtain the estimate (3.3) with
$$c_1:=\max\left\{\mu K^{\frac2\mu}(1-\mu)^{\frac{1-\mu}\mu},\ \frac{2K}\mu\right\}\qquad\text{and}\qquad c_2:=\frac{1-\mu}\mu.\qquad(3.4)$$
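As a quick check of the algebra behind this choice of $a$ and $b$ (a verification of ours, not part of the original proof), multiplying them out gives
$$ab=K^2e^{\frac{2K}T}\|v(\cdot,T)\|^{2\mu}_{L^2(\omega)}\left(\tfrac1\varepsilon\right)^{1-\mu}(1-\mu)^{1-\mu}\cdot\left(\tfrac{\varepsilon}{1-\mu}\right)^{1-\mu}\|v(\cdot,0)\|^{2(1-\mu)}_{L^2(\Omega)}=K^2e^{\frac{2K}T}\|v(\cdot,T)\|^{2\mu}_{L^2(\omega)}\|v(\cdot,0)\|^{2(1-\mu)}_{L^2(\Omega)},$$
so Young's inequality applies exactly to the right-hand side of the squared observation estimate; moreover $\frac{b^q}{q}=(1-\mu)\cdot\frac{\varepsilon}{1-\mu}\|v(\cdot,0)\|^2_{L^2(\Omega)}=\varepsilon\|v(\cdot,0)\|^2_{L^2(\Omega)}$, which is precisely the second term of (3.3).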
Our next theorem provides an observation result in a special geometric setting, with explicit constants.
Theorem 3.2. Let $x_0\in\Omega$ and $R:=\max_{x\in\bar\Omega}|x-x_0|$. Suppose that the following assumptions hold:
(i) $\Omega$ is convex or star-shaped with respect to $x_0$;
(ii) $R^2<\dfrac{2p_1^2}{|p'|_\infty}$ if $p\not\equiv\text{constant}$, where $|p'|_\infty=\sup_{t\in[0,T]}|p'(t)|$.
Then the solution of (3.1) satisfies (3.2) with
$$\omega=\{x\in\Omega\,;\,|x-x_0|<r\}\quad\text{where }0<r<R,$$
$$K=\max\left\{\left(4^{1+C_0(1+S)}\,(1+\epsilon)^{n+2C_0(1+S)}\,e^{2C_1(1+S)}\,e^{\frac{r^2}{4p_1}}\right)^{\frac1{2(1+S)}},\ \frac{r^2}{4p_1}(1+S)\right\}$$
and
$$\mu=\frac1{2(1+S)}.$$
Here
$$C_0:=\frac{R^2|p'|_\infty}{2p_1^2},\qquad C_1:=\frac{(2+n)|p'|_\infty}{p_1},$$
$$\epsilon:=\begin{cases}\left(\dfrac{2^{2+\xi}R^2e^{C_1}}{\xi\ln\frac32\,r^2}\right)^{\frac1{1-\xi}}-1,\ \ \xi\in(0,1),&\text{if }C_0=0,\\[3mm]\left(\dfrac{4R^2e^{C_1}}{r^2\left(1-\left(\frac23\right)^{C_0}\right)}\right)^{\frac1{1-C_0}}-1,&\text{if }C_0>0,\end{cases}$$
and
$$S:=\begin{cases}e^{C_1}\,\dfrac{\ln(1+\epsilon)}{\ln\frac32},&\text{if }C_0=0,\\[3mm]\dfrac{(1+\epsilon)^{C_0}}{1-\left(\frac23\right)^{C_0}},&\text{if }C_0>0.\end{cases}$$
Remark 3.1. In the special case $p\equiv1$, the observation estimate (3.2) can be written as
$$\|v(\cdot,T)\|_{L^2(\Omega)}\le\left(4(1+\epsilon)^n\,e^{\frac{r^2}4\left(1+\frac1T\right)}\,\|v(\cdot,T)\|_{L^2(\omega)}\right)^{\frac1{2\left(1+\frac{\ln(1+\epsilon)}{\ln\frac32}\right)}}\,\|v(\cdot,0)\|_{L^2(\Omega)}^{\frac{1+\frac{2\ln(1+\epsilon)}{\ln\frac32}}{2\left(1+\frac{\ln(1+\epsilon)}{\ln\frac32}\right)}}$$
where $\epsilon:=\left(\frac{2^{2+\xi}R^2}{\xi\ln\frac32\,r^2}\right)^{\frac1{1-\xi}}-1>1$ for any $\xi\in(0,1)$.
Interested readers can compare this result with Proposition 2.1 in [PW1], Proposition 2.2 in [PW2] or Theorem 4.2 in [BP].
The main idea of the proof of both theorems is based on the logarithmic convexity method (see [Ve]). In order to establish a kind of logarithmic convexity for a suitable functional, some boundary terms must be dropped or have a favorable sign. This is possible under assumption (ii) of Theorem 3.2. For the general case (Theorem 3.1), we need a local star-shapedness assumption (to get a good sign for the boundary terms) and a suitable cut-off function (to drop some boundary terms). Then, thanks to a covering argument and the propagation of smallness, we get the desired global result. First of all, we collect some preliminary results in the first subsection. The proofs of Theorem 3.2 and Theorem 3.1 are then given in the next two subsections, respectively.
3.1 Preliminary results
The strategy of the proofs of Theorem 3.1 and Theorem 3.2 consists in choosing a suitable functional whose logarithm is convex and in considering the differential inequalities associated with this functional (see Lemma 3.1). Then, by choosing a suitable weight function inspired by the heat kernel (see Corollary 3.2) and solving the resulting ODE inequalities (see Lemma 3.2), we obtain a Hölder-type inequality (see Corollary 3.3). The localization process in the proof of the general case makes the function $F$ appear in Corollary 3.2; it is handled by the technical Lemma 3.3.
Lemma 3.1. Let $\vartheta$ be an open set in $\mathbb R^n$, $x_0\in\vartheta$, $z\in H^1(0,T;H^1_0(\vartheta))$ and $\phi\in C^2(\bar\vartheta\times(0,T))$. We define two functions from $[0,T]$ to $(0,+\infty)$ by
$$y(t):=\int_\vartheta|z(x,t)|^2e^{\phi(x,t)}dx,\qquad N(t):=\frac{p(t)\int_\vartheta|\nabla z(x,t)|^2e^{\phi(x,t)}dx}{\int_\vartheta|z(x,t)|^2e^{\phi(x,t)}dx}.$$
With the notation $G_\phi:=\partial_t\phi+p(t)\Delta\phi+p(t)|\nabla\phi|^2$ and $w:=\partial_tz-p(t)\Delta z$, the following assertions hold for any time $t>0$:
i/
$$y'(t)+2N(t)y(t)=\int_\vartheta G_\phi(x,t)|z(x,t)|^2e^{\phi(x,t)}dx+2\int_\vartheta w(x,t)z(x,t)e^{\phi(x,t)}dx,$$
ii/
$$\begin{aligned}N'(t)&\le\frac{p'(t)}{p(t)}N(t)+\frac{p(t)^2}{y(t)}\int_{\partial\vartheta}|\nabla z(x,t)|^2\,\partial_\nu\phi(x,t)\,e^{\phi(x,t)}dx\\&\quad+\frac{p(t)}{y(t)}\int_\vartheta|\nabla z(x,t)|^2G_\phi(x,t)e^{\phi(x,t)}dx+\frac{1}{2y(t)}\int_\vartheta|w(x,t)|^2e^{\phi(x,t)}dx\\&\quad-\frac{2p(t)^2}{y(t)}\int_\vartheta\nabla z(x,t)\cdot\nabla^2\phi(x,t)\,\nabla z(x,t)\,e^{\phi(x,t)}dx\\&\quad-\frac{p(t)}{y(t)^2}\int_\vartheta G_\phi(x,t)|z(x,t)|^2e^{\phi(x,t)}dx\int_\vartheta|\nabla z(x,t)|^2e^{\phi(x,t)}dx,\end{aligned}$$
where $\nu$ is the unit outward normal vector to $\partial\vartheta$, $\partial_\nu\phi:=\nabla\phi\cdot\nu$, and $\nabla^2\phi$ is the Hessian matrix of $\phi$.
Proof of Lemma 3.1
Proof of Lemma 3.1
Proof. First of all, we prove assertion i/. We have
$$y'(t)=2\int_\vartheta z\,\partial_tz\,e^{\phi}dx+\int_\vartheta|z|^2\,\partial_t\phi\,e^{\phi}dx.$$
With $w:=\partial_tz-p(t)\Delta z$, one has
$$y'(t)=2\int_\vartheta zw\,e^{\phi}dx+2p(t)\int_\vartheta z\,\Delta z\,e^{\phi}dx+\int_\vartheta|z|^2\partial_t\phi\,e^{\phi}dx.\qquad(3.5)$$
Let us compute the second term of (3.5) by integration by parts:
$$\begin{aligned}2p(t)\int_\vartheta z\,\Delta z\,e^{\phi}dx&=-2p(t)\int_\vartheta|\nabla z|^2e^{\phi}dx-2p(t)\int_\vartheta z\,\nabla z\cdot\nabla\phi\,e^{\phi}dx\\&=-2p(t)\int_\vartheta|\nabla z|^2e^{\phi}dx-p(t)\int_\vartheta\nabla(|z|^2)\cdot\nabla\phi\,e^{\phi}dx.\qquad(3.6)\end{aligned}$$
We used the fact that $2z\nabla z=\nabla(|z|^2)$ to get the second equality. Integrating by parts the second term of (3.6) gives
$$-p(t)\int_\vartheta\nabla(|z|^2)\cdot\nabla\phi\,e^{\phi}dx=p(t)\int_\vartheta|z|^2\Delta\phi\,e^{\phi}dx+p(t)\int_\vartheta|z|^2|\nabla\phi|^2e^{\phi}dx.\qquad(3.7)$$
Combining (3.5) and (3.7), we obtain
$$y'(t)=-2p(t)\int_\vartheta|\nabla z|^2e^{\phi}dx+p(t)\int_\vartheta|z|^2\Delta\phi\,e^{\phi}dx+p(t)\int_\vartheta|z|^2|\nabla\phi|^2e^{\phi}dx+\int_\vartheta|z|^2\partial_t\phi\,e^{\phi}dx+2\int_\vartheta zw\,e^{\phi}dx.$$
Thus we get assertion i/. We now turn to the proof of assertion ii/.
Step 1: Compute $\frac{d}{dt}\left(p(t)\int_\vartheta|\nabla z|^2e^{\phi}dx\right)$.
$$\frac{d}{dt}\left(p(t)\int_\vartheta|\nabla z|^2e^{\phi}dx\right)=p'(t)\int_\vartheta|\nabla z|^2e^{\phi}dx+2p(t)\int_\vartheta\nabla z\cdot\partial_t(\nabla z)\,e^{\phi}dx+p(t)\int_\vartheta|\nabla z|^2\partial_t\phi\,e^{\phi}dx=:P_1+P_2+P_3\qquad(3.8)$$
where $P_i$ ($i=1,2,3$) is the $i$-th term on the right-hand side of (3.8). For the second term $P_2$, we integrate by parts, noting that $\partial_tz=0$ on $\partial\vartheta$, to get
$$\begin{aligned}P_2&=2p(t)\int_\vartheta\nabla z\cdot\nabla(\partial_tz)\,e^{\phi}dx=-2p(t)\int_\vartheta\Delta z\,\partial_tz\,e^{\phi}dx-2p(t)\int_\vartheta\partial_tz\,\nabla z\cdot\nabla\phi\,e^{\phi}dx\\&=-2\int_\vartheta|\partial_tz|^2e^{\phi}dx+2\int_\vartheta w\,\partial_tz\,e^{\phi}dx-2p(t)\int_\vartheta\partial_tz\,\nabla z\cdot\nabla\phi\,e^{\phi}dx.\qquad(3.9)\end{aligned}$$
The last equality follows from the fact that $p(t)\Delta z=\partial_tz-w$. For the third term $P_3$, since $G_\phi:=\partial_t\phi+p(t)\Delta\phi+p(t)|\nabla\phi|^2$, we get
$$P_3=p(t)\int_\vartheta|\nabla z|^2\partial_t\phi\,e^{\phi}dx=p(t)\int_\vartheta|\nabla z|^2G_\phi\,e^{\phi}dx-p(t)^2\int_\vartheta|\nabla z|^2\Delta\phi\,e^{\phi}dx-p(t)^2\int_\vartheta|\nabla z|^2|\nabla\phi|^2e^{\phi}dx.\qquad(3.10)$$
Integrating by parts the second term of (3.10) gives
$$-p(t)^2\int_\vartheta|\nabla z|^2\Delta\phi\,e^{\phi}dx=p(t)^2\int_\vartheta\nabla(|\nabla z|^2)\cdot\nabla\phi\,e^{\phi}dx+p(t)^2\int_\vartheta|\nabla z|^2|\nabla\phi|^2e^{\phi}dx-p(t)^2\int_{\partial\vartheta}|\nabla z|^2\partial_\nu\phi\,e^{\phi}dx.\qquad(3.11)$$
Now, we compute the first term of (3.11) by using the standard summation convention:
$$\begin{aligned}p(t)^2\int_\vartheta\nabla(|\nabla z|^2)\cdot\nabla\phi\,e^{\phi}dx&=p(t)^2\int_\vartheta\partial_i(|\partial_jz|^2)\,\partial_i\phi\,e^{\phi}dx=2p(t)^2\int_\vartheta\partial_jz\,\partial^2_{ij}z\,\partial_i\phi\,e^{\phi}dx\\&=-2p(t)^2\int_\vartheta\partial^2_{jj}z\,\partial_iz\,\partial_i\phi\,e^{\phi}dx-2p(t)^2\int_\vartheta\partial_jz\,\partial_iz\,\partial^2_{ij}\phi\,e^{\phi}dx\\&\quad-2p(t)^2\int_\vartheta\partial_jz\,\partial_iz\,\partial_i\phi\,\partial_j\phi\,e^{\phi}dx+2p(t)^2\int_{\partial\vartheta}\partial_jz\,\partial_iz\,\partial_i\phi\,\nu_j\,e^{\phi}dx.\qquad(3.12)\end{aligned}$$
Thus, we can write
$$\begin{aligned}p(t)^2\int_\vartheta\nabla(|\nabla z|^2)\cdot\nabla\phi\,e^{\phi}dx&=-2p(t)^2\int_\vartheta\Delta z\,\nabla z\cdot\nabla\phi\,e^{\phi}dx-2p(t)^2\int_\vartheta\nabla z\cdot\nabla^2\phi\,\nabla z\,e^{\phi}dx\\&\quad-2p(t)^2\int_\vartheta|\nabla z\cdot\nabla\phi|^2e^{\phi}dx+2p(t)^2\int_{\partial\vartheta}|\nabla z|^2\partial_\nu\phi\,e^{\phi}dx.\qquad(3.13)\end{aligned}$$
Combining (3.10), (3.11) and (3.13), the third term $P_3$ of (3.8) becomes
$$\begin{aligned}P_3&=p(t)\int_\vartheta|\nabla z|^2G_\phi\,e^{\phi}dx-2p(t)^2\int_\vartheta\Delta z\,\nabla z\cdot\nabla\phi\,e^{\phi}dx-2p(t)^2\int_\vartheta\nabla z\cdot\nabla^2\phi\,\nabla z\,e^{\phi}dx\\&\quad-2p(t)^2\int_\vartheta|\nabla z\cdot\nabla\phi|^2e^{\phi}dx+p(t)^2\int_{\partial\vartheta}|\nabla z|^2\partial_\nu\phi\,e^{\phi}dx.\qquad(3.14)\end{aligned}$$
Thus, from (3.9) and (3.14), (3.8) can be written as
$$\begin{aligned}\frac{d}{dt}\left(p(t)\int_\vartheta|\nabla z|^2e^{\phi}dx\right)&=p'(t)\int_\vartheta|\nabla z|^2e^{\phi}dx-2\int_\vartheta|\partial_tz|^2e^{\phi}dx+2\int_\vartheta w\,\partial_tz\,e^{\phi}dx\\&\quad-2p(t)\int_\vartheta\partial_tz\,\nabla z\cdot\nabla\phi\,e^{\phi}dx+p(t)\int_\vartheta|\nabla z|^2G_\phi\,e^{\phi}dx\\&\quad-2p(t)^2\int_\vartheta\Delta z\,\nabla z\cdot\nabla\phi\,e^{\phi}dx-2p(t)^2\int_\vartheta\nabla z\cdot\nabla^2\phi\,\nabla z\,e^{\phi}dx\\&\quad-2p(t)^2\int_\vartheta|\nabla z\cdot\nabla\phi|^2e^{\phi}dx+p(t)^2\int_{\partial\vartheta}|\nabla z|^2\partial_\nu\phi\,e^{\phi}dx.\qquad(3.15)\end{aligned}$$
Since $p(t)\Delta z=\partial_tz-w$, one has
$$-2p(t)^2\int_\vartheta\Delta z\,\nabla z\cdot\nabla\phi\,e^{\phi}dx=-2p(t)\int_\vartheta\partial_tz\,\nabla z\cdot\nabla\phi\,e^{\phi}dx+2p(t)\int_\vartheta w\,\nabla z\cdot\nabla\phi\,e^{\phi}dx.$$
Moreover, we also have
$$\begin{aligned}&-2\int_\vartheta|\partial_tz|^2e^{\phi}dx+2\int_\vartheta w\,\partial_tz\,e^{\phi}dx-4p(t)\int_\vartheta\partial_tz\,\nabla z\cdot\nabla\phi\,e^{\phi}dx+2p(t)\int_\vartheta w\,\nabla z\cdot\nabla\phi\,e^{\phi}dx-2p(t)^2\int_\vartheta|\nabla z\cdot\nabla\phi|^2e^{\phi}dx\\&=-2\int_\vartheta\left|\partial_tz+p(t)\nabla z\cdot\nabla\phi-\tfrac12w\right|^2e^{\phi}dx+\frac12\int_\vartheta|w|^2e^{\phi}dx.\qquad(3.16)\end{aligned}$$
Thus, (3.15) and (3.16) imply that
$$\begin{aligned}\frac{d}{dt}\left(p(t)\int_\vartheta|\nabla z|^2e^{\phi}dx\right)&=p'(t)\int_\vartheta|\nabla z|^2e^{\phi}dx+p(t)\int_\vartheta|\nabla z|^2G_\phi\,e^{\phi}dx\\&\quad-2p(t)^2\int_\vartheta\nabla z\cdot\nabla^2\phi\,\nabla z\,e^{\phi}dx+p(t)^2\int_{\partial\vartheta}|\nabla z|^2\partial_\nu\phi\,e^{\phi}dx\\&\quad-2\int_\vartheta\left|\partial_tz+p(t)\nabla z\cdot\nabla\phi-\tfrac12w\right|^2e^{\phi}dx+\frac12\int_\vartheta|w|^2e^{\phi}dx.\qquad(3.17)\end{aligned}$$
Step 2: Compute $y'(t)\,p(t)\int_\vartheta|\nabla z|^2e^{\phi}dx$.
From assertion i/, we have
$$\begin{aligned}y'(t)\,p(t)\int_\vartheta|\nabla z|^2e^{\phi}dx&=-2\left(p(t)\int_\vartheta|\nabla z|^2e^{\phi}dx\right)^2+2p(t)\int_\vartheta zw\,e^{\phi}dx\int_\vartheta|\nabla z|^2e^{\phi}dx\\&\quad+p(t)\int_\vartheta G_\phi|z|^2e^{\phi}dx\int_\vartheta|\nabla z|^2e^{\phi}dx\\&=2\mathcal A(\mathcal B-\mathcal A)+p(t)\int_\vartheta G_\phi|z|^2e^{\phi}dx\int_\vartheta|\nabla z|^2e^{\phi}dx.\qquad(3.18)\end{aligned}$$
Here
$$\mathcal A:=p(t)\int_\vartheta|\nabla z|^2e^{\phi}dx\qquad\text{and}\qquad\mathcal B:=\int_\vartheta zw\,e^{\phi}dx.$$
Our target is to make the term $\partial_tz+p(t)\nabla z\cdot\nabla\phi-\frac12w$ appear. First of all, we compute $\mathcal A$ by integration by parts:
$$\begin{aligned}\mathcal A&=p(t)\int_\vartheta|\nabla z|^2e^{\phi}dx=-p(t)\int_\vartheta z\,\Delta z\,e^{\phi}dx-p(t)\int_\vartheta z\,\nabla z\cdot\nabla\phi\,e^{\phi}dx\\&=\int_\vartheta wz\,e^{\phi}dx-\int_\vartheta\partial_tz\,z\,e^{\phi}dx-p(t)\int_\vartheta z\,\nabla z\cdot\nabla\phi\,e^{\phi}dx\\&=-\int_\vartheta\left(\partial_tz+p(t)\nabla z\cdot\nabla\phi-\tfrac12w\right)z\,e^{\phi}dx+\frac12\int_\vartheta wz\,e^{\phi}dx.\qquad(3.19)\end{aligned}$$
Thus
$$\mathcal B-\mathcal A=\int_\vartheta\left(\partial_tz+p(t)\nabla z\cdot\nabla\phi-\tfrac12w\right)z\,e^{\phi}dx+\frac12\int_\vartheta wz\,e^{\phi}dx.\qquad(3.20)$$
Combining (3.18), (3.19) and (3.20), one gets
$$\begin{aligned}y'(t)\,p(t)\int_\vartheta|\nabla z|^2e^{\phi}dx&=\frac12\left(\int_\vartheta wz\,e^{\phi}dx\right)^2-2\left(\int_\vartheta\left(\partial_tz+p(t)\nabla z\cdot\nabla\phi-\tfrac12w\right)z\,e^{\phi}dx\right)^2\\&\quad+p(t)\int_\vartheta G_\phi|z|^2e^{\phi}dx\int_\vartheta|\nabla z|^2e^{\phi}dx.\qquad(3.21)\end{aligned}$$
Step 3: Compute $N'(t)$.
We have
$$N'(t)=\frac{1}{y(t)^2}\left(y(t)\,\frac{d}{dt}\left(p(t)\int_\vartheta|\nabla z|^2e^{\phi}dx\right)-y'(t)\,p(t)\int_\vartheta|\nabla z|^2e^{\phi}dx\right).$$
The result (3.17) of Step 1 and (3.21) of Step 2 provide
$$\begin{aligned}N'(t)&=\frac{p'(t)}{p(t)}N(t)+\frac{p(t)^2}{y(t)}\int_{\partial\vartheta}|\nabla z|^2\partial_\nu\phi\,e^{\phi}dx+\frac{p(t)}{y(t)}\int_\vartheta|\nabla z|^2G_\phi\,e^{\phi}dx\\&\quad-\frac{2p(t)^2}{y(t)}\int_\vartheta\nabla z\cdot\nabla^2\phi\,\nabla z\,e^{\phi}dx+\frac{1}{2y(t)}\int_\vartheta|w|^2e^{\phi}dx\\&\quad-\frac{2}{y(t)}\int_\vartheta\left|\partial_tz+p(t)\nabla z\cdot\nabla\phi-\tfrac12w\right|^2e^{\phi}dx\\&\quad+\frac{2}{y(t)^2}\left(\int_\vartheta\left(\partial_tz+p(t)\nabla z\cdot\nabla\phi-\tfrac12w\right)z\,e^{\phi}dx\right)^2\\&\quad-\frac{1}{2y(t)^2}\left(\int_\vartheta wz\,e^{\phi}dx\right)^2-\frac{p(t)}{y(t)^2}\int_\vartheta G_\phi|z|^2e^{\phi}dx\int_\vartheta|\nabla z|^2e^{\phi}dx.\end{aligned}$$
Thanks to the Cauchy-Schwarz inequality,
$$\left(\int_\vartheta\left(\partial_tz+p(t)\nabla z\cdot\nabla\phi-\tfrac12w\right)z\,e^{\phi}dx\right)^2\le\int_\vartheta\left|\partial_tz+p(t)\nabla z\cdot\nabla\phi-\tfrac12w\right|^2e^{\phi}dx\int_\vartheta|z|^2e^{\phi}dx,$$
we obtain assertion ii/.
Now, by choosing an explicit weight function $e^{\phi}$ inspired by the heat kernel, we get the following result.
Corollary 3.2. Under the same assumptions as in Lemma 3.1, put $R:=\max_{x\in\bar\vartheta}|x-x_0|$ and assume that $\vartheta$ is a convex domain or star-shaped with respect to $x_0$. For any $\rho>0$, with $\phi$ chosen as
$$\phi(x,t):=-\frac{|x-x_0|^2}{4p(T)(T-t+\rho)}-\frac n2\ln(T-t+\rho),\qquad(3.22)$$
we obtain the two following estimates:
i/
$$|y'(t)+2N(t)y(t)|\le\left(\frac{C_0}{T-t+\rho}+C_1\right)y(t)+2\int_\vartheta|w(x,t)z(x,t)|e^{\phi(x,t)}dx,\qquad(3.23)$$
ii/
$$N'(t)\le\left(\frac{1+C_0}{T-t+\rho}+C_1\right)N(t)+\frac{\frac12\int_\vartheta|w(x,t)|^2e^{\phi(x,t)}dx}{y(t)},\qquad(3.24)$$
where
$$C_0=\frac{|p'|_\infty R^2}{2p_1^2}\qquad\text{and}\qquad C_1=\frac{(2+n)|p'|_\infty}{p_1}.$$
Proof of Corollary 3.2
Proof of Corollary 3.2
Proof. One can easily check the following properties of the function $\phi$:
(1) $\partial_t\phi+p(T)\Delta\phi+p(T)|\nabla\phi|^2=0$,
(2) $\nabla\phi=-\dfrac{x-x_0}{2p(T)(T-t+\rho)}$,
(3) $\Delta\phi=-\dfrac{n}{2p(T)(T-t+\rho)}$,
(4) $\nabla^2\phi=-\dfrac{1}{2p(T)(T-t+\rho)}\,I_n$, where $I_n$ is the identity matrix of size $n$.
Recall that $G_\phi=\partial_t\phi+p(t)\Delta\phi+p(t)|\nabla\phi|^2$. Thanks to properties (1), (2) and (3), we get
$$|G_\phi|\le|p(t)-p(T)|\,|\Delta\phi|+|p(t)-p(T)|\,|\nabla\phi|^2\le\frac{n|p'|_\infty}{2p(T)}+\frac{|p'|_\infty R^2}{4p(T)^2}\,\frac{1}{T-t+\rho}\le\frac{n|p'|_\infty}{2p_1}+\frac{|p'|_\infty R^2}{4p_1^2}\,\frac{1}{T-t+\rho}.\qquad(3.25)$$
Hence, from assertion i/ of Lemma 3.1, we get assertion i/. Now, we turn to the proof of assertion ii/. Thanks to the assumption that $\vartheta$ is star-shaped with respect to $x_0$, one has
$$\partial_\nu\phi=-\frac{(x-x_0)\cdot\nu}{2p(T)(T-t+\rho)}\le0\qquad\forall x\in\partial\vartheta.\qquad(3.26)$$
Furthermore, property (4) implies
$$\int_\vartheta\nabla z\cdot\nabla^2\phi\,\nabla z\,e^{\phi}dx=-\frac{1}{2p(T)(T-t+\rho)}\int_\vartheta|\nabla z|^2e^{\phi}dx.\qquad(3.27)$$
Consequently, combining assertion ii/ of Lemma 3.1 with (3.25), (3.26) and (3.27), we get assertion ii/.
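As a quick symbolic sanity check (ours, not part of the paper's argument), the key property (1) of the weight $\phi$ can be verified, for instance with sympy in two space dimensions, taking $x_0=0$ and writing $P$ for the constant $p(T)$:

```python
import sympy as sp

# Symbolic check (illustrative, not part of the paper's proof) of property (1):
# for phi(x,t) = -|x - x0|^2 / (4 p(T) (T - t + rho)) - (n/2) ln(T - t + rho),
# one has  d_t phi + p(T) Lap(phi) + p(T) |grad phi|^2 = 0.
# Here n = 2, x0 = 0, and P stands for the constant p(T).
x1, x2, t, T, rho, P = sp.symbols('x1 x2 t T rho P', positive=True)

s = T - t + rho                                    # shifted time T - t + rho
phi = -(x1**2 + x2**2) / (4*P*s) - sp.log(s)       # n/2 = 1 since n = 2

grad = [sp.diff(phi, v) for v in (x1, x2)]
lap = sum(sp.diff(phi, v, 2) for v in (x1, x2))
expr = sp.diff(phi, t) + P*lap + P*sum(g**2 for g in grad)

print(sp.simplify(expr))   # prints 0, confirming property (1)
```

The same computation, done by hand in any dimension $n$, also yields properties (2)-(4) used above.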
Now, the following lemma solves the ODE inequalities obtained from Corollary 3.2.
Lemma 3.2. Let $\rho>0$ and $F\in C^0([0,T])$. Suppose that two positive functions $y,N\in C^1([0,T])$ satisfy the following conditions:
1.
$$|y'(t)+2N(t)y(t)|\le\left(\frac{C_0}{T-t+\rho}+C_1+F(t)\right)y(t),\qquad(3.28)$$
2.
$$N'(t)\le\left(\frac{1+C_0}{T-t+\rho}+C_1\right)N(t)+\frac12F(t),\qquad(3.29)$$
where $C_0,C_1>0$. Then for any $0\le t_1<t_2<t_3\le T$, one has
$$\left(y(t_2)\right)^{1+M}\le e^{G}\left(\frac{T-t_1+\rho}{T-t_3+\rho}\right)^{C_0(1+M)}y(t_3)\left(y(t_1)\right)^{M}\qquad(3.30)$$
with
$$M=\frac{\int_{t_2}^{t_3}\frac{e^{-C_1s}}{(T-s+\rho)^{1+C_0}}\,ds}{\int_{t_1}^{t_2}\frac{e^{-C_1s}}{(T-s+\rho)^{1+C_0}}\,ds},\qquad G=(1+M)(t_3-t_1)\int_{t_1}^{t_3}F(s)ds+\int_{t_1}^{t_3}F(s)ds+(t_3-t_1)C_1.$$
Proof of Lemma 3.2
Proof of Lemma 3.2
Proof. From (3.29), we get
$$\left(N(t)\,(T-t+\rho)^{1+C_0}e^{-C_1t}\right)'\le\frac12F(t)\,(T-t+\rho)^{1+C_0}e^{-C_1t}.\qquad(3.31)$$
For $t_1<t<t_2$: integrating (3.31) over $(t,t_2)$ gives
$$N(t)\ge N(t_2)\left(\frac{T-t_2+\rho}{T-t+\rho}\right)^{1+C_0}e^{-C_1(t_2-t)}-\frac12\,e^{C_1t}\left(\frac{1}{T-t+\rho}\right)^{1+C_0}\int_t^{t_2}F(s)\,(T-s+\rho)^{1+C_0}e^{-C_1s}ds.\qquad(3.32)$$
Using the fact that $(T-s+\rho)^{1+C_0}e^{-C_1s}\le(T-t+\rho)^{1+C_0}e^{-C_1t}$ for all $s\ge t$, one gets
$$N(t)\ge Q(t_2)\,\frac{e^{C_1t}}{(T-t+\rho)^{1+C_0}}-\frac12\int_{t_1}^{t_2}F(s)ds\qquad(3.33)$$
where $Q(t_2)=e^{-C_1t_2}(T-t_2+\rho)^{1+C_0}N(t_2)$. From (3.28), we also have
$$y'(t)+2N(t)y(t)\le\left(\frac{C_0}{T-t+\rho}+C_1+F(t)\right)y(t).\qquad(3.34)$$
Combining this with (3.33), we obtain
$$y'(t)+\left(\frac{2Q(t_2)\,e^{C_1t}}{(T-t+\rho)^{1+C_0}}-\int_{t_1}^{t_2}F(s)ds-\frac{C_0}{T-t+\rho}-C_1-F(t)\right)y(t)\le0.$$
It is equivalent to
$$\frac{d}{dt}\left(y(t)\,e^{2Q(t_2)\int_0^t\frac{e^{C_1s}}{(T-s+\rho)^{1+C_0}}ds}\,e^{-\left(\int_{t_1}^{t_2}F(s)ds+C_1\right)t}\,(T-t+\rho)^{C_0}\,e^{-\int_0^tF(s)ds}\right)\le0.\qquad(3.35)$$
Integrating (3.35) over $(t_1,t_2)$, one has
$$y(t_1)\ge y(t_2)\,e^{2Q(t_2)\int_{t_1}^{t_2}\frac{e^{C_1s}}{(T-s+\rho)^{1+C_0}}ds}\,e^{-\left(\int_{t_1}^{t_2}F(s)ds+C_1\right)(t_2-t_1)}\left(\frac{T-t_2+\rho}{T-t_1+\rho}\right)^{C_0}e^{-\int_{t_1}^{t_2}F(s)ds}.\qquad(3.36)$$
For $t_2<t<t_3$: integrating (3.31) over $(t_2,t)$ gives
$$N(t)\le N(t_2)\left(\frac{T-t_2+\rho}{T-t+\rho}\right)^{1+C_0}e^{-C_1(t_2-t)}+\frac12\,e^{C_1t}\left(\frac{1}{T-t+\rho}\right)^{1+C_0}\int_{t_2}^tF(s)\,(T-s+\rho)^{1+C_0}e^{-C_1s}ds.\qquad(3.37)$$
Using the fact that $(T-s+\rho)^{1+C_0}e^{-C_1s}\le(T-t_2+\rho)^{1+C_0}e^{-C_1t_2}$ for all $s\ge t_2$, we obtain
$$N(t)\le Q(t_2)\left(\frac{1}{T-t+\rho}\right)^{1+C_0}e^{C_1t}+\frac12\,e^{C_1(t-t_2)}\left(\frac{T-t_2+\rho}{T-t+\rho}\right)^{1+C_0}\int_{t_2}^{t_3}F(s)ds.\qquad(3.38)$$
From (3.28), we also have
$$y'(t)+2N(t)y(t)\ge-\left(\frac{C_0}{T-t+\rho}+C_1+F(t)\right)y(t).\qquad(3.39)$$
It follows from (