arXiv:1802.04978v1 [math.DS] 14 Feb 2018
Singularly perturbed forward-backward stochastic differential equations:
application to the optimal control of bilinear systems
Omar Kebiri∗†, Lara Neureither†, and Carsten Hartmann†
Abstract: We study linear-quadratic stochastic optimal control problems with bilinear state
dependence for which the underlying stochastic differential equation (SDE) consists of slow
and fast degrees of freedom. We show that, in the same way in which the underlying dynam-
ics can be well approximated by a reduced order effective dynamics in the time scale limit
(using classical homogenization results), the associated optimal expected cost converges in
the time scale limit to an effective optimal cost. This entails that we can well approximate
the stochastic optimal control for the whole system by the reduced order stochastic optimal
control, which is clearly easier to solve because of lower dimensionality. The approach uses an
equivalent formulation of the Hamilton-Jacobi-Bellman (HJB) equation, in terms of forward-
backward SDEs (FBSDEs). We exploit the efficient solvability of FBSDEs via a least squares
Monte Carlo algorithm and show its applicability by a suitable numerical example.
Keywords: Linear quadratic stochastic control, bilinear systems, slow-fast dynamics, model
reduction, forward-backward stochastic differential equations, least squares Monte Carlo.
1. Introduction. Stochastic optimal control is one of the important fields in mathematics
which has attracted the attention of both pure and applied mathematicians [57,27]. Stochastic
control problems also appear in a variety of applications, such as statistics [61,23], financial
mathematics [21,55], molecular dynamics [58,34] or materials science [56,4], to mention
just a few. For some applications in science and engineering, such as molecular dynamics
[58,62], the high dimensionality of the state space is an important aspect when solving optimal
control problems. Another issue when solving optimal control problems by discretising the
corresponding dynamic programming equations in space and time is the presence of multiscale
effects that come into play when the state space dynamics exhibit slow and fast motions.
Here we consider such systems that have slow and fast scales and that are possibly high-
dimensional. Several techniques have been developed to reduce the spatial dimension of control
systems (see e.g. [3,10] and the references therein), but these techniques treat the control as
a possibly time-dependent parameter (“open loop control”) and do not take into account that
the control may be a feedback control that depends on the state variables (“closed loop con-
trol”). Clearly, homogenization techniques for stochastic control systems have been extensively
studied by applied analysts using a variety of different mathematical tools, including viscosity
solutions of the Hamilton-Jacobi-Bellman equation [13,25], backward stochastic differential
equations [17,39], or occupation measures [45,46]. The convergence analysis of multiscale
stochastic control systems is quite involved and non-constructive, in that the limiting equa-
tions of motion are not given in explicit or closed form; see [43,42] for notable exceptions,
dealing mainly with the case when the dynamics is linear.
∗Institute of Mathematics, Freie Universität Berlin, Arnimallee 6, 14195 Berlin, Germany
†Brandenburgische Technische Universität (BTU) Cottbus-Senftenberg
In general, the elimination of variables and solving control problems do not commute,
so one of the key questions in control engineering is under which conditions it is possible
to eliminate variables before solving an optimal control problem. We call this the model
reduction problem. In this paper we identify a class of stochastic feedback control problems
with bilinear state dependence that have the property that they admit the elimination of
variables (i.e. model reduction) before solving the control problem. These systems turn out to be relevant
in the control of high-dimensional transport PDEs, such as Fokker-Planck equations or the
evolution equations of open quantum systems [33,49]. Our approach is based on a Donsker-
Varadhan type duality principle between a linear Feynman-Kac PDE and the semi-linear
dynamic programming PDE associated with a stochastic control problem [32]. Here we exploit
the fact that the dynamic programming PDE can be recast as an uncoupled forward-backward
stochastic differential equation (see e.g. [54,59]) that can be treated by model reduction
techniques, such as averaging or homogenisation.
The relation between semilinear PDEs of Hamilton-Jacobi-Bellman type and forward-
backward stochastic differential equations (FBSDE) is a classical subject that has been first
studied by Pardoux and Peng [52] and has since then received a lot of attention from various sides,
e.g.[6,22,24,36,37,44]. The solution theory has its roots in the work of Antonelli [2] and
since then has been extended in various directions; see e.g. [5,7,63,47].
From a theoretical point of view, this paper goes beyond our previous works [32,35]
in that we prove strong convergence of the value function and the control without relying
on compactness or periodicity assumptions for the fast variables, even though we focus on
bilinear systems only, which is the weakest form of nonlinearity. (Many nonlinear systems
however can be represented as bilinear systems by a so-called Carleman linearisation.) It also
goes beyond the classical works [43,42] that treat systems that are either fully linear or linear
in the fast variables. We stress that we are mainly aiming at the model reduction problem,
but we discuss, alongside the theoretical results, some ideas for discretising the corresponding
FBSDE [8,19,15,9,38], since one of the main motivations for doing model reduction is to
reduce the numerical complexity of solving optimal control problems.
1.1. Set-up and problem statement. We briefly discuss the technical set-up of the control
problem considered in this paper. We consider the linear-quadratic (LQ) stochastic control
problem of the following form: minimize the expected cost
(1) J(u; t, x) = E\left[ \int_t^\tau \big( q_0(X^u_s) + |u_s|^2 \big)\, ds + q_1(X^u_\tau) \,\Big|\, X^u_t = x \right]
over all admissible controls u ∈ U and subject to
(2) dX^u_s = \big( a(X^u_s) + b(X^u_s) u_s \big)\, ds + σ(X^u_s)\, dW_s, \quad 0 ≤ t ≤ s ≤ τ.
Here τ < ∞ is a bounded stopping time (specified below), and the set of admissible controls
U is chosen such that (2) has a unique strong solution. The denomination linear-quadratic for
(1)–(2) is due to the specific dependence of the system on the control variable u. The state
vector x ∈ R^n is assumed to be high-dimensional, which is why we seek a low-dimensional
approximation of (1)–(2).
Specifically, we consider the case that q_0 and q_1 are quadratic in x, a is linear and σ is
constant, and the control term is an affine function of x, i.e.,
b(x)u = (Nx + B)u.
In this case the system is called bilinear (including linear systems as a special case), and the
aim is to replace (2) by a lower dimensional bilinear system
d\bar{X}^v_s = \bar{A}\bar{X}^v_s\, ds + (\bar{N}\bar{X}^v_s + \bar{B}) v_s\, ds + \bar{C}\, dW_s, \quad 0 ≤ t ≤ s ≤ τ,
with states \bar{x} ∈ R^{n_s}, n_s ≪ n, and an associated reduced cost functional
\bar{J}(v; \bar{x}, t) = E\left[ \int_t^\tau \big( \bar{q}_0(\bar{X}^v_s) + |v_s|^2 \big)\, ds + \bar{q}_1(\bar{X}^v_\tau) \,\Big|\, \bar{X}^v_t = \bar{x} \right],
that is solved instead of (1)–(2). Letting v^* denote the minimizer of \bar{J}, we require that v^* is a
good approximation of the minimizer u^* of the original problem, where "good approximation"
is understood in the sense that
J(v^*; ·, t = 0) ≈ J(u^*; ·, t = 0).
In the last equation, closeness must be suitably interpreted, e.g. uniformly on all compact
subsets of R^n × [0, T) for some T < ∞.
One situation in which the above approximation property holds is when u^* ≈ v^* uniformly
in t and the cost is continuous in the control, but it turns out that this requirement will be
too strong in general and overly restrictive. We will discuss alternative criteria in the course
of this paper.
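To fix ideas, the following minimal Python sketch shows how the cost (1) can be estimated by Monte Carlo for a given closed-loop control. All coefficients and the feedback law are illustrative placeholders, and the bounded stopping time is replaced by a fixed horizon T for simplicity; it is a sketch under these assumptions, not the implementation used later in the paper.

```python
import numpy as np

# Sketch: Euler-Maruyama simulation of the controlled SDE (2) and a Monte Carlo
# estimate of the cost (1) for a *given* feedback law. All coefficients (a, b,
# sigma, q0, q1, the feedback u) are illustrative placeholders.

def simulate_cost(x0, u_feedback, a, b, sigma, q0, q1, T=1.0, dt=1e-3, M=1000, rng=None):
    """Estimate J(u; 0, x0) = E[ int_0^T (q0(X_s) + |u_s|^2) ds + q1(X_T) ]."""
    rng = np.random.default_rng() if rng is None else rng
    n = len(x0)
    N = int(T / dt)
    costs = np.zeros(M)
    for i in range(M):
        x = np.array(x0, dtype=float)
        running = 0.0
        for k in range(N):
            u = u_feedback(k * dt, x)                       # closed-loop control
            running += (q0(x) + np.dot(u, u)) * dt          # accumulate running cost
            dW = np.sqrt(dt) * rng.standard_normal(n)
            x = x + (a(x) + b(x) @ u) * dt + sigma(x) @ dW  # Euler-Maruyama step
        costs[i] = running + q1(x)
    return costs.mean()

# Example with a 2d linear drift, constant noise and the zero control:
A = np.array([[-1.0, 0.5], [0.0, -2.0]])
est = simulate_cost(
    x0=[1.0, 0.0],
    u_feedback=lambda t, x: np.zeros(1),
    a=lambda x: A @ x,
    b=lambda x: np.ones((2, 1)),
    sigma=lambda x: 0.3 * np.eye(2),
    q0=lambda x: 0.5 * x @ x,
    q1=lambda x: 0.0,
)
print(est)
```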
1.2. Outline. The paper is organised as follows: In Section 2 we introduce the bilinear
stochastic control problem studied in this paper and derive the corresponding forward-backward
stochastic differential equation (FBSDE). Section 3 contains the main result, a convergence
result for the value function of a singularly perturbed control problem with bilinear state
dependence, based on an FBSDE formulation. In Section 4 we present a numerical example
to illustrate the theoretical findings and discuss the numerical discretization of the FBSDE.
The article concludes in Section 5 with a short summary and a discussion of future work.
The proof of the main result and some technical lemmas are recorded in the Appendix.
2. Singularly perturbed bilinear control systems. We now specify the system dynamics
(2) and the corresponding cost functional (1). Let (x_1, x_2) ∈ R^{n_s} × R^{n_f} with n_s + n_f = n
denote a decomposition of the state vector x ∈ R^n into relevant (slow) and irrelevant (fast)
components. Further let W = (W_t)_{t≥0} denote an R^m-valued Brownian motion on a probability
space (Ω, F, P) that is endowed with the filtration (F_t)_{t≥0} generated by W. For any initial
condition x ∈ R^n and any A-valued admissible control u ∈ U, with A ⊂ R, we consider the
following system of Itô stochastic differential equations
(3) dX^ε_s = AX^ε_s\, ds + (NX^ε_s + B) u_s\, ds + C\, dW_s, \quad X^ε_t = x,
that depends parametrically on a parameter ε > 0 via the coefficients
A = A^ε ∈ R^{n×n}, \quad N = N^ε ∈ R^{n×n}, \quad B = B^ε ∈ R^n, \quad C = C^ε ∈ R^{n×m},
where for brevity we also drop the dependence of the process on the control u, i.e. X^ε_s = X^{u,ε}_s.
The stiffness matrix A in (3) is assumed to be of the form
(4) A = \begin{pmatrix} A_{11} & ε^{-1/2} A_{12} \\ ε^{-1/2} A_{21} & ε^{-1} A_{22} \end{pmatrix} ∈ R^{(n_s+n_f)×(n_s+n_f)},
with n = n_s + n_f. Control and noise coefficients are given by
(5) N = \begin{pmatrix} N_{11} & N_{12} \\ ε^{-1/2} N_{21} & ε^{-1/2} N_{22} \end{pmatrix} ∈ R^{(n_s+n_f)×(n_s+n_f)}
and
(6) B = \begin{pmatrix} B_1 \\ ε^{-1/2} B_2 \end{pmatrix} ∈ R^{(n_s+n_f)×1}, \quad C = \begin{pmatrix} C_1 \\ ε^{-1/2} C_2 \end{pmatrix} ∈ R^{(n_s+n_f)×m},
where Nx + B ∈ range(C) for all x ∈ R^n; often we will consider either the case m = 1 with
C_i = √ρ B_i, ρ > 0, or m = n, with C being a multiple of the identity when ε = 1. All block
matrices A_{ij}, N_{ij}, B_i and C_j are assumed to be of order 1 and independent of ε.
The above ε-scaling of coefficients is natural for a system with n_s slow and n_f fast degrees
of freedom and arises, for example, as a result of a balancing transformation applied to a
large-scale system of equations; see e.g. [31,33]. A special case of (3) is the linear system
(7) dX^ε_s = (AX^ε_s + B u_s)\, ds + C\, dW_s.
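For concreteness, a small helper that assembles the ε-scaled coefficients (4)–(6) from the O(1) blocks might look as follows; this is a minimal sketch, and the numerical blocks in the example are purely illustrative.

```python
import numpy as np

# Sketch of a helper that assembles the epsilon-scaled coefficients (4)-(6)
# from O(1) blocks; block names mirror the paper, the example values do not.

def assemble_coefficients(eps, A11, A12, A21, A22, N11, N12, N21, N22, B1, B2, C1, C2):
    s = eps ** -0.5
    A = np.block([[A11,     s * A12],
                  [s * A21, (1.0 / eps) * A22]])
    N = np.block([[N11,     N12],
                  [s * N21, s * N22]])
    B = np.vstack([B1, s * B2])
    C = np.vstack([C1, s * C2])
    return A, N, B, C

# Example with n_s = n_f = 1 and a one-dimensional control/noise (m = 1):
blocks = dict(
    A11=np.array([[0.0]]), A12=np.array([[1.0]]),
    A21=np.array([[-1.0]]), A22=np.array([[-0.5]]),
    N11=np.zeros((1, 1)), N12=np.zeros((1, 1)),
    N21=np.zeros((1, 1)), N22=np.zeros((1, 1)),
    B1=np.zeros((1, 1)), B2=np.array([[1.0]]),
    C1=np.zeros((1, 1)), C2=np.array([[1.0]]),
)
A, N, B, C = assemble_coefficients(eps=0.01, **blocks)
```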
Our goal is to control the stochastic dynamics (3)—or (7) as a special variant—so that
a given cost criterion is optimised. Specifically, given two symmetric positive semidefinite
matrices Q_0, Q_1 ∈ R^{n_s×n_s}, we consider the quadratic cost functional
(8) J(u; t, x) = E\left[ \frac{1}{2}\int_t^τ \big( (X^ε_{1,s})^⊤ Q_0 X^ε_{1,s} + |u_s|^2 \big)\, ds + \frac{1}{2} (X^ε_{1,τ})^⊤ Q_1 X^ε_{1,τ} \right],
that we seek to minimize subject to the dynamics (3). Here the expectation is understood
over all realisations of (X^ε_s)_{s∈[t,τ]} starting at X^ε_t = x, and as a consequence J is a function of
the initial data (t, x). The stopping time is defined as the minimum of some time T < ∞ and
the first exit time of a domain D = D_s × R^{n_f} ⊂ R^{n_s} × R^{n_f}, where D_s is an open and bounded
set with smooth boundary. Specifically, we set τ = min{τ_D, T}, with
τ_D = inf{s ≥ t : X^ε_s ∉ D}.
In other words, τ is the stopping time that is defined by the event that either s = T or X^ε_s
leaves the set D = D_s × R^{n_f}, whichever comes first. Note that the cost function does not
explicitly depend on the fast variables x_2. We define the corresponding value function by
(9) V^ε(t, x) = \inf_{u ∈ U} J(u; t, x).
Remark. As a consequence of the boundedness of D_s ⊂ R^{n_s}, we may assume that all
coefficients in our control problem are bounded or Lipschitz continuous, which makes some of
the proofs in the paper more transparent.
We further note that all of the following considerations trivially carry over to the case
N = 0 and a multi-dimensional control variable, i.e., u ∈ R^k and B ∈ R^{n×k}.
2.1. From stochastic control to forward-backward stochastic differential equations.
We suppose that the matrix pair (A, C) satisfies the Kalman rank condition
(10) rank(C | AC | A^2C | ... | A^{n−1}C) = n.
A necessary—and in this case sufficient—condition for optimality of our optimal control prob-
lem is that the value function (9) solves a semilinear parabolic partial differential equation of
Hamilton-Jacobi-Bellman type (a.k.a. dynamic programming equation) [26]
(11) −\frac{∂V^ε}{∂t} = L^ε V^ε + f(x, V^ε, C^⊤∇V^ε), \quad V^ε|_{E^+} = q_1,
where
q_1(x) = \frac{1}{2} x_1^⊤ Q_1 x_1
and E^+ is the terminal set of the augmented process (s, X^ε_s), precisely E^+ = ([0, T) × ∂D) ∪
({T} × D). Here L^ε is the infinitesimal generator of the control-free process,
(12) L^ε = \frac{1}{2} CC^⊤ : ∇^2 + (Ax) · ∇,
and the nonlinearity f is independent of ε and given by
(13) f(x, y, z) = \frac{1}{2} x_1^⊤ Q_0 x_1 − \frac{1}{2} \big| (x^⊤N^⊤ + B^⊤)(C^⊤)^♯ z \big|^2.
Note that f is furthermore independent of y and that the Moore-Penrose pseudoinverse
(C^⊤)^♯ = C(C^⊤C)^{−1}
is unambiguously defined since z = C^⊤∇V^ε and (Nx + B) ∈ range(C), which by noting that
(C^⊤)^♯ C^⊤ is the orthogonal projection onto range(C) implies that
\big| (x^⊤N^⊤ + B^⊤) ∇V^ε \big|^2 = \big| (x^⊤N^⊤ + B^⊤)(C^⊤)^♯ z \big|^2.
The specific semilinear form of the equation is a consequence of the control problem being
linear-quadratic. As a consequence, the dynamic programming equation (11) admits a repre-
sentation in form of an uncoupled forward-backward stochastic differential equation (FBSDE).
To appreciate this point, consider the control-free process X^ε_s = X^{ε,u=0}_s with infinitesimal
generator L^ε and define an adapted process Y^ε_s = Y^{ε,x,t}_s by
(14) Y^ε_s = V^ε(s, X^ε_s).
(We abuse notation and denote both the controlled and the uncontrolled process by X^ε_s.)
Then, by definition, Y^ε_t = V^ε(x, t). Moreover, by Itô's formula and the dynamic programming
equation (11), the pair (X^ε_s, Y^ε_s)_{s∈[t,τ]} can be shown to solve the system of equations
(15) dX^ε_s = AX^ε_s\, ds + C\, dW_s, \quad X^ε_t = x
      dY^ε_s = −f(X^ε_s, Y^ε_s, Z^ε_s)\, ds + Z^ε_s\, dW_s, \quad Y^ε_τ = q_1(X^ε_τ),
with Z^ε_s = C^⊤∇V^ε(s, X^ε_s) being the control variable. Here, the second equation is only
meaningful if interpreted as a backward equation, since only in this case Z^ε_s is uniquely defined.
To see this, let f = 0 and q_1(x) = x and note that the ansatz (14) implies that Y^ε_s is adapted
to the filtration generated by the forward process X^ε_s. If the second equation were just a time-
reversed SDE, then (Y^ε_s, Z^ε_s) ≡ (X^ε_τ, 0) would be the unique solution to the SDE dY^ε_s = Z^ε_s dW_s
with terminal condition Y^ε_τ = X^ε_τ. But such a solution would not be adapted, because Y^ε_s for
s < τ would depend on the future value X^ε_τ of the forward process.
Remark. Equation (15) is called an uncoupled FBSDE because the forward equation for
X^ε_s is independent of Y^ε_s or Z^ε_s. The fact that the FBSDE is uncoupled furnishes a well-
known duality relation between the value function of an LQ optimal control problem and the
cumulant generating function of the cost [18,20]; specifically, in the case that N = 0, B = C
and the pair (A, B) being completely controllable, it holds that
(16) V^ε(x, t) = −\log E\left[ \exp\left( −\int_t^τ q_0(X^ε_s)\, ds − q_1(X^ε_τ) \right) \right],
with
q_0(x) = \frac{1}{2} x_1^⊤ Q_0 x_1.
Here the expectation on the right hand side is taken over all realisations of the control-free
process X^ε_s = X^{ε,u=0}_s, starting at X^ε_t = x. By the Feynman-Kac theorem, the function
ψ^ε = \exp(−V^ε) solves the linear parabolic boundary value problem
(17) \left( \frac{∂}{∂t} + L^ε \right) ψ^ε = q_0(x) ψ^ε, \quad ψ^ε|_{E^+} = \exp(−q_1),
which is equivalent to the corresponding dynamic programming equation (11).
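As an illustration of the duality (16), the following sketch estimates V^ε by plain Monte Carlo over the control-free linear SDE; the stopping time is again replaced by a fixed horizon, q_1 = 0, and the coefficients are illustrative assumptions rather than the example treated later in the paper.

```python
import numpy as np

# Sketch: Monte Carlo evaluation of the duality formula (16),
#   V(x, t) = -log E[ exp( -int_t^T q0(X_s) ds - q1(X_T) ) ],
# for the control-free linear SDE dX_s = A X_s ds + C dW_s, with q1 = 0 and a
# fixed horizon T in place of the bounded stopping time tau.

def value_by_duality(x0, A, C, Q0, T=1.0, dt=1e-3, M=5000, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    n, m = C.shape
    N = int(T / dt)
    x = np.tile(np.asarray(x0, float), (M, 1))       # M independent realisations
    work = np.zeros(M)                               # accumulated running cost
    for _ in range(N):
        x1 = x[:, : Q0.shape[0]]                     # slow components enter q0
        work += 0.5 * np.einsum('ij,jk,ik->i', x1, Q0, x1) * dt
        dW = np.sqrt(dt) * rng.standard_normal((M, m))
        x = x + x @ A.T * dt + dW @ C.T              # Euler-Maruyama step
    return -np.log(np.mean(np.exp(-work)))

# Example: a scalar slow/fast system with epsilon = 0.1 and B = C, N = 0,
# which is the regime in which (16) applies.
eps = 0.1
A = np.array([[0.0, eps**-0.5], [-eps**-0.5, -eps**-1]])
C = np.array([[0.0], [eps**-0.5]])
print(value_by_duality([1.0, 0.0], A, C, Q0=np.array([[1.0]])))
```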
3. Model reduction. The idea now is to exploit the fact that (15) is uncoupled, which
allows us to derive an FBSDE for the slow variables \bar{X}^ε_s = X^ε_{1,s} only, by standard singular
perturbation methods. The reduced FBSDE as ε → 0 will then be of the form
(18) d\bar{X}_s = \bar{A}\bar{X}_s\, ds + \bar{C}\, dW_s, \quad \bar{X}_t = x_1
      d\bar{Y}_s = −\bar{f}(\bar{X}_s, \bar{Y}_s, \bar{Z}_s)\, ds + \bar{Z}_s\, dW_s, \quad \bar{Y}_τ = \bar{q}_1(\bar{X}_τ),
where the limiting form of the backward SDE follows from the corresponding properties of
the forward SDE. Specifically, assuming that the solution of the associated SDE
(19) dξ_s = A_{22} ξ_s\, ds + C_2\, dW_s,
that is governing the fast dynamics as ε → 0, is ergodic with unique Gaussian invariant
measure π = N(0, Σ), where Σ = Σ^⊤ > 0 is the unique solution to the Lyapunov equation
(20) A_{22} Σ + Σ A_{22}^⊤ = −C_2 C_2^⊤,
we obtain that, asymptotically as ε → 0,
(21) X^ε_{2,s} ∼ ξ_{s/ε}, \quad s > 0.
As a consequence, the limiting SDE governing the evolution of the slow process X^ε_{1,s}—in
other words: the forward part of (18)—has the coefficients
(22) \bar{A} = A_{11} − A_{12} A_{22}^{−1} A_{21}, \quad \bar{C} = C_1 − A_{12} A_{22}^{−1} C_2,
as following from standard homogenisation arguments [53]; a formal derivation is given in the
appendix. By a similar reasoning we find that the driver of the limiting backward SDE reads
(23) \bar{f}(x_1, y, z_1) = \int_{R^{n_f}} f((x_1, x_2), y, (z_1, 0))\, π(dx_2),
specifically,
(24) \bar{f}(x_1, y, z_1) = \frac{1}{2} x_1^⊤ \bar{Q}_0 x_1 − \frac{1}{2} \big| (x_1^⊤ \bar{N}^⊤ + \bar{B}^⊤) z_1 \big|^2 + K_0,
with
(25) \bar{Q}_0 = Q_0, \quad \bar{N} = C_1^♯ N_{11}, \quad \bar{B} = C_1^♯ B_1 + N_{12} Σ^{1/2}.
The limiting backward SDE is equipped with a terminal condition \bar{q}_1 that equals q_1, namely,
(26) \bar{q}_1(x_1) = \frac{1}{2} x_1^⊤ Q_1 x_1.
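In practice the invariant covariance Σ in (20) and the homogenised coefficients (22) are straightforward to compute; a minimal sketch using SciPy's continuous Lyapunov solver, with illustrative blocks, is given below.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Sketch: compute Sigma from the Lyapunov equation (20) and the homogenised
# coefficients (22); the block matrices in the example are illustrative.

def reduced_coefficients(A11, A12, A21, A22, C1, C2):
    Sigma = solve_continuous_lyapunov(A22, -C2 @ C2.T)   # A22 S + S A22^T = -C2 C2^T
    A_bar = A11 - A12 @ np.linalg.solve(A22, A21)        # A11 - A12 A22^{-1} A21
    C_bar = C1 - A12 @ np.linalg.solve(A22, C2)          # C1  - A12 A22^{-1} C2
    return A_bar, C_bar, Sigma

# Example: scalar blocks (n_s = n_f = m = 1).
A_bar, C_bar, Sigma = reduced_coefficients(
    A11=np.array([[0.0]]), A12=np.array([[1.0]]),
    A21=np.array([[-1.0]]), A22=np.array([[-0.5]]),
    C1=np.array([[0.0]]),  C2=np.array([[1.0]]),
)
```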
Interpretation as an optimal control problem. It is possible to interpret the reduced
FBSDE again as the probabilistic version of a dynamic programming equation. To this end,
note that (10) implies that the matrix pair (\bar{A}, \bar{C}) satisfies the Kalman rank condition [1]
rank(\bar{C} | \bar{A}\bar{C} | \bar{A}^2\bar{C} | ... | \bar{A}^{n_s−1}\bar{C}) = n_s.
As a consequence, the semilinear partial differential equation
(27) −\frac{∂V}{∂t} = \bar{L} V + \bar{f}(x_1, V, \bar{C}^⊤∇V), \quad V|_{E^+_s} = \bar{q}_1,
with E^+_s = ([0, T) × ∂D_s) ∪ ({T} × D_s) and
(28) \bar{L} = \frac{1}{2} \bar{C}\bar{C}^⊤ : ∇^2 + (\bar{A}x_1) · ∇
has a classical solution V ∈ C^{1,2}([0, T) × D_s) ∩ C^{0,1}(E^+_s). Letting \bar{Y}_s := V(s, \bar{X}_s), 0 ≤ t ≤ s ≤ τ,
with initial data \bar{X}_t = x_1 and \bar{Z}_s = \bar{C}^⊤∇V(s, \bar{X}_s), the limiting FBSDE (18) can be readily
seen to be equivalent to (27). The latter is the dynamic programming equation of the following
LQ optimal control problem: minimize the cost functional
(29) \bar{J}(v; t, x_1) = E\left[ \frac{1}{2}\int_t^τ \big( \bar{X}_s^⊤ \bar{Q}_0 \bar{X}_s + |v_s|^2 \big)\, ds + \frac{1}{2} \bar{X}_τ^⊤ \bar{Q}_1 \bar{X}_τ \right],
subject to
(30) d\bar{X}_s = \bar{A}\bar{X}_s\, ds + (\bar{M}\bar{X}_s + \bar{D}) v_s\, ds + \bar{C}\, dw_s, \quad \bar{X}_t = x_1,
where (w_s)_{s≥0} denotes standard Brownian motion in R^{n_s} and we have introduced the new
control coefficients \bar{M} = \bar{C}\bar{N} and \bar{D} = \bar{C}\bar{B}.
3.1. Convergence of the control value. Before we state our main result and discuss
its implications for the model reduction of linear and bilinear systems, we recall the basic
assumptions that we impose on the system dynamics. Specifically, we say that the dynamics
(3) and the corresponding cost functional (8) satisfy Condition LQ if the following holds:
1. (A, C) is controllable, and the range of b(x) = Nx + B is a subspace of range(C).
2. The matrix A_{22} is Hurwitz (i.e., its spectrum lies entirely in the open left complex
half-plane) and the matrix pair (A_{22}, C_2) is controllable.
3. The driver of the FBSDE (15) is continuous and quadratically growing in Z.
4. The terminal condition in (15) is bounded; for simplicity we set Q_1 = 0 in (8).
Assumption 2 implies that the fast subsystem (19) has a unique Gaussian invariant mea-
sure π = N(0, Σ) with full topological support, i.e., we have Σ = Σ^⊤ > 0. According to [11,
Prop. 3.1] and [44], existence and uniqueness of (15) is guaranteed by Assumptions 3 and
4 and the controllability of (A, C) and the range condition, which imply that the transition
probability densities of the (controlled or uncontrolled) forward process X^ε_s are smooth and
strictly positive. As a consequence of the complete controllability of the original system, the
reduced system (30) is completely controllable too, which guarantees existence and uniqueness
of a classical solution of the limiting dynamic programming equation (27); see, e.g., [50].
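The controllability and Hurwitz requirements in items 1 and 2 of Condition LQ can be checked numerically. The following Python sketch builds the Kalman controllability matrix and inspects the spectrum of A_22; the matrices in the example are illustrative, and the range condition in item 1 as well as items 3 and 4 are not checked here.

```python
import numpy as np

# Sketch: numerical check of the controllability and Hurwitz parts of Condition LQ.
# ctrb_rank builds the Kalman controllability matrix (C | AC | ... | A^{n-1}C).

def ctrb_rank(A, C):
    n = A.shape[0]
    blocks = [C]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.linalg.matrix_rank(np.hstack(blocks))

def check_condition_lq(A, C, A22, C2):
    cond1 = ctrb_rank(A, C) == A.shape[0]                 # (A, C) controllable
    cond2 = (np.all(np.linalg.eigvals(A22).real < 0)      # A22 Hurwitz
             and ctrb_rank(A22, C2) == A22.shape[0])      # (A22, C2) controllable
    return cond1, cond2

# Example with scalar blocks:
eps = 0.1
A = np.array([[0.0, eps**-0.5], [-eps**-0.5, -eps**-1]])
C = np.array([[0.0], [eps**-0.5]])
print(check_condition_lq(A, C, A22=np.array([[-1.0]]), C2=np.array([[1.0]])))
```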
Uniform convergence of the value function V^ε → V is now entailed by the strong conver-
gence of the solution to the corresponding FBSDE, as is expressed by the following theorem.
Theorem 3.1. Let the assumptions of Condition LQ hold. Further let V^ε be the classical
solution of the dynamic programming equation (11) and V be the solution of (27). Then
V^ε → V,
uniformly on all compact subsets of [0, T] × D.
The proof of the Theorem is given in Appendix A.2. For the reader’s convenience, we present
a formal derivation of the limit equation in the next subsection.
3.2. Formal derivation of the limiting FBSDE. Our derivation of the limit FBSDE fol-
lows standard homogenisation arguments (see [29,41,53]), taking advantage of the fact that
the FBSDE is uncoupled. To this end we consider the following linear evolution equation
(31) \left( \frac{∂}{∂t} − L^ε \right) φ^ε = 0, \quad φ^ε(x_1, x_2, 0) = g(x_1)
for a function φ^ε : \bar{D}_s × R^{n_f} × [0, T] → R, where
(32) L^ε = \frac{1}{ε} L_0 + \frac{1}{\sqrt{ε}} L_1 + L_2,
with
(33a) L_0 = \frac{1}{2} C_2 C_2^⊤ : ∇^2_{x_2} + (A_{22} x_2) · ∇_{x_2}
(33b) L_1 = \frac{1}{2} C_1 C_2^⊤ : ∇^2_{x_2 x_1} + \frac{1}{2} C_2 C_1^⊤ : ∇^2_{x_1 x_2} + (A_{12} x_2) · ∇_{x_1} + (A_{21} x_1) · ∇_{x_2}
(33c) L_2 = \frac{1}{2} C_1 C_1^⊤ : ∇^2_{x_1} + (A_{11} x_1) · ∇_{x_1}
is the generator associated with the control-free forward process X^ε_s in (15). We follow the
standard procedure of [53] and consider the perturbative expansion
φ^ε = φ_0 + \sqrt{ε}\, φ_1 + ε\, φ_2 + ...
that we insert into the Kolmogorov equation (31). Equating different powers of ε we find a
hierarchy of equations, the first three of which read
(34) L_0 φ_0 = 0, \quad L_0 φ_1 = −L_1 φ_0, \quad L_0 φ_2 = \frac{∂φ_0}{∂t} − L_1 φ_1 − L_2 φ_0.
Assumption 2 of Condition LQ implies that L_0 has a one-dimensional nullspace that is spanned by
functions that are constant in x_2, and thus the first of the three equations implies that φ_0 is
independent of x_2. Hence the second equation—the cell problem—reads
(35) L_0 φ_1 = −(A_{12} x_2) · ∇φ_0(x_1, t).
The last equation has a solution by the Fredholm alternative, since the right hand side averages
to zero under the invariant measure π of the fast dynamics that is generated by the operator
L_0; in other words, the right hand side of the linear equation is orthogonal to the nullspace
of L_0^∗ spanned by the density of π.^1 The form of the equation suggests the general ansatz
φ_1 = ψ(x_2) · ∇φ_0(x_1, t) + R(x_1, t),
where the function R plays no role in what follows, so we set it equal to zero. Since L_0 ψ =
−(A_{12} x_2)^⊤, the function ψ must be of the form ψ = Qx_2 with a matrix Q ∈ R^{n_s×n_f}. Hence
Q = −A_{12} A_{22}^{−1}.
^1 Here L_0^∗ is the formal L^2 adjoint of the operator L_0, defined on a suitable dense subspace of L^2.
Now, solvability of the last of the three equations requires again that the right hand side
averages to zero under π, i.e.
(36) \int_{R^{n_f}} \left( \frac{∂φ}{∂t} + L_1 \big( A_{12} A_{22}^{−1} x_2 · ∇φ \big) − L_2 φ \right) π(dx_2) = 0,
which formally yields the limiting equation for φ = φ_0(x_1, t). Since π is a Gaussian measure
with mean 0 and covariance Σ given by (20), the integral (36) can be explicitly computed:
(37) \left( \frac{∂}{∂t} − \bar{L} \right) φ = 0, \quad φ(x_1, 0) = g(x_1),
where \bar{L} is given by (28) and the initial condition φ(·, 0) = g is a consequence of the fact
that the initial condition in (31) is independent of ε. By the controllability of the pair (\bar{A}, \bar{C}),
the limiting equation (37) has a unique classical solution and uniform convergence φ^ε → φ is
guaranteed by standard results, e.g., [53, Thm. 20.1].
Since the backward part of (15) is independent of ε, the final form of the homogenised
FBSDE (18) is found by averaging over x_2, with the unique solution of the corresponding
backward SDE satisfying Z_{2,s} = 0, as the averaged backward process is independent of x_2.
4. Numerical studies. In this section we present numerical results for linear and bilinear
control systems and discuss the numerical discretisation of uncoupled FBSDEs associated with
LQ stochastic control problems. We begin with the latter.
4.1. Numerical FBSDE discretisation. The fact that (15) or (18) are decoupled entails
that they can be discretised by an explicit time-stepping algorithm. Here we utilize a variant
of the least-squares Monte Carlo algorithm proposed in [9]; see also [30]. The convergence of
numerical schemes for FBSDE with quadratic nonlinearities in the driver has been analysed
in [60].
The least-squares Monte Carlo scheme is based on the Euler discretisation of (15):
(38) \hat{X}_{n+1} = \hat{X}_n + Δt\, A\hat{X}_n + \sqrt{Δt}\, C ξ_{n+1},
      \hat{Y}_{n+1} = \hat{Y}_n − Δt\, f(\hat{X}_n, \hat{Y}_n, \hat{Z}_n) + \sqrt{Δt}\, \hat{Z}_n · ξ_{n+1},
where (\hat{X}_n, \hat{Y}_n) denotes the numerical discretisation of the joint process (X^ε_s, Y^ε_s), where we
set X^ε_s = X^ε_{τ_D} for s ∈ (τ_D, T] when τ_D < T, and (ξ_k)_{k≥1} is an i.i.d. sequence of normalised
Gaussian random variables. Now let
F_n = σ(\hat{W}_k : 0 ≤ k ≤ n)
be the σ-algebra generated by the discrete Brownian motion \hat{W}_n := \sqrt{Δt} \sum_{i≤n} ξ_i. By defini-
tion the joint process (X^ε_s, Y^ε_s) is adapted to the filtration generated by (W_r)_{0≤r≤s}, therefore
(39) \hat{Y}_n = E[\hat{Y}_n | F_n] = E[\hat{Y}_{n+1} + Δt\, f(\hat{X}_n, \hat{Y}_n, \hat{Z}_n) | F_n],
where we have used that \hat{Z}_n is independent of ξ_{n+1}. In order to compute \hat{Y}_n from \hat{Y}_{n+1} we
use the identification of Z^ε_s with C^⊤∇V^ε(s, X^ε_s) and replace \hat{Z}_n in (39) by
(40) \hat{Z}_n = C^⊤∇V^ε(t_n, \hat{X}_n),
which, together with the parametric ansatz (42) for V^ε, makes the overall scheme explicit in \hat{X}_n and \hat{Y}_n.
Least-squares solution of the backward SDE. In order to evaluate the conditional ex-
pectation \hat{Y}_n = E[· | F_n] we recall that a conditional expectation can be characterised as the
solution to the following quadratic minimisation problem:
E[S | F_n] = \arg\min_{Y ∈ L^2,\, F_n\text{-measurable}} E[|Y − S|^2].
Given M independent realisations \hat{X}^{(i)}_n, i = 1, ..., M, of the forward process \hat{X}_n, this suggests
the approximation scheme
(41) \hat{Y}_n ≈ \arg\min_{Y = Y(\hat{X}_n)} \frac{1}{M} \sum_{i=1}^M \Big| Y − \hat{Y}^{(i)}_{n+1} − Δt\, f\big( \hat{X}^{(i)}_n, \hat{Y}^{(i)}_{n+1}, C^⊤\hat{Y}^{(i)}_{n+1} \big) \Big|^2,
where \hat{Y}^{(i)} is defined by \hat{Y}^{(i)} = Y(\hat{X}^{(i)}) with terminal values
\hat{Y}^{(i)}_N = q_1(\hat{X}^{(i)}_N), \quad τ = NΔt.
(Note that N = N_D is random.) For simplicity, we assume in what follows that the terminal
value is zero, i.e., we set q_1 = 0. (Recall that the existence and uniqueness result from [44]
requires q_1 to be bounded.) To represent \hat{Y}_n as a function Y(\hat{X}_n) we use the ansatz
(42) Y(\hat{X}_n) = \sum_{k=1}^K α_k(n) φ_k(\hat{X}_n),
with coefficients α_1(·), ..., α_K(·) ∈ R and suitable basis functions φ_1, ..., φ_K : R^n → R
(e.g. Gaussians). Note that the coefficients α_k are the unknowns in the least-squares problem
(41) and thus are independent of the realisation. Now the least-squares problem that has to
be solved in the n-th step of the backward iteration is of the form
(43) \hat{α}(n) = \arg\min_{α ∈ R^K} \| A_n α − b_n \|^2,
with coefficients
(44) A_n = \big( φ_k(\hat{X}^{(i)}_n) \big)_{i=1,...,M;\, k=1,...,K}
and data
(45) b_n = \Big( \hat{Y}^{(i)}_{n+1} + Δt\, f\big( \hat{X}^{(i)}_n, \hat{Y}^{(i)}_{n+1}, C^⊤\hat{Y}^{(i)}_{n+1} \big) \Big)_{i=1,...,M}.
Assuming that the coefficient matrix A_n ∈ R^{M×K}, K ≤ M, defined by (44) has maximum
rank K, the solution to the least-squares problem (43) is given by
(46) \hat{α}(n) = \big( A_n^⊤ A_n \big)^{−1} A_n^⊤ b_n.
The thus defined scheme is strongly convergent of order 1/2 as Δt → 0 and M, K → ∞,
as has been analysed in [9]. Controlling the approximation quality for finite values of Δt, M, K,
however, requires a careful adjustment of the simulation parameters and appropriate basis
functions, especially with regard to the condition number of the matrix A_n.
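For illustration, the following Python sketch implements one variant of the backward iteration (38), (41)–(46). It is not the authors' code: the basis is a fixed set of Gaussians rather than the moving basis of Section 4.2, exit times and terminal costs are ignored (τ = T, q_1 = 0), and Ẑ_n is approximated by C^⊤ times the gradient of the value ansatz fitted in the previous backward step, which is one standard choice.

```python
import numpy as np

# Sketch of the least-squares Monte Carlo backward iteration (38), (41)-(46) for an
# uncoupled FBSDE with linear forward dynamics dX = AX ds + C dW. Simplifications:
# tau = T, q1 = 0, fixed Gaussian basis, Z_n taken as C^T grad of the ansatz fitted
# at the previous backward step. All coefficients are illustrative.

def lsmc_fbsde(A, C, f, x0, T=0.5, dt=1e-3, M=400, K=9, delta=0.1, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    n, m = C.shape
    Nsteps = int(T / dt)

    # Forward pass: M Euler-Maruyama realisations of the forward SDE, cf. (38).
    X = np.zeros((Nsteps + 1, M, n))
    X[0] = x0
    for k in range(Nsteps):
        dW = np.sqrt(dt) * rng.standard_normal((M, m))
        X[k + 1] = X[k] + X[k] @ A.T * dt + dW @ C.T

    centres = X[-1, rng.choice(M, size=K, replace=False)]   # Gaussian basis centres

    def basis(x):            # values phi_k(x_i): shape (M, K), cf. (42), (44)
        diff = x[:, None, :] - centres[None, :, :]
        return np.exp(-(diff ** 2).sum(-1) / (2.0 * delta))

    def basis_grad(x):       # gradients d phi_k / dx: shape (M, K, n)
        diff = x[:, None, :] - centres[None, :, :]
        return -basis(x)[:, :, None] * diff / delta

    # Backward pass: regression-based evaluation of the conditional expectation (39).
    Y = np.zeros(M)                         # terminal condition q1 = 0
    alpha = np.zeros(K)
    for k in range(Nsteps - 1, -1, -1):
        gradV = np.einsum('k,ikn->in', alpha, basis_grad(X[k]))  # grad of fitted ansatz
        Z = gradV @ C                                            # Z ~ C^T grad V, cf. (40)
        b = Y + dt * f(X[k], Y, Z)                               # regression target, cf. (45)
        Phi = basis(X[k])
        alpha, *_ = np.linalg.lstsq(Phi, b, rcond=None)          # solves (43)-(46)
        Y = Phi @ alpha                                          # Y_n = Y(X_n), cf. (42)
    return Y.mean()                                              # approximates V(0, x0)

# Driver (13) for a scalar system with N = 0, B = C, so that f = q0 - |z|^2 / 2.
f = lambda x, y, z: 0.5 * x[:, 0] ** 2 - 0.5 * np.sum(z ** 2, axis=-1)
A = np.array([[-1.0]]); Cmat = np.array([[1.0]])
print(lsmc_fbsde(A, Cmat, f, x0=np.array([1.0])))
```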
4.2. Numerical example. Illustrating our theoretical findings of Theorem 3.1, we consider
a linear system of the form (7) where the matrices A, B and C are given by
A = \begin{pmatrix} 0 & ε^{−1/2} I_{n×n} \\ −ε^{−1/2} I_{n×n} & −γ ε^{−1} I_{n×n} \end{pmatrix} ∈ R^{2n×2n},
and
B = C = \begin{pmatrix} 0 \\ σ ε^{−1/2} I_{n×n} \end{pmatrix} ∈ R^{2n×n}.
This is an instance of a controlled Langevin equation with friction and noise coefficients
γ, σ > 0, which are assumed to fulfill the fluctuation-dissipation relation
2γ = σ^2.
In the example we let γ = 1/2 and σ = 1. The quadratic cost functional (8) is determined by
the running cost via Q_0 = I_{n×n} ∈ R^{n×n}, and we apply no terminal cost, i.e. Q_1 = 0.
The associated effective equations are given by (29)–(30), where
\bar{A} = −γ^{−1} I_{n×n}, \quad \bar{D} = \bar{C} = σγ^{−1} I_{n×n}, \quad \bar{M} = 0, \quad \bar{Q}_0 = I_{n×n}, \quad \bar{Q}_1 = 0 ∈ R^{n×n}.
We apply the previously described FBSDE scheme (38), (42), (43)–(46), which was shown to
yield good results in [40], to both the full and the reduced system, and we choose n = 3, i.e.
the full system is six-dimensional. To this end we choose the basis functions
φ^{μ_k,δ}_{k,n}(x) = \exp\left( −\frac{(μ_k − x)^2}{2δ} \right),
where δ = 0.1 is fixed but μ_k = μ_k(n) changes in each timestep such that the basis follows the
forward process. For this, we simulate K additional forward trajectories X^{(k)}, k = 1, ..., K,
and set μ_k(n) = X^{(k)}_n.
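A minimal sketch of how such a moving basis might be generated is given below; the centres μ_k(n) are read off K pilot trajectories of the forward SDE, and the coefficients are illustrative rather than the exact implementation used here.

```python
import numpy as np

# Sketch of the moving Gaussian basis: K pilot trajectories of the forward SDE
# provide time-dependent centres mu_k(n) = X^(k)_n.

def pilot_centres(A, C, x0, dt, n_steps, K, rng):
    """Simulate K extra forward trajectories whose states serve as basis centres."""
    n, m = C.shape
    mu = np.zeros((n_steps + 1, K, n))
    mu[0] = x0
    for k in range(n_steps):
        dW = np.sqrt(dt) * rng.standard_normal((K, m))
        mu[k + 1] = mu[k] + mu[k] @ A.T * dt + dW @ C.T
    return mu

def gaussian_basis(x, centres, delta=0.1):
    """Evaluate phi_k(x) = exp(-|mu_k - x|^2 / (2 delta)) for a batch x of shape (M, n)."""
    d2 = ((x[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * delta))
```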
We choose the parameters for the numerics as follows. The number of basis functions K
is given by K = 9 for the reduced system and K_ε = 40 for the full system. We choose these
values because the maximally observed rank of the matrices A_n defined in (44) is 9 for the
reduced system, and we want these matrices to have rank K. For the full system we could
have used a greater value of K, but we want to keep the computational effort reasonable.
Further, we choose Δt = 5 · 10^{−5}, the final time T = 0.5 and the number of realisations
M = 400.
We let the whole algorithm run five times and compute the distance between the value func-
tions of the full and reduced systems,
E(ε) := |V^ε(0, x) − V(0, x)|,
for which convergence of order 1/2 was found in the proof of Theorem 3.1. Indeed, this is the
order of convergence which we also observe in the numerics of our example, as can be seen in
Figure 1, where we depict the mean and standard deviation of E(ε).
Figure 1. Plot of the mean of E(ε) ± its standard deviation (σ(E(ε))), and for comparison √ε against ε,
on a doubly logarithmic scale: we observe convergence of order 1/2 as predicted by the theory.
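For completeness, here is a sketch of how a convergence study of this kind might be organised; solve_full and solve_reduced are hypothetical stand-ins for the LSMC solver applied to the full (ε-dependent) and reduced systems, not functions provided by the paper.

```python
import numpy as np

# Sketch of a convergence study E(eps) vs. eps, cf. Figure 1. The two solver
# callables are hypothetical placeholders for the LSMC scheme of Section 4.1.

def convergence_study(solve_full, solve_reduced, x0, eps_values, n_runs=5):
    V_bar = solve_reduced(x0)                        # reduced value, independent of eps
    errors = np.zeros((len(eps_values), n_runs))
    for i, eps in enumerate(eps_values):
        for r in range(n_runs):
            errors[i, r] = abs(solve_full(x0, eps) - V_bar)   # E(eps)
    # Fit the observed order of convergence from mean errors on a log-log scale.
    slope = np.polyfit(np.log(eps_values), np.log(errors.mean(axis=1)), 1)[0]
    return errors, slope   # a slope close to 1/2 would match Theorem 3.1
```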
4.3. Discussion. We shall now discuss the implications of the above simple example when
it comes to more complicated dynamical systems. As a general remark, the results show that it
is possible to apply model reduction before solving the corresponding optimal control prob-
lem, where the control variable in the original equation can simply be treated as a parameter.
This is in accordance with the general model reduction strategy in control engineering; see
e.g. [3,10] and the references therein. Our results not only guarantee convergence of the value
function via convergence of Y^ε, but they also imply strong convergence of the optimal control,
by the convergence of the control process Z^ε in L^2. (See the appendix for details.) This means
that in the case of a system with time scale separation, our result is highly valuable since we
can resort to the reduced system for finding the optimal control, which can then be applied to
the full system dynamics.
We stress that our results carry over to fully nonlinear stochastic control problems which
have a similar LQ structure [32]. Clearly, for realistic (i.e. high-dimensional or nonlinear)
systems the identification of a small parameter ε remains challenging, and one has to resort
to, e.g., semi-empirical approaches, such as [28,48].
If the dynamics is linear, as is the case here, small parameters may be identified using
system theoretic arguments based on balancing transformations (see, e.g., [31,33]). These
approaches require that the dynamics is either linear or bilinear in the state variables, but
the aforementioned duality for the quasi-linear dynamic programming equation can be used
here as well in order to change the drift of the forward SDE from some nonlinear vector field,
say, b to a linear vector field b_0 = Ax. Assuming that the noise coefficient C is square and
invertible and ignoring ε and the boundary condition for the moment, it is easy to see that
the dynamic programming PDE (11) can be recast as
−\frac{∂V^ε}{∂t} = \tilde{L} V^ε + \tilde{f}(x, V^ε, C^⊤∇_x V^ε).
Here
\tilde{L} = \frac{1}{2} CC^⊤ : ∇^2 + b(x) · ∇
is the generator of a forward SDE with nonlinear drift b, and
\tilde{f}(x, y, z) = f(x, y, z) + C^{−1}(Ax − b(x)) · z
is the driver of the corresponding backward SDE. Even though the change of drift is somewhat
arbitrary, it shows that by changing the driver in the backward SDE it is possible to reduce
the control problem to one with linear drift that falls within the category that is considered
in this paper, at the expense of having a possibly non-quadratic cost functional.
Remark. Changing the drift may be advantageous in connection with the numerical FB-
SDE solver. In the martingale basis approach of Bender and Steiner [9], the authors have
suggested to use basis functions that are defined as conditional expectations of certain linearly
independent candidate functions over the forward process, which makes the basis functions
martingales. Computing the martingale basis, however, comes with a large computational
overhead, which is why the authors consider only cases in which the conditional expectations
can be computed analytically. Changing the drift of the forward SDE may thus be used to
simplify the forward dynamics so that its distribution becomes analytically tractable.
5. Conclusions and outlook. We have given a proof of concept that model reduction
methods for singularly perturbed bilinear control systems can be applied to the dynamics
before solving the corresponding optimal control problem. The key idea is to connect the
HJB equation corresponding to our stochastic optimal control problem, which is a semi-linear
PDE, to a singularly perturbed forward-backward SDE that is uncoupled; we exploit this to
derive a reduced FBSDE as the perturbation parameter ε goes to zero. The reduced FBSDE can
then be interpreted as a reduced stochastic control problem, and we have proved uniform
convergence of the corresponding value function. As an auxiliary result, we obtain that the
optimal control converges as well in a strong sense, which implies that the optimal control
computed from the reduced system can be used to control the original dynamics.
We presented numerical results for a linear control system and discussed the numerical
discretisation of uncoupled FBSDEs, based on the computation of conditional expectations.
For the latter the choice of the basis functions plays an essential role, and how to cleverly
choose the ansatz functions, possibly exploiting the fact that the forward SDE has an explicit
solution (see e.g. [9]), is an important topic, especially for high dimensional problems. We leave
the question regarding the adaptive choice of ansatz functions to future work.
Another class of important problems, which we have not considered in this article, are slow-
fast systems with vanishing noise. The natural question here is how the limit equation depends
on the order in which the noise and time scale parameters go to zero. This question has important
consequences for the associated deterministic control problems and their regularisation by noise.
We leave this topic for future work too.
Appendix A. Proofs and technical lemmas.
The idea of the proof of Theorem 3.1 closely follows the work [16], with the main differences
being (a) that we consider slow-fast systems exhibiting three time scales, in particular the slow
equation contains singular O(ε^{−1/2}) terms, and (b) that the coefficients of the fast dynamics
are not periodic, with the fast process being asymptotically Gaussian as ε → 0; in particular
the n_f-dimensional fast process lives on the unbounded domain R^{n_f}.
A.1. Poisson equation lemma. Theorem 3.1 rests on the following lemma that is similar
to a result in [12].
Lemma A.1. Suppose that the assumptions of Condition LQ hold and define
h : [0, T] × R^{n_s} × R^{n_f} → R to be a function of class C^{1,2,2}_b. Further assume that h is
centered with respect to the invariant measure π of the fast process. Then for every t ∈ [0, T]
and initial conditions (X^ε_{1,u}, X^ε_{2,u}) = (x_1, x_2) ∈ R^{n_s} × R^{n_f}, 0 ≤ u < t, we have
(47) \lim_{ε→0} E\left[ \left( \int_u^v h(s, X^ε_{1,s}, X^ε_{2,s})\, ds \right)^2 \right] = 0, \quad 0 ≤ u < v ≤ t.
Proof. We remind the reader of the definition (33) of the differential operators L_0, L_1 and
L_2, and consider the Poisson equation
(48) L_0 ψ = −h
on the domain R^{n_f}. (The variables x_1 ∈ R^{n_s} and t ∈ [0, T] are considered as parameters.)
Since h is centered with respect to π, equation (48) has a solution by the Fredholm alternative.
By Assumption 2, L_0 is a hypoelliptic operator in x_2 and thus, by [51, Thm. 2], the Poisson
equation (48) has a unique solution that is smooth and bounded. Applying Itô's formula to
ψ and introducing the shorthand δψ(u, v) = ψ(v, X^ε_{1,v}, X^ε_{2,v}) − ψ(u, x_1, x_2) yields
(49) δψ(u, v) = \int_u^v (∂_t ψ + L_2 ψ)(s, X^ε_{1,s}, X^ε_{2,s})\, ds + \frac{1}{\sqrt{ε}} \int_u^v L_1 ψ(s, X^ε_{1,s}, X^ε_{2,s})\, ds
      + \frac{1}{ε} \int_u^v L_0 ψ(s, X^ε_{1,s}, X^ε_{2,s})\, ds + M_1(u, v) + \frac{1}{\sqrt{ε}} M_2(u, v),
where M_1 and M_2 are square integrable martingales with respect to the natural filtration
generated by the Brownian motion W_s; by Itô's formula they are given by the stochastic integrals
(50) M_1(u, v) = \int_u^v \big( ∇_{x_1} ψ(s, X^ε_{1,s}, X^ε_{2,s}) \big)^⊤ C_1\, dW_s, \quad M_2(u, v) = \int_u^v \big( ∇_{x_2} ψ(s, X^ε_{1,s}, X^ε_{2,s}) \big)^⊤ C_2\, dW_s.
By the properties of the solution to (48) the first three integrals on the right hand side
are uniformly bounded in u and v, and thus
\int_u^v h(s, X^ε_{1,s}, X^ε_{2,s})\, ds = −ε\, δψ(u, v) + ε \int_u^v (∂_t ψ + L_2 ψ)(s, X^ε_{1,s}, X^ε_{2,s})\, ds
      + \sqrt{ε} \int_u^v L_1 ψ(s, X^ε_{1,s}, X^ε_{2,s})\, ds + ε\, M_1(u, v) + \sqrt{ε}\, M_2(u, v).
By the Itô isometry and the boundedness of the derivatives ∇_{x_1}ψ and ∇_{x_2}ψ, the martingale
terms can be bounded by
E\big[ (M_i(u, v))^2 \big] ≤ C_i (v − u), \quad 0 < C_i < ∞.
Hence
E\left[ \left( \int_u^v h(s, X^ε_{1,s}, X^ε_{2,s})\, ds \right)^2 \right] ≤ C ε,
with a generic constant 0 < C < ∞ that is independent of u, v and ε.
A.2. Convergence of the value function.
Lemma A.2. Suppose that Condition LQ holds. Then
|V^ε(t, x) − V(t, x_1)| ≤ C \sqrt{ε},
with x = (x_1, x_2) ∈ D = D_s × R^{n_f}, where V^ε is the solution of the original dynamic program-
ming equation (11) and V is the solution of the limiting dynamic programming equation (27).
The constant C depends on x and t, but is finite on every compact subset of D × [0, T].
Proof. The idea of the proof is to apply Itô's formula to |y^ε_s|^2, where y^ε_s = Y^ε_s − V(s, X^ε_{1,s})
satisfies the backward SDE
(51) dy^ε_s = −G^ε(s, X^ε_{1,s}, X^ε_{2,s}, y^ε_s, z^ε_s)\, ds + z^ε_s · dW_s,
where
z^ε_s = Z^ε_s − \big( \bar{C}^⊤∇V(s, X^ε_{1,s}), 0 \big)^⊤ \qquad (∇V = ∇_{x_1} V)
and
G^ε(t, x_1, x_2, y, z) = G_1(t, x_1, x_2, y, z) + G^ε_2(t, x_1, x_2, y, z),
with
G_1 = f\big( t, x, y + V(t, x_1), z + (\bar{C}^⊤∇V(t, x_1), 0) \big) − \bar{f}\big( t, x_1, V(t, x_1), \bar{C}^⊤∇V(t, x_1) \big)
G^ε_2 = \big( (A_{11} − \bar{A}) x_1 + \frac{1}{\sqrt{ε}} A_{12} x_2 \big) · ∇V(t, x_1) + \frac{1}{2} \big( C_1 C_1^⊤ − \bar{C}\bar{C}^⊤ \big) : ∇^2 V(t, x_1).
We set X^ε_s = X^ε_{τ_D} for s ∈ (τ_D, T] when τ_D < T. Then, by construction, G_1(t, x, 0, 0),
x = (x_1, x_2) ∈ D_s × R^{n_f}, is centered with respect to π and bounded (since the running cost is
independent of x_2), therefore Lemma A.1 implies that
(52) \sup_{t∈[0,T]} E\left[ \left( \int_t^T G_1(s, X^ε_{1,s}, X^ε_{2,s}, 0, 0)\, ds \right)^2 \right] ≤ C_1 ε.
The second contribution to the driver can be recast as G^ε_2 = (L^ε − \bar{L})V, with L^ε and \bar{L} as
given by (12) and (28), and thus, as ε → 0,
(53) \sup_{t∈[0,T]} E\left[ \left( \int_t^T G^ε_2(s, X^ε_{1,s}, X^ε_{2,s}, 0, 0)\, ds \right)^2 \right] ≤ C_2 ε
by the functional central limit theorem for diffusions with Lipschitz coefficients [29]; cf. also
Sec. 3.2. As a consequence of (52) and (53), we have G^ε → 0 in L^2, which, since E[|y^ε_T|^2] ≤ C_3 ε,
implies strong convergence of the solution of the corresponding backward SDE in L^2.
Specifically, since ∇V is bounded on \bar{D}_s, Itô's formula applied to |y^ε_s|^2 yields, after an
application of Gronwall's lemma,
E\left[ \sup_{t≤s≤T} |y^ε_s|^2 + \int_t^T |z^ε_s|^2\, ds \right] ≤ ℓ_D\, E\left[ \left( \int_t^T G^ε(s, X^ε_{1,s}, X^ε_{2,s}, 0, 0)\, ds \right)^2 \right] + ℓ_D\, E\big[ |y^ε_T|^2 \big],
where the Lipschitz constant ℓ_D is independent of ε and finite for every compact subset
\bar{D}_s ⊂ R^{n_s} by the boundedness of ∇V (since V is a classical solution and D_s is bounded).
Hence E[|y^ε_s|^2] ≤ C_3 ε uniformly for s ∈ [t, T], and by setting s = t, we obtain
|y^ε_t| = |V^ε(t, x) − V(t, x_1)| ≤ C \sqrt{ε}
for a constant C ∈ (0, ∞).
This proves Theorem 3.1.
Acknowledgements. This research has been partially funded by Deutsche Forschungsge-
meinschaft (DFG) through the grant CRC 1114 ”Scaling Cascades in Complex Systems”,
Project A05 ”Probing scales in equilibrated systems by optimal nonequilibrium forcing”.
Omar Kebiri acknowledges funding from the EU-METALIC II Programme.
REFERENCES
[1] B. D. O. Anderson and Y. Liu. Controller reduction: concepts and approaches. IEEE Trans. Autom.
Control 34, 802–812 (1989).
[2] F. Antonelli, Backward-forward stochastic differential equations. Ann. Appl. Probab. 3 (1993), no. 3,
777-793.
[3] A.C. Antoulas, Approximation of large-scale dynamical systems, Advances in design and control, Society
for Industrial and Applied Mathematics (2005).
[4] E. Asplund and T. Klüner, Optimal control of open quantum systems applied to the photochemistry of
surfaces, Phys. Rev. Lett. 106, 140404 (2011).
[5] K. Bahlali , B. Gherbal , B. Mezerdi, Existence of optimal controls for systems driven by FBSDEs, Syst.
Control Letters 60, 344-349 (1995).
[6] K. Bahlali, O. Kebiri, N. Khelfallah and H. Moussaoui, One dimensional BSDEs with logarithmic growth:
application to PDEs. Stochastics, 1744-2516 (2017).
[7] K. Bahlali, O. Kebiri, A. Mtiraoui: Existence of an optimal Control for a system driven by a degenerate
coupled Forward-Backward Stochastic Differential Equations, C. R. Acad. Sci. Paris, Ser. I (2016).
[8] V. Bally Approximation scheme for solutions of BSDE. In: El Karoui, N., Mazliak, L. (eds.) Backward
Stochastic Differential Equations, Addison Wesley Longman (1997), 177-191.
[9] C. Bender, J. Steiner: Least-Squares Monte Carlo for BSDEs. In: Carmona et al. (Eds.), Numerical
Methods in Finance, Springer, (2012) 257-289.
[10] U. Baur, P. Benner, and L. Feng, Model order reduction for linear and nonlinear systems: A system-
theoretic perspective, Arch. Comput. Meth. Eng. 21, 331–358 (2014).
[11] A. Bensoussan, L. Boccardo, F. Murat, Homogenization of elliptic equations with principal part not in
divergence form and hamiltonian with quadratic growth, Commun. Pure Appl. Math. 39 (1986) 769-
805.
[12] A. Bensoussan, J. L. Lions, G. Papanicolaou, Asymptotic Analysis for Periodic Structures, North-
Holland, Amsterdam (1978).
[13] A. Bensoussan and G. Blankenship, Singular perturbations in stochastic control, in: Singular Perturba-
tions and Asymptotic Analysis in Control Systems (eds. P. V. Kokotovic, A. Bensoussan, and G. L.
Blankenship), vol. 90 of Lecture Notes in Control and Information Sciences, Springer Berlin Heidel-
berg, pp. 171–260 (1987).
[14] B. Bouchard, N. E. Karoui and N. Touzi, Maturity randomization for stochastic control problems, Ann.
Appl. Probab. (2005), Vol. 15, No. 4, 2575-2605
[15] B. Bouchard, R. Elie, N. Touzi, Discrete-time approximation of BSDEs and probabilistic schemes for
fully nonlinear PDEs, Advanced financial modelling, Radon Ser. Comput. Appl. Math., 8, Walter de
Gruyter, Berlin, (2009) 91-124.
[16] P. Briand, Y. Hu, Probabilistic approach to singular perturbations of semilinear and quasilinear parabolic
, Nonlinear Analysis 35, 815-831 (1999).
[17] R. Buckdahn and Y. Hu, Probabilistic approach to homogenizations of systems of quasilinear parabolic
PDEs with periodic structures, Nonlinear Analysis 32, 609 – 619 (1998).
[18] A. Budhiraja and P. Dupuis, A variational representation for positive functionals of infinite dimensional
Brownian motion, Probab. Math. Statist., 20 (2000), pp. 39-61.
[19] D. Chevance, Numerical methods for backward stochastic differential equations, Numerical methods in
finance, Publ. Newton Inst., Cambridge Univ. Press, Cambridge (1997) 232-244.
[20] P. Dai Pra., L. Meneghini, and W. J. Runggaldier, Connections between stochastic control and dynamic
games, Math. Control Signals Systems, 9 (1996), pp. 303-326.
[21] M. H. Davis and A. R. Norman, Portfolio selection with transaction costs, Math. Oper. Res., 15 (1990),
pp. 676-713
[22] D. Duffie and L. G. Epstein, Stochastic differential utility. Econometrica, 60(2), (1992)
[23] P. Dupuis, K. Spiliopoulos and H. Wang, Importance sampling for multiscale diffusions, Multiscale Model.
Simul. 10, 1–27, (2012).
[24] N. El Karoui, S. Peng, and M. C. Quenez. Backward stochastic differential equations in finance. Mathe-
matical Finance, 1, 1-71 (1997).
[25] L. C. Evans, The perturbed test function method for viscosity solutions of nonlinear PDE, P. Roy. Soc.
Edinb. A 111, 359–375 (1989).
[26] W. H. Fleming, Optimal investment models with minimum consumption criteria, Australian Economic
Papers 44, 307-321 (2005).
[27] W. H. Fleming and H. Mete Soner Controlled Markov processes and viscosity solutions. Applications of
mathematics. Springer, New York, 2nd edition, (2006).
[28] C. Franzke and A.J. Majda, and E. Vanden-Eijnden, Low-order stochastic mode reduction for a realistic
barotropic model climate. J. Atmos. Sci. 62, 1722–1745 (2005).
[29] M. Freidlin and A. Wentzell, Random Perturbations of Dynamical Systems, vol. 260 of Grundlehren der
mathematischen Wissenschaften, Springer Berlin Heidelberg, 2012.
[30] E. Gobet and P. Turkedjiev, Adaptive importance sampling in least-squares Monte Carlo algorithms for
backward stochastic differential equations, Stoch. Proc. Appl. 127, 1171-1203 (2005).
[31] C. Hartmann, Balanced model reduction of partially-observed Langevin equations: an averaging principle,
Math. Comput. Model. Dyn. Syst. 17, 463-490, (2011).
[32] C. Hartmann, J. Latorre, G. A. Pavliotis, and W. Zhang, Optimal control of multiscale systems using
reduced-order models, J. Computational Dynamics 1, 279-306 (2014)
[33] C. Hartmann, B. Schäfer-Bung and A. Zueva, Balanced averaging of bilinear systems with applications to
stochastic control, SIAM J. Control Optim. 51, 2356-2378 (2013).
[34] C. Hartmann and C. Schütte, Efficient rare event simulation by optimal nonequilibrium forcing, J. Stat.
Mech. Theor. Exp. 2012, P11004 (2012).
[35] C. Hartmann, C. Schütte, M. Weber, and W. Zhang, Importance sampling in path space for diffusion
processes with slow-fast variables, Probab. Theory Relat. Fields 170, 177–228 (2017).
[36] Y. Hu, P. Imkeller, and M. Müller, Utility maximization in incomplete markets, Ann. Appl. Probab. 15,
1691–1712 (2005).
[37] Y. Hu, S. Peng, A stability theorem of backward stochastic differential equations and its application, C.
R. Acad. Sci. Paris, Ser. I Math. 324, 1059–1064 (1997).
[38] C. B. Hyndman, P. O. Ngou, A Convolution Method for Numerical Solution of Backward Stochastic
Differential Equations Methodol. Comput. Appl. Probab. 19, 1–29 (2017).
[39] N. Ichihara, A stochastic representation for fully nonlinear PDEs and its application to homogenization,
J. Math. Sci. Univ. Tokyo 12, 467–492 (2005).
[40] O. Kebiri, L. Neureither, and C. Hartmann, Adaptive importance sampling with forward-backward stochas-
tic differential equations, Submitted (2018) .
[41] R. Khasminskii, Principle of averaging for parabolic and elliptic differential equations and for Markov
processes with small diffusion, Theory Probab. Appl. 8, 1-21 (1963).
[42] Y. Kabanov and S. Pergamenshchikov, Two-scale stochastic systems: asymptotic analysis and control,
Springer, Berlin, Heidelberg, Paris (2003).
[43] P. V. Kokotovic, Applications of singular perturbation techniques to control problems, SIAM Review 26,
501–550 (1984).
[44] M. Kobylanski, Backward stochastic differential equations and partial differential equations with quadratic
growth, Ann. Probab. 28, 558–602 (2000).
[45] H. J. Kushner, Weak Convergence Methods and Singularly Perturbed Stochastic Control and Filtering
Problems, Birkhäuser, Boston (1990).
[46] T. Kurtz and R. H. Stockbridge, Stationary solutions and forward equations for controlled and singular
martingale problems, Electron. J. Probab 6, 5 (2001).
[47] J. Ma, P. Protter, and J. Yong. Solving Forward-Backward Stochastic Differential Equations Explicitly-a
Four Step Scheme Probability Theory and Related Fields, 98, 339-359 (1994).
[48] S. Lall, J. Marsden, and S. Glavaški, A subspace approach to balanced truncation for model reduction of
nonlinear control systems, Int. J. Robust Nonlinear Control 12, 519–535 (2002).
[49] P. M. Pardalos and V. A. Yatsenko, Optimization and Control of Bilinear Systems: Theory, Algorithms,
and Applications, Springer US, 2010.
[50] E. Pardoux, S. Peng, Backward stochastic differential equations and quasilinear parabolic partial differen-
tial equations, in: B.L. Rozovskii, R.B. Sowers (Eds.), Stochastic Partial Differential Equations and
their Applications, Lecture Notes in Control and Information Sciences 176, Springer, Berlin, (1992).
[51] E. Pardoux and A. Yu. Veretennikov: On the poisson equation and diffusion approximation 3, The Annals
of Probability, Vol. 33, No. 3, 1111–1133 (2005).
[52] E. Pardoux and S. Peng. Adapted solution of a backward stochastic differential equation. System Control
Letters, 14(1):55-61, (1990).
[53] G. A. Pavliotis and A. M. Stuart, Multiscale Methods: Averaging and Homogenization, Springer, (2008).
[54] S. Peng, Backward Stochastic Differential Equations and Applications to Optimal Control, Appl. Math.
Optim. 27 (1993), pp.125-144.
[55] H. Pham, Continuous-time stochastic control and optimization with financial applications, Stochastic
modelling and applied probability, Springer, Berlin, Heidelberg, (2009).
[56] A. Steinbrecher, Optimal control of robot guided laser material treatment, in: Progress in Industrial
Mathematics at ECMI 2008 (eds. A. D. Fitt, J. Norbury, H. Ockendon, and E. Wilson),Springer
Berlin Heidelberg. pp., 501–511 (2010).
[57] Robert F. Stengel. Optimal control and estimation. Dover books on advanced mathematics. Dover Publi-
cations, New York, (1994).
[58] C. Schütte, S. Winkelmann, and C. Hartmann, Optimal control of molecular dynamics using Markov
state models, Math. Program. Ser. B 134, 259-282 (2012).
[59] N. Touzi Optimal stochastic control, stochastic target problem, and backward differential equation,
Springer-Verlag (2013).
[60] P. Turkedjiev: Numerical methods for backward stochastic differential equations of quadratic and locally
Lipschitz type, Dissertation, Humboldt-Universität zu Berlin, Mathematisch-Naturwissenschaftliche
Fakultät II (2013).
[61] P. Dupuis and H. Wang, Importance sampling, large deviations, and differential games, Stochastics and
Stochastic Reports 76, 481–508 (2004),
[62] W. Zhang, H. Wang, C. Hartmann, M. Weber and C. Schütte, Applications of the cross-entropy method
to importance sampling and optimal control of diffusions, SIAM J. Sci. Comput. 36, A2654-A2672
(2014).
[63] W. Zhen, Forward-backward stochastic differential equations, linear quadratic stochastic optimal control
and nonzero sum differential games, Journal of Systems Science and Complexity (2005)