On a Class of Stochastic Optimal Control Problems Related to BSDEs with Quadratic Growth

Université de Rennes 1, Roazhon, Brittany, France
SIAM Journal on Control and Optimization (Impact Factor: 1.46). 01/2006; 45(4):1279-1296. DOI: 10.1137/050633548
Source: DBLP


In this paper, we study a class of stochastic optimal control problems in which the drift term of the state equation grows linearly in the control variable, the cost functional has quadratic growth, and the control process takes values in a closed set (not necessarily compact). This problem is related to a class of backward stochastic differential equations (BSDEs) with quadratic growth and unbounded terminal value. We prove that an optimal feedback control exists and that the optimal cost is given by the initial value of the solution of the related BSDE.
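Schematically, the setting described in the abstract couples a controlled state equation whose drift is linear in the control with a quadratic cost, and characterizes the value through a quadratic BSDE. The notation below (drift $b$, diffusion $\sigma$, control set $K$, generator $f$, etc.) is assumed for illustration and not taken verbatim from the paper:

```latex
% Controlled state equation: drift grows linearly in the control u_t,
% which takes values in a closed (not necessarily compact) set K
dX_t = \bigl[b(X_t) + \sigma(X_t)\,r(X_t,u_t)\bigr]\,dt + \sigma(X_t)\,dW_t,
\qquad X_0 = x, \qquad u_t \in K.

% Cost functional with quadratic growth in the control
J(x,u) = \mathbb{E}\Bigl[\int_0^T g(X_t,u_t)\,dt + \phi(X_T)\Bigr].

% Associated BSDE: generator quadratic in z, terminal value possibly unbounded
-\,dY_t = f(X_t,Z_t)\,dt - Z_t\,dW_t, \qquad Y_T = \phi(X_T).

% Conclusion of the paper: the optimal cost is the initial value of Y
\inf_{u} J(x,u) = Y_0.
```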

    • "Proof. The proof follows from the fundamental relation stated in theorem 6.4; the closed loop equation can be solved as in [10], proposition 5.2. "
    ABSTRACT: We consider a backward stochastic differential equation in a Markovian framework for the pair of processes $(Y,Z)$, with a generator of quadratic growth with respect to $Z$. Under non-degeneracy assumptions, we prove an analogue of the well-known Bismut-Elworthy formula when the generator has quadratic growth with respect to $Z$. Applications are given to the solution of a semilinear Kolmogorov equation for the unknown $v$ with nonlinear term of quadratic growth with respect to $\nabla v$ and final condition only bounded and continuous, as well as to stochastic optimal control problems with quadratic growth.
    Preview · Article · Apr 2014 · Stochastic Processes and their Applications
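The Markovian structure in the abstract above can be sketched as follows (the symbols $\psi$, $\mathcal{L}$, and $\phi$ are placeholders chosen here for illustration):

```latex
% Markovian BSDE with generator quadratic in Z
-\,dY_t = \psi(X_t, Z_t)\,dt - Z_t\,dW_t, \qquad Y_T = \phi(X_T),
\qquad |\psi(x,z)| \le C\bigl(1 + |z|^2\bigr).

% Nonlinear Feynman-Kac identification with the semilinear
% Kolmogorov equation: Y_t = v(t, X_t), Z_t = \sigma^{*}\nabla v(t, X_t),
\partial_t v + \mathcal{L} v + \psi\bigl(x, \sigma^{*}\nabla v\bigr) = 0,
\qquad v(T,\cdot) = \phi,
% with nonlinear term of quadratic growth in \nabla v and \phi only
% bounded and continuous; a Bismut--Elworthy-type formula then gives a
% representation of \nabla v that does not differentiate \phi.
```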
    • "From [7] "
    ABSTRACT: We study Hamilton-Jacobi-Bellman equations in an infinite-dimensional Hilbert space, with Lipschitz coefficients, where the Hamiltonian has superquadratic growth with respect to the derivative of the value function, and the final condition is not bounded. This allows us to study stochastic optimal control problems for suitable controlled state equations with unbounded control processes. The results are applied to a controlled wave equation.
    Preview · Article · Dec 2013 · Journal of Differential Equations
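Again schematically, and with notation assumed here rather than quoted from the article, the Hilbert-space HJB equation studied there has the form:

```latex
% HJB equation in a Hilbert space, with unbounded final condition \phi
\partial_t v + \tfrac{1}{2}\,\mathrm{Tr}\bigl(\sigma\sigma^{*}\,\nabla^2 v\bigr)
  + \langle Ax, \nabla v\rangle + H\bigl(x, \nabla v\bigr) = 0,
\qquad v(T,\cdot) = \phi,

% where the Hamiltonian has superquadratic growth in the gradient:
|H(x,p)| \le C\bigl(1 + |p|^{q}\bigr), \qquad q > 2.
```

It is this superquadratic growth of $H$ that corresponds to unbounded control processes in the associated control problem.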
    • " would be needed (since we haven't assumed any Lipschitz or growth conditions on the SDE (13) explicitly) to get part (ii) of the Theorem. In most cases, the value function will be a viscosity solution to the PDE by a "formal" appeal to a version of the DPP, if available. (b) Under the Lipschitz conditions on µ and σ and a boundedness assumption on σ, Fuhrman et al. (2006) showed that the value function is given by the maximal solution of the BSDE (17), using some localization arguments. They also provide the LQR example as a special case and consider more general applications where the control set is constrained to take values from a closed set of R. The uniqueness of the solutions of the BSDEs (and henc…"
    ABSTRACT: This paper considers the problem of uniqueness of the solutions to a class of Markovian backward stochastic differential equations (BSDEs) which are also connected to a certain nonlinear partial differential equation (PDE) through a probabilistic representation. Assuming that there is a solution to the BSDE or to the corresponding PDE, we use the probabilistic interpretation to show the uniqueness of the solutions, and provide an example of a stochastic control application.
    Full-text · Article · Oct 2012
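The probabilistic representation underlying the uniqueness argument in this last abstract can be sketched as follows (notation assumed for illustration):

```latex
% If v solves the associated nonlinear PDE
\partial_t v + \mathcal{L} v + f\bigl(x, v, \sigma^{*}\nabla v\bigr) = 0,
\qquad v(T,\cdot) = \phi,
% then the pair (Y, Z) defined along the diffusion X by
Y_t = v(t, X_t), \qquad Z_t = \sigma^{*}(X_t)\,\nabla v(t, X_t),
% solves the Markovian BSDE with terminal value \phi(X_T); uniqueness
% follows by showing that any solution of the BSDE coincides with this pair.
```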