On a Class of Stochastic Optimal Control Problems Related to BSDEs with Quadratic Growth.

Université de Rennes 1, Roazhon, Brittany, France
SIAM Journal on Control and Optimization (Impact Factor: 1.39). 01/2006; 45(4):1279-1296. DOI: 10.1137/050633548
Source: DBLP

ABSTRACT In this paper, we study a class of stochastic optimal control problems in which the drift term of the state equation has linear growth in the control variable, the cost functional has quadratic growth, and the control process takes values in a closed set (not necessarily compact). This problem is related to backward stochastic differential equations (BSDEs) with quadratic growth and unbounded terminal value. We prove that an optimal feedback control exists, and that the optimal cost is given by the initial value of the solution of the related BSDE.
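To fix ideas, the setup described in the abstract can be sketched in generic notation; the symbols below are illustrative and not taken from the paper itself:

```latex
% Controlled state equation: the drift is linear in the control u_t,
% which takes values in a closed (possibly non-compact) set K.
dX_t = b(X_t)\,dt + \sigma(X_t)\bigl[r(X_t,u_t)\,dt + dW_t\bigr],
\qquad X_0 = x,

% Cost functional with quadratic growth in the control:
J(x,u) = \mathbb{E}\!\left[\int_0^T g(X_t,u_t)\,dt + \phi(X_T)\right],
\qquad g(x,u) \ge c\,|u|^2 \ \text{for } |u| \text{ large}.

% Associated BSDE: minimizing over u in the Hamiltonian makes the
% generator f grow quadratically in Z, and \phi(X_T) may be unbounded.
-dY_t = f(X_t,Z_t)\,dt - Z_t\,dW_t, \qquad Y_T = \phi(X_T),
\qquad f(x,z) = \inf_{u \in K}\bigl\{\, g(x,u) + z\,r(x,u) \,\bigr\}.
```

Under this kind of structure, the value of the problem is $Y_0$, the initial value of the BSDE solution, which is the shape of result the abstract announces.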

  • ABSTRACT: We consider a backward stochastic differential equation in a Markovian framework for the pair of processes $(Y,Z)$, whose generator has quadratic growth with respect to $Z$. Under non-degeneracy assumptions, we prove an analogue of the well-known Bismut-Elworthy formula for generators with quadratic growth with respect to $Z$. We give applications to the solution of a semilinear Kolmogorov equation for the unknown $v$, with a nonlinear term of quadratic growth with respect to $\nabla v$ and a final condition that is only bounded and continuous, as well as applications to stochastic optimal control problems with quadratic growth.
    Stochastic Processes and their Applications 04/2014; DOI:10.1016/ · 1.05 Impact Factor
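For context, the classical Bismut-Elworthy formula, stated here for the generator-free case $u(t,x)=\mathbb{E}[g(X_T^{t,x})]$ which the paper above extends to quadratic generators (notation assumed, not taken from the paper):

```latex
% Classical Bismut-Elworthy formula: the spatial gradient of
% u(t,x) = E[g(X_T^{t,x})] is expressed without differentiating g.
\nabla_x u(t,x)\,h
  = \frac{1}{T-t}\,
    \mathbb{E}\!\left[ g\bigl(X_T^{t,x}\bigr)
      \int_t^T \bigl\langle \sigma^{-1}(X_s^{t,x})\,
        \nabla_x X_s^{t,x}\,h,\; dW_s \bigr\rangle \right],

% where \nabla_x X^{t,x} is the first-variation process of the SDE and
% \sigma is assumed invertible (the non-degeneracy assumption).
```

The point of such a formula is that it yields gradient estimates for $u$ from bounds on $g$ alone, which is what makes merely bounded and continuous final conditions tractable.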
  • ABSTRACT: We study Hamilton-Jacobi-Bellman equations in an infinite-dimensional Hilbert space, with Lipschitz coefficients, where the Hamiltonian has superquadratic growth with respect to the derivative of the value function and the final condition is not bounded. This allows us to study stochastic optimal control problems for suitable controlled state equations with unbounded control processes. The results are applied to a controlled wave equation.
    Journal of Differential Equations 12/2013; 257(6). DOI:10.1016/j.jde.2014.05.026 · 1.57 Impact Factor
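The equation studied in the entry above has, in generic and purely illustrative notation, the following shape:

```latex
% Generic HJB equation on a Hilbert space H (illustrative notation):
\partial_t v(t,x) + \mathcal{L}\,v(t,x) + F\bigl(x, \nabla v(t,x)\bigr) = 0,
\qquad v(T,x) = \phi(x), \quad x \in H,

% where \mathcal{L} is the generator of the uncontrolled state equation,
% the Hamiltonian F(x,p) may grow like |p|^q with q > 2 (superquadratic),
% and the final datum \phi is not assumed bounded.
```

Superquadratic growth of $F$ corresponds, via the usual Legendre-type duality, to control problems whose admissible control processes are unbounded.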
  • ABSTRACT: The aim of the present paper is to study an infinite-horizon optimal control problem in which the controlled state dynamics are governed by a stochastic delay evolution equation in Hilbert spaces. The existence and uniqueness of the optimal control are obtained by means of associated infinite-horizon backward stochastic differential equations, without assuming Gâteaux differentiability of the drift and diffusion coefficients. An optimal control problem for stochastic delay partial differential equations is also given as an example to illustrate our results.
    Abstract and Applied Analysis 02/2013; 2013. DOI:10.1155/2013/791786 · 1.27 Impact Factor