Article
On a Class of Stochastic Optimal Control Problems Related to BSDEs with Quadratic Growth
Université de Rennes 1, Roazhon, Brittany, France
SIAM Journal on Control and Optimization (Impact Factor: 1.46). 01/2006; 45(4):1279–1296. DOI: 10.1137/050633548. Source: DBLP
ABSTRACT
In this paper, we study a class of stochastic optimal control problems in which the drift term of the state equation has linear growth in the control variable, the cost functional has quadratic growth, and the control process takes values in a closed set (not necessarily compact). This problem is related to backward stochastic differential equations (BSDEs) with quadratic growth and unbounded terminal value. We prove that an optimal feedback control exists, and that the optimal cost is given by the initial value of the solution of the related BSDE.
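The objects described in the abstract can be sketched schematically as follows; the notation ($b$, $\sigma$, $h$, $g$, $\phi$, $f$) is assumed here for illustration and is not necessarily the paper's own.

```latex
% Controlled state equation: the drift has linear growth in the control u.
\[
  dX_t = b(t, X_t)\,dt + \sigma(t, X_t)\big[h(t, X_t, u_t)\,dt + dW_t\big],
  \qquad |h(t,x,u)| \le C\,(1 + |u|).
\]
% Cost functional of quadratic growth in the control:
\[
  J(u) = \mathbb{E}\Big[\int_0^T g(t, X_t, u_t)\,dt + \phi(X_T)\Big],
  \qquad g(t,x,u) \ge c\,|u|^2 - C.
\]
% The value is characterized via a BSDE whose generator f is quadratic in z:
\[
  Y_t = \phi(X_T) + \int_t^T f(s, X_s, Z_s)\,ds - \int_t^T Z_s\,dW_s,
  \qquad |f(s,x,z)| \le C\,(1 + |z|^2),
\]
% and the optimal cost equals the initial value Y_0 of the solution.
```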

 "Proof. The proof follows from the fundamental relation stated in Theorem 6.4; the closed-loop equation can be solved as in [10], Proposition 5.2."
ABSTRACT: We consider a backward stochastic differential equation in a Markovian framework for the pair of processes $(Y,Z)$, with a generator of quadratic growth with respect to $Z$. Under nondegeneracy assumptions, we prove an analogue of the well-known Bismut–Elworthy formula when the generator has quadratic growth with respect to $Z$. Applications are given to the solution of a semilinear Kolmogorov equation for the unknown $v$ with a nonlinear term of quadratic growth with respect to $\nabla v$ and a final condition that is only bounded and continuous, as well as to stochastic optimal control problems with quadratic growth.
Stochastic Processes and their Applications 04/2014; 125(5). DOI: 10.1016/j.spa.2014.12.003 · 1.06 Impact Factor
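For orientation, the classical Bismut–Elworthy(–Li) formula, of which the cited paper proves a BSDE analogue, reads schematically as follows (notation assumed; $\nabla_x X^x$ denotes the first variation process, and $\sigma$ is assumed nondegenerate):

```latex
\[
  \nabla_x\,\mathbb{E}\big[\phi(X_T^x)\big]
  = \mathbb{E}\Big[\phi(X_T^x)\,\frac{1}{T}\int_0^T
      \big(\sigma^{-1}(X_s^x)\,\nabla_x X_s^x\big)^{\!*}\,dW_s\Big].
\]
```

Notably, no derivative of $\phi$ appears on the right-hand side, which is why such formulas allow final conditions that are merely bounded and continuous.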
 "From [7] "
ABSTRACT: We study Hamilton–Jacobi–Bellman equations in an infinite-dimensional Hilbert space, with Lipschitz coefficients, where the Hamiltonian has superquadratic growth with respect to the derivative of the value function and the final condition is not bounded. This allows us to study stochastic optimal control problems for suitable controlled state equations with unbounded control processes. The results are applied to a controlled wave equation.
Journal of Differential Equations 12/2013; 257(6). DOI: 10.1016/j.jde.2014.05.026 · 1.68 Impact Factor
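Schematically, the HJB equation described in this abstract has the following form (symbols $\mathcal{L}$, $\psi$, $\phi$ are assumed here for illustration, not taken from the paper):

```latex
% Semilinear HJB equation for the value function v:
\[
  \partial_t v(t,x) + \mathcal{L}\,v(t,x)
    + \psi\big(x,\ \nabla v(t,x)\,\sigma(x)\big) = 0,
  \qquad v(T,x) = \phi(x),
\]
% where L is the generator of the uncontrolled dynamics, the Hamiltonian psi
% has superquadratic growth, psi(x,z) ~ |z|^p with p > 2, and the final
% condition phi may be unbounded.
```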
 "… would be needed (since we haven't assumed any Lipschitz or growth conditions on the SDE (13) explicitly) to get part (ii) of the Theorem. In most cases, the value function will be a viscosity solution to the PDE by a "formal" appeal to a version of the DPP, if available. (b) Under Lipschitz conditions on µ and σ and a boundedness assumption on σ, Fuhrman et al. (2006) showed that the value function is given by the maximal solution of the BSDE (17), using some localization arguments. They also provide the LQR example as a special case and consider more general applications where the control set is constrained to take values in a closed subset of R. The uniqueness of the solutions of the BSDEs (and henc…"
ABSTRACT: This paper considers the problem of uniqueness of the solutions to a class of Markovian backward stochastic differential equations (BSDEs) which are also connected to a certain nonlinear partial differential equation (PDE) through a probabilistic representation. Assuming that there is a solution to the BSDE or to the corresponding PDE, we use the probabilistic interpretation to show uniqueness of the solutions, and we provide an example of a stochastic control application.
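The probabilistic representation referred to here is, schematically, the nonlinear Feynman–Kac identification (notation assumed for illustration):

```latex
% Link between the Markovian BSDE solution (Y, Z) and the PDE solution v:
\[
  Y_t = v(t, X_t), \qquad Z_t = \nabla_x v(t, X_t)\,\sigma(t, X_t),
\]
% where v solves the semilinear PDE
\[
  \partial_t v + \mathcal{L}\,v
    + f\big(t, x, v, \nabla_x v\,\sigma\big) = 0,
  \qquad v(T,\cdot) = \phi.
\]
```

Through this identification, uniqueness for either the BSDE or the PDE transfers to the other, which is the mechanism the abstract exploits.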