Publications (28) · 20.36 Total Impact

Article: Value in mixed strategies for zero-sum stochastic differential games without Isaacs condition
ABSTRACT: In the present work, we consider 2-person zero-sum stochastic differential games with a nonlinear payoff functional which is defined through a backward stochastic differential equation. Our main objective is to study, for such a game, the problem of the existence of a value without the Isaacs condition. Not surprisingly, this requires a suitable concept of mixed strategies which, to the authors' best knowledge, was not known in the context of stochastic differential games. For this, we consider nonanticipative strategies with a delay defined through a partition $\pi$ of the time interval $[0,T]$. The underlying stochastic controls for both players are randomized along $\pi$ by a hazard which is independent of the governing Brownian motion, and, knowing the information available at the left time point $t_{j-1}$ of the subintervals generated by $\pi$, the controls of Players 1 and 2 are conditionally independent over $[t_{j-1},t_j)$. It is shown that the associated lower and upper value functions $W^{\pi}$ and $U^{\pi}$ converge uniformly on compacts to a function $V$, the so-called value in mixed strategies, as the mesh of $\pi$ tends to zero. This function $V$ is characterized as the unique viscosity solution of the associated Hamilton-Jacobi-Bellman-Isaacs equation.
The Annals of Probability 07/2014; 42(4). · 1.38 Impact Factor
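For context, the Isaacs condition that this work dispenses with can be sketched in generic notation (the Hamiltonian $H$ and control sets $U$, $V$ here are illustrative, not taken from the paper): it requires the lower and upper Hamiltonians to coincide.

```latex
% Isaacs (minimax) condition, in generic notation: without it, the
% pure-strategy lower and upper values may differ, which is what
% motivates the randomized (mixed) strategies described above.
\inf_{u \in U} \sup_{v \in V} H(t,x,p,u,v)
  \;=\; \sup_{v \in V} \inf_{u \in U} H(t,x,p,u,v),
\qquad (t,x,p) \in [0,T] \times \mathbb{R}^d \times \mathbb{R}^d .
```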
ABSTRACT: In this paper we consider a mean-field stochastic differential equation, also called a McKean-Vlasov equation, with initial data $(t,x)\in[0,T]\times R^d$, whose coefficients depend on both the solution $X^{t,x}_s$ and its law. By considering square integrable random variables $\xi$ as initial condition for this equation, we can easily show the flow property of the solution $X^{t,\xi}_s$ of this new equation. Associating it with a process $X^{t,x,P_\xi}_s$, which coincides with $X^{t,\xi}_s$ when one substitutes $\xi$ for $x$, but which has the advantage of depending only on the law $P_\xi$ of $\xi$, we characterise the function $V(t,x,P_\xi)=E[\Phi(X^{t,x,P_\xi}_T,P_{X^{t,\xi}_T})]$, under appropriate regularity conditions on the coefficients of the stochastic differential equation, as the unique classical solution of a nonlocal PDE of mean-field type, involving the first and second order derivatives of $V$ with respect to its space variable and the probability law. The proof relies heavily on a preliminary study of the first and second order derivatives of the solution of the mean-field stochastic differential equation with respect to the probability law, and on a corresponding It\^{o} formula. In our approach we use the notion of derivative with respect to a square integrable probability measure introduced in \cite{PL}, and we extend it in a direct way to second order derivatives.
07/2014
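As a rough sketch of the class of equations studied (generic notation assumed, not quoted from the paper; $b$ and $\sigma$ denote the coefficients and $B$ a Brownian motion), a mean-field (McKean-Vlasov) SDE with square integrable initial condition $\xi$ at time $t$ reads:

```latex
% Mean-field (McKean-Vlasov) SDE: the coefficients depend on the
% solution X and also on its law P_X.
dX^{t,\xi}_s = b\big(X^{t,\xi}_s,\, P_{X^{t,\xi}_s}\big)\,ds
             + \sigma\big(X^{t,\xi}_s,\, P_{X^{t,\xi}_s}\big)\,dB_s,
\qquad s \in [t,T], \quad X^{t,\xi}_t = \xi .
```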
ABSTRACT: The purpose of this paper is to study 2-person zero-sum stochastic differential games in which one player is a major one and the other player is a group of $N$ minor agents who play collectively, are statistically identical and have the same cost functional. The game is studied in a weak formulation; this means, in particular, that we can study it as a game of the type "feedback control against feedback control". The payoff/cost functional is defined through a controlled backward stochastic differential equation whose driving coefficient is assumed to satisfy strict concavity-convexity with respect to the control parameters. This ensures the existence of saddle point feedback controls for the game with $N$ minor agents. We study the limit behavior of these saddle point controls and of the associated Hamiltonian, and we characterize the limit of the saddle point controls as the unique saddle point control of the limit mean-field stochastic differential game.
SIAM Journal on Control and Optimization 08/2013; 52(1). · 1.38 Impact Factor
ABSTRACT: In this paper we study useful estimates, in particular $L^p$-estimates, for fully coupled forward-backward stochastic differential equations (FBSDEs) with jumps. These estimates are proved, on the one hand, for fully coupled FBSDEs with jumps under the monotonicity assumption on arbitrary time intervals and, on the other hand, for such equations on small time intervals. Moreover, the well-posedness of this kind of equation is studied and regularity results are obtained.
Stochastic Processes and their Applications 02/2013; 124(4). · 0.95 Impact Factor
ABSTRACT: This paper is concerned with stochastic differential games (SDGs) defined through fully coupled forward-backward stochastic differential equations (FBSDEs) which are governed by a Brownian motion and a Poisson random measure. For these SDGs, the upper and the lower value functions are defined through the controlled fully coupled FBSDEs with jumps. Using a new transformation introduced in [6], we prove that the upper and the lower value functions are deterministic. Then, after establishing the dynamic programming principle for the upper and the lower value functions of these SDGs, we prove that the upper and the lower value functions are the viscosity solutions of the associated upper and lower Hamilton-Jacobi-Bellman-Isaacs (HJBI) equations, respectively. Furthermore, for a special case (when $\sigma,\ h$ do not depend on $y,\ z,\ k$), under the Isaacs condition, we obtain the existence of the value of the game.
Applied Mathematics and Optimization 02/2013; · 0.86 Impact Factor
ABSTRACT: In this paper we study stochastic optimal control problems of fully coupled forward-backward stochastic differential equations (FBSDEs). The recursive cost functionals are defined by controlled fully coupled FBSDEs. We study two cases of diffusion coefficients $\sigma$ of the FSDEs. We use a new method to prove that the value functions are deterministic, satisfy the dynamic programming principle (DPP), and are viscosity solutions of the associated generalized Hamilton-Jacobi-Bellman (HJB) equations. The associated generalized HJB equations are related with algebraic equations when $\sigma$ depends on the second component of the solution $(Y, Z)$ of the BSDE and does not depend on the control. For this we adopt Peng's BSDE method, and in particular the notion of stochastic backward semigroup from [16]. We emphasize that the fact that $\sigma$ also depends on $Z$ makes the stochastic control problem much more complicated and has as a consequence that the associated HJB equation is combined with an algebraic equation, which is inspired by Wu and Yu [19]. We use the continuation method combined with a fixed point theorem to prove that the algebraic equation has a unique solution, and moreover, we also give the representation of this solution. On the other hand, we prove some new basic estimates for fully coupled FBSDEs under the monotonicity assumptions. In particular, we prove, under Lipschitz and linear growth conditions, that fully coupled FBSDEs have a unique solution on a small time interval if the Lipschitz constant of $\sigma$ with respect to $z$ is sufficiently small. We also establish a generalized comparison theorem for such fully coupled FBSDEs.
SIAM Journal on Control and Optimization 02/2013; · 1.38 Impact Factor
ABSTRACT: Mathematical mean-field approaches have been used in many fields, not only in Physics and Chemistry, but recently also in Finance, Economics, and Game Theory. In this paper we study a new special mean-field problem by a purely probabilistic method, in order to characterize its limit, which is the solution of mean-field backward stochastic differential equations (BSDEs) with reflections. On the other hand, we prove that this type of reflected mean-field BSDE can also be obtained as the limit equation of mean-field BSDEs by a penalization method. Finally, we give a probabilistic interpretation of nonlinear and nonlocal partial differential equations with obstacles through the solutions of reflected mean-field BSDEs.
Journal of Mathematical Analysis and Applications 10/2012; · 1.05 Impact Factor
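As a hedged sketch of the reflected equations involved (generic notation, assumed rather than taken from the paper; the driver $f$ may depend on the law of the solution and $S$ denotes the obstacle), a reflected BSDE of mean-field type takes the form:

```latex
% Reflected BSDE of mean-field type (sketch): Y is kept above the
% obstacle S by the increasing process K, which acts only when Y
% touches S (Skorokhod minimality condition).
Y_t = \xi + \int_t^T f\big(s, Y_s, Z_s, P_{Y_s}\big)\,ds
      + K_T - K_t - \int_t^T Z_s\,dB_s,
\qquad Y_t \ge S_t, \quad \int_0^T (Y_s - S_s)\,dK_s = 0 .
```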
ABSTRACT: In this paper we study stochastic optimal control problems of general fully coupled forward-backward stochastic differential equations (FBSDEs). In Li and Wei [8] the authors studied two cases of diffusion coefficients $\sigma$ of the FSDEs: in one case $\sigma$ depends on the control and does not depend on the second component of the solution $(Y, Z)$ of the BSDE, and in the other case $\sigma$ depends on $Z$ and does not depend on the control. Here we study the general case, in which $\sigma$ depends on both $Z$ and the control at the same time. The recursive cost functionals are defined by controlled general fully coupled FBSDEs; the value functions are then given by taking the essential supremum of the cost functionals over all admissible controls. We give the formulation of the related generalized Hamilton-Jacobi-Bellman (HJB) equations, and prove that the value function is their viscosity solution.
06/2012
ABSTRACT: In this paper we study the optimal stochastic control problem for stochastic differential systems reflected in a domain. The cost functional is a recursive one, defined via generalized backward stochastic differential equations developed by Pardoux and Zhang [20]. The value function is shown to be the unique viscosity solution of the associated Hamilton-Jacobi-Bellman equation, which is a fully nonlinear parabolic partial differential equation with a nonlinear Neumann boundary condition. For this, we also prove some new estimates for stochastic differential systems reflected in a domain.
02/2012
ABSTRACT: In the present paper we investigate the problem of the existence of a value for differential games without the Isaacs condition. For this we introduce a suitable concept of mixed strategies along a partition of the time interval, which are associated with classical nonanticipative strategies (with delay). Imposing on the underlying controls of both players a conditional independence property, we obtain the existence of the value in mixed strategies as the limit of the lower as well as of the upper value functions along a sequence of partitions whose mesh tends to zero. Moreover, we characterize this value in mixed strategies as the unique viscosity solution of the corresponding Hamilton-Jacobi-Isaacs equation.
International Journal of Game Theory 02/2012; · 0.58 Impact Factor
ABSTRACT: In this work we investigate regularity properties of a large class of Hamilton-Jacobi-Bellman (HJB) equations with or without obstacles, which can be stochastically interpreted in the form of a stochastic control system whose nonlinear cost functional is defined with the help of a backward stochastic differential equation (BSDE) or a reflected BSDE (RBSDE). More precisely, we prove that, firstly, the unique viscosity solution $V(t,x)$ of such an HJB equation over the time interval $[0,T]$, with or without an obstacle, and with terminal condition at time $T$, is jointly Lipschitz in $(t,x)$, for $t$ running over any compact subinterval of $[0,T)$. Secondly, for the case that $V$ solves an HJB equation without an obstacle or with an upper obstacle, it is shown under appropriate assumptions that $V(t,x)$ is jointly semiconcave in $(t,x)$. These results extend earlier ones by Buckdahn, Cannarsa and Quincampoix [1]. Our approach embeds their idea of time change into a BSDE analysis. We also provide an elementary counterexample which shows that, in general, for the case that $V$ solves an HJB equation with a lower obstacle, the semiconcavity does not hold true.
02/2012
Article: Regularity properties for general HJB equations: a backward stochastic differential equation method
ABSTRACT: We investigate regularity properties of a large class of Hamilton-Jacobi-Bellman (HJB) equations with or without obstacles, which can be stochastically interpreted in the form of a stochastic control system whose nonlinear cost functional is defined with the help of a Backward Stochastic Differential Equation (BSDE) or a reflected BSDE. More precisely, we prove that, first, the unique viscosity solution V(t,x) of an HJB equation over the time interval [0,T], with or without an obstacle, and with terminal condition at time T, is jointly Lipschitz in (t,x) for t running over any compact subinterval of [0,T). Second, for the case that V solves an HJB equation without an obstacle or with an upper obstacle, it is shown under appropriate assumptions that V(t,x) is jointly semiconcave in (t,x). These results extend earlier ones by R. Buckdahn, P. Cannarsa, M. Quincampoix ["Lipschitz continuity and semiconcavity properties of the value function of a stochastic control problem", NoDEA, Nonlinear Differ. Equ. Appl. 17, No. 6, 715-728 (2010; Zbl 1205.93162)]. Our approach embeds their idea of time change into a BSDE analysis. We also provide an elementary counterexample which shows that, in general, for the case that V solves an HJB equation with a lower obstacle, the semiconcavity does not hold true.
SIAM Journal on Control and Optimization 01/2012; 50(3). · 1.38 Impact Factor
ABSTRACT: In Buckdahn, Djehiche, Li, and Peng (2009), the authors obtained mean-field Backward Stochastic Differential Equations (BSDEs) in a natural way as the limit of a high-dimensional system of forward and backward SDEs, corresponding to a great number of "particles" (or "agents"). The objective of the present paper is to deepen the investigation of such mean-field BSDEs by studying their stochastic maximum principle (SMP). The SMP for mean-field controls differs from the classical one. We deduce an SMP in integral form, and we also obtain, under additional assumptions, necessary as well as sufficient conditions for the optimality of a control. As an application, a linear quadratic stochastic control problem of mean-field type is studied.
Automatica 01/2012; 48:366-373. · 2.92 Impact Factor
ABSTRACT: We study the optimal control of stochastic differential equations (SDEs) of mean-field type, in which the coefficients depend on the state of the solution process as well as on its expected value. Moreover, the cost functional is also of mean-field type. This makes the control problem time inconsistent in the sense that the Bellman optimality principle does not hold. For a general action space a Peng-type stochastic maximum principle (Peng, S.: SIAM J. Control Optim. 28(4), 966-979, 1990) is derived, specifying the necessary conditions for optimality. This maximum principle differs from the classical one in the sense that here the first order adjoint equation turns out to be a linear mean-field backward SDE, while the second order adjoint equation remains the same as in Peng's stochastic maximum principle.
Keywords: Stochastic control; Maximum principle; Mean-field SDE; McKean-Vlasov equation; Time inconsistent control
Applied Mathematics and Optimization 01/2011; 64(2):197-216. · 0.86 Impact Factor
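A minimal sketch of the controlled mean-field dynamics described above (generic notation, assumed rather than quoted from the paper): both the state and its expectation enter the coefficients and the cost, e.g.

```latex
% Controlled SDE of mean-field type: drift, diffusion and cost all
% depend on the state X, its mean E[X], and the control u; the
% dependence on E[X] is what makes the problem time inconsistent.
dX_s = b\big(s, X_s, E[X_s], u_s\big)\,ds
     + \sigma\big(s, X_s, E[X_s], u_s\big)\,dB_s,
\qquad
J(u) = E\Big[\int_0^T f\big(s, X_s, E[X_s], u_s\big)\,ds
             + h\big(X_T, E[X_T]\big)\Big].
```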
Article: Stochastic representation for solutions of Isaacs’ type integral–partial differential equations
ABSTRACT: In this paper we study the integral-partial differential equations of Isaacs' type by zero-sum two-player stochastic differential games (SDGs) with jump-diffusion. The results of Fleming and Souganidis (1989) [9] and those of Biswas (2009) [3] are extended: we investigate a controlled stochastic system with a Brownian motion and a Poisson random measure, and with nonlinear cost functionals defined by controlled backward stochastic differential equations (BSDEs). Furthermore, unlike the two papers cited above, the admissible control processes of the two players are allowed to rely on all events from the past. This quite natural generalization permits the players to take such earlier information into account, and it makes it more convenient to obtain the dynamic programming principle (DPP). However, the cost functionals are then no longer deterministic, and hence the upper and the lower value functions also become a priori random fields. We use a new method to prove that, indeed, the upper and the lower value functions are deterministic. On the other hand, thanks to BSDE methods (Peng, 1997) [18] we can directly prove a DPP for the upper and the lower value functions, and also that both these functions are the unique viscosity solutions of the upper and the lower integral-partial differential equations of Hamilton-Jacobi-Bellman-Isaacs' type, respectively. Moreover, the existence of the value of the game is obtained in this more general setting under Isaacs' condition.
Stochastic Processes and their Applications 01/2011; 121(12):2715-2750. · 0.95 Impact Factor
ABSTRACT: In this paper we consider the existence of solutions of one-dimensional mean-field backward stochastic differential equations (MFBSDEs) whose coefficients are continuous, nondecreasing in y, and of linear growth. We obtain the existence of the minimal solution and a comparison theorem for this kind of MFBSDE.
01/2011
ABSTRACT: In this paper we study zero-sum two-player stochastic differential games with jumps with the help of the theory of Backward Stochastic Differential Equations (BSDEs). We generalize the results of Fleming and Souganidis [10] and those of Biswas [3] by considering a controlled stochastic system driven by a d-dimensional Brownian motion and a Poisson random measure, and by associating nonlinear cost functionals defined by controlled BSDEs. Moreover, unlike both papers cited above, we allow the admissible control processes of both players to depend on all events occurring before the beginning of the game. This quite natural extension allows the players to take such earlier events into account, and it makes it even easier to derive the dynamic programming principle. The price to pay is that the cost functionals become random variables, and so the upper and the lower value functions of the game are also a priori random fields. The use of a new method allows us to prove that, in fact, the upper and the lower value functions are deterministic. On the other hand, the application of BSDE methods [18] allows us to prove a dynamic programming principle for the upper and the lower value functions in a very straightforward way, as well as the fact that they are the unique viscosity solutions of the upper and the lower integral-partial differential equations of Hamilton-Jacobi-Bellman-Isaacs' type, respectively. Finally, the existence of the value of the game is obtained in this more general setting if Isaacs' condition holds.
04/2010
ABSTRACT: The paper presents a valuation model for futures options traded at exchanges with initial margin requirements and a daily price limit, and this result gives academic guidance for designing trading rules at exchanges. Unlike the leading work of Black, certain trading rules are taken into account so as to better fit practical futures markets. The paper prices futures options with initial margin requirements and a daily price limit by replicating them with the help of the theory of backward stochastic differential equations (BSDEs, for short). Furthermore, an explicit expression for the price of the call (or the put) futures option is given, and this price is also shown to be the unique solution of the associated nonlinear partial differential equation.
Acta Mathematica Sinica 01/2010; 26(3):579-586. · 0.48 Impact Factor
ABSTRACT: In this paper we study stochastic optimal control problems with jumps with the help of the theory of Backward Stochastic Differential Equations (BSDEs) with jumps. We generalize the results of Peng [S. Peng, BSDE and stochastic optimizations, in: J. Yan, S. Peng, S. Fang, L. Wu, Topics in Stochastic Analysis, Science Press, Beijing, 1997 (Chapter 2) (in Chinese)] by considering cost functionals defined by controlled BSDEs with jumps. The application of BSDE methods, in particular the use of the notion of stochastic backward semigroup introduced by Peng in the above-mentioned work, allows a straightforward proof of a dynamic programming principle for value functions associated with stochastic optimal control problems with jumps. We prove that the value functions are the viscosity solutions of the associated generalized Hamilton-Jacobi-Bellman equations with integro-differential operators. For this proof, we adapt Peng's BSDE approach, given in the above-mentioned reference and developed in the framework of stochastic control problems driven by Brownian motion, to that of stochastic control problems driven by Brownian motion and Poisson random measure.
Nonlinear Analysis 01/2009; · 1.64 Impact Factor
ABSTRACT: In this paper we investigate zero-sum two-player stochastic differential games whose cost functionals are given by doubly controlled reflected backward stochastic differential equations (RBSDEs) with two barriers. For admissible controls which can depend on the whole past, and so include, in particular, information occurring before the beginning of the game, the games are interpreted as games of the type "admissible strategy against admissible control", and the associated lower and upper value functions are studied. A priori random, they are shown to be deterministic, and it is proved that they are the unique viscosity solutions of the associated upper and lower Bellman-Isaacs equations with two barriers, respectively. For the proofs we make full use of the penalization method for RBSDEs with one barrier and for RBSDEs with two barriers. To this end we also prove new estimates for RBSDEs with two barriers, which are sharper than those in [18]. Furthermore, we show that the viscosity solution of the Isaacs equation with two reflecting barriers can be approximated not only by the viscosity solutions of penalized Isaacs equations with one barrier, but also directly by the viscosity solutions of penalized Isaacs equations without barrier.
Nonlinear Differential Equations and Applications NoDEA 05/2008; · 0.67 Impact Factor
Publication Stats
160 Citations
20.36 Total Impact Points
Institutions

2007–2013

Shandong University
 • School of Mathematics and Statistics
 • Department of Pure Mathematics
Jinan, Shandong Sheng, China


2006–2009

Fudan University
 • Institute of Mathematics
Shanghai, Shanghai Shi, China
