Juan Li

Shandong University, Jinan, Shandong, China

Publications (26) · 5.52 Total Impact

  • ABSTRACT: In the present work, we consider 2-person zero-sum stochastic differential games with a nonlinear payoff functional which is defined through a backward stochastic differential equation. Our main objective is to study, for such a game, the problem of the existence of a value without the Isaacs condition. Not surprisingly, this requires a suitable concept of mixed strategies which, to the authors' best knowledge, was not known in the context of stochastic differential games. For this, we consider nonanticipative strategies with a delay defined through a partition $\pi$ of the time interval $[0,T]$. The underlying stochastic controls for both players are randomized along $\pi$ by a random device independent of the governing Brownian motion, and, knowing the information available at the left endpoint $t_{j-1}$ of the subintervals generated by $\pi$, the controls of Players 1 and 2 are conditionally independent over $[t_{j-1},t_j)$. It is shown that the associated lower and upper value functions $W^{\pi}$ and $U^{\pi}$ converge uniformly on compacts to a function $V$, the so-called value in mixed strategies, as the mesh of $\pi$ tends to zero. This function $V$ is characterized as the unique viscosity solution of the associated Hamilton-Jacobi-Bellman-Isaacs equation.
    07/2014;
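The BSDE construction of the payoff functional described in the abstract above can be sketched in generic notation; the coefficients $f$, $\Phi$, the Hamiltonian $H$, and the controlled state $X$ below are schematic placeholders, not the paper's exact data:

```latex
% Payoff functional defined through a backward SDE (schematic): given
% controls u, v of Players 1 and 2, the pair (Y, Z) solves
Y_t = \Phi\big(X_T^{u,v}\big)
      + \int_t^T f\big(s, X_s^{u,v}, Y_s, Z_s, u_s, v_s\big)\,ds
      - \int_t^T Z_s\,dB_s, \qquad t \in [0,T],
% and the payoff is J(t,x;u,v) := Y_t.
% The Isaacs condition, which is NOT assumed in this work, would require
\inf_{u}\sup_{v} H(x, p, u, v) \;=\; \sup_{v}\inf_{u} H(x, p, u, v).
```

When the Isaacs condition fails, the infimum and supremum above need not commute, which is precisely why the randomization of controls in mixed strategies is needed to obtain a value.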
  • ABSTRACT: In this paper we consider a mean-field stochastic differential equation, also called a McKean-Vlasov equation, with initial data $(t,x)\in[0,T]\times R^d$, whose coefficients depend not only on the solution $X^{t,x}_s$ but also on its law. By considering square integrable random variables $\xi$ as initial condition for this equation, we can easily show the flow property of the solution $X^{t,\xi}_s$ of this new equation. Associating it with a process $X^{t,x,P_\xi}_s$ which coincides with $X^{t,\xi}_s$ when one substitutes $\xi$ for $x$, but which has the advantage of depending only on the law $P_\xi$ of $\xi$, we characterize the function $V(t,x,P_\xi)=E[\Phi(X^{t,x,P_\xi}_T,P_{X^{t,\xi}_T})]$, under appropriate regularity conditions on the coefficients of the stochastic differential equation, as the unique classical solution of a nonlocal PDE of mean-field type, involving the first and second order derivatives of $V$ with respect to its space variable and the probability law. The proof relies heavily on a preliminary study of the first and second order derivatives of the solution of the mean-field stochastic differential equation with respect to the probability law and on a corresponding It\^{o} formula. In our approach we use the notion of derivative with respect to a square integrable probability measure introduced in \cite{PL}, and we extend it in a direct way to second order derivatives.
    07/2014;
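In schematic notation, the mean-field equation and the function $V$ discussed above take the following form; the coefficients $b$, $\sigma$ and the terminal function $\Phi$ are placeholders standing in for those of the paper:

```latex
% McKean-Vlasov SDE with square integrable random initial condition \xi
% (schematic): the coefficients depend on the solution AND on its law.
X^{t,\xi}_s = \xi + \int_t^s b\big(X^{t,\xi}_r, P_{X^{t,\xi}_r}\big)\,dr
            + \int_t^s \sigma\big(X^{t,\xi}_r, P_{X^{t,\xi}_r}\big)\,dB_r,
\qquad s \in [t,T].
% The function characterized as the classical solution of the nonlocal
% PDE of mean-field type:
V(t, x, P_\xi) = E\Big[\Phi\big(X^{t,x,P_\xi}_T, P_{X^{t,\xi}_T}\big)\Big].
```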
  • ABSTRACT: The purpose of this paper is to study 2-person zero-sum stochastic differential games in which one player is a major player and the other is a group of $N$ minor agents who play collectively, are statistically identical, and have the same cost functional. The game is studied in a weak formulation; this means in particular that we can study it as a game of the type "feedback control against feedback control". The payoff/cost functional is defined through a controlled backward stochastic differential equation whose driving coefficient is assumed to satisfy strict concavity-convexity with respect to the control parameters. This ensures the existence of saddle point feedback controls for the game with $N$ minor agents. We study the limit behavior of these saddle point controls and of the associated Hamiltonian, and we characterize the limit of the saddle point controls as the unique saddle point control of the limit mean-field stochastic differential game.
    08/2013;
  • Juan Li, Qingmeng Wei
    ABSTRACT: In this paper we study useful estimates, in particular $L^p$-estimates, for fully coupled forward-backward stochastic differential equations (FBSDEs) with jumps. These estimates are proved, on the one hand, for fully coupled FBSDEs with jumps under the monotonicity assumption for arbitrary time intervals and, on the other hand, for such equations on small time intervals. Moreover, the well-posedness of this kind of equation is studied and regularity results are obtained.
    Stochastic Processes and their Applications. 02/2013; 124(4).
  • Juan Li, Qingmeng Wei
    ABSTRACT: This paper is concerned with stochastic differential games (SDGs) defined through fully coupled forward-backward stochastic differential equations (FBSDEs) which are governed by a Brownian motion and a Poisson random measure. For these SDGs, the upper and the lower value functions are defined by the controlled fully coupled FBSDEs with jumps. Using a new transformation introduced in [6], we prove that the upper and the lower value functions are deterministic. Then, after establishing the dynamic programming principle for the upper and the lower value functions of these SDGs, we prove that they are the viscosity solutions of the associated upper and lower Hamilton-Jacobi-Bellman-Isaacs (HJBI) equations, respectively. Furthermore, for a special case (when $\sigma,\ h$ do not depend on $y,\ z,\ k$), under the Isaacs condition, we obtain the existence of the value of the game.
    02/2013;
  • Juan Li, Qingmeng Wei
    ABSTRACT: In this paper we study stochastic optimal control problems for fully coupled forward-backward stochastic differential equations (FBSDEs). The recursive cost functionals are defined by controlled fully coupled FBSDEs. We study two cases of diffusion coefficients $\sigma$ of the FSDEs. We use a new method to prove that the value functions are deterministic, satisfy the dynamic programming principle (DPP), and are viscosity solutions of the associated generalized Hamilton-Jacobi-Bellman (HJB) equations. The associated generalized HJB equations are related to algebraic equations when $\sigma$ depends on the second component of the solution $(Y, Z)$ of the BSDE and does not depend on the control. For this we adopt Peng's BSDE method, and in particular the notion of stochastic backward semigroup from [16]. We emphasize that the fact that $\sigma$ also depends on $Z$ makes the stochastic control problem much more complicated and has as a consequence that the associated HJB equation is combined with an algebraic equation, an approach inspired by Wu and Yu [19]. We use the continuation method combined with the fixed point theorem to prove that the algebraic equation has a unique solution, and we also give a representation for this solution. On the other hand, we prove some new basic estimates for fully coupled FBSDEs under the monotonicity assumptions. In particular, we prove, under the Lipschitz and linear growth conditions, that fully coupled FBSDEs have a unique solution on a small time interval if the Lipschitz constant of $\sigma$ with respect to $z$ is sufficiently small. We also establish a generalized comparison theorem for such fully coupled FBSDEs.
    02/2013;
  • Juan Li
    ABSTRACT: Mathematical mean-field approaches have been used in many fields, not only in Physics and Chemistry, but recently also in Finance, Economics, and Game Theory. In this paper we study a new special mean-field problem by a purely probabilistic method, in order to characterize its limit, which is the solution of a mean-field backward stochastic differential equation (BSDE) with reflections. On the other hand, we prove that this type of reflected mean-field BSDE can also be obtained as the limit equation of mean-field BSDEs by a penalization method. Finally, we give a probabilistic interpretation of nonlinear and nonlocal partial differential equations with obstacles via the solutions of reflected mean-field BSDEs.
    10/2012;
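The penalization method mentioned above can be sketched in its standard, non-mean-field form; the driver $f$, terminal value $\xi$, and lower obstacle $S$ below are schematic placeholders, not the paper's mean-field coefficients:

```latex
% Penalized BSDE approximating a reflected BSDE with lower obstacle S:
Y^n_t = \xi + \int_t^T f(s, Y^n_s, Z^n_s)\,ds
      + n \int_t^T (Y^n_s - S_s)^-\,ds - \int_t^T Z^n_s\,dB_s .
% As n -> \infty, the penalty process K^n_t := n \int_0^t (Y^n_s - S_s)^- ds
% converges to the reflecting process K, and (Y^n, Z^n, K^n) converges to the
% solution (Y, Z, K) of the reflected BSDE, which satisfies Y_t >= S_t.
```

The penalty term pushes $Y^n$ upward whenever it falls below the obstacle, with a force growing in $n$; in the limit this yields the minimal push keeping $Y$ above $S$.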
  • Juan Li
    ABSTRACT: In this paper we study stochastic optimal control problems for general fully coupled forward-backward stochastic differential equations (FBSDEs). In Li and Wei [8] the authors studied two cases of diffusion coefficients $\sigma$ of the FSDEs: in one case $\sigma$ depends on the control and does not depend on the second component of the solution $(Y, Z)$ of the BSDE, and in the other case $\sigma$ depends on $Z$ and does not depend on the control. Here we study the general case in which $\sigma$ depends on both $Z$ and the control at the same time. The recursive cost functionals are defined by controlled general fully coupled FBSDEs, and the value functions are given by taking the essential supremum of the cost functionals over all admissible controls. We give the formulation of the related generalized Hamilton-Jacobi-Bellman (HJB) equation, and prove that the value function is its viscosity solution.
    06/2012;
  • Juan Li, Shanjian Tang
    ABSTRACT: In this paper we study the optimal stochastic control problem for stochastic differential systems reflected in a domain. The cost functional is a recursive one, which is defined via generalized backward stochastic differential equations developed by Pardoux and Zhang [20]. The value function is shown to be the unique viscosity solution to the associated Hamilton-Jacobi-Bellman equation, which is a fully nonlinear parabolic partial differential equation with a nonlinear Neumann boundary condition. For this, we also prove some new estimates for stochastic differential systems reflected in a domain.
    02/2012;
  • ABSTRACT: In the present paper we investigate the problem of the existence of a value for differential games without the Isaacs condition. For this we introduce a suitable concept of mixed strategies along a partition of the time interval, which are associated with classical nonanticipative strategies (with delay). Imposing a conditional independence property on the underlying controls of both players, we obtain the existence of the value in mixed strategies as the limit of the lower as well as of the upper value functions along a sequence of partitions whose mesh tends to zero. Moreover, we characterize this value in mixed strategies as the unique viscosity solution of the corresponding Hamilton-Jacobi-Isaacs equation.
International Journal of Game Theory 02/2012; · 0.58 Impact Factor
  • ABSTRACT: In this work we investigate regularity properties of a large class of Hamilton-Jacobi-Bellman (HJB) equations with or without obstacles, which admit a stochastic interpretation in the form of a stochastic control system whose nonlinear cost functional is defined with the help of a backward stochastic differential equation (BSDE) or a reflected BSDE (RBSDE). More precisely, we prove, firstly, that the unique viscosity solution $V(t,x)$ of such an HJB equation over the time interval $[0,T]$, with or without an obstacle, and with terminal condition at time $T$, is jointly Lipschitz in $(t,x)$ for $t$ running over any compact subinterval of $[0,T)$. Secondly, for the case that $V$ solves an HJB equation without an obstacle or with an upper obstacle, it is shown under appropriate assumptions that $V(t,x)$ is jointly semiconcave in $(t,x)$. These results extend earlier ones by Buckdahn, Cannarsa and Quincampoix [1]. Our approach embeds their idea of time change into a BSDE analysis. We also provide an elementary counterexample which shows that, in general, for the case that $V$ solves an HJB equation with a lower obstacle, the semiconcavity does not hold.
    02/2012;
  • Juan Li
    ABSTRACT: In Buckdahn, Djehiche, Li, and Peng (2009), the authors obtained mean-field Backward Stochastic Differential Equations (BSDEs) in a natural way as a limit of a high-dimensional system of forward and backward SDEs corresponding to a great number of "particles" (or "agents"). The objective of the present paper is to deepen the investigation of such mean-field BSDEs by studying their stochastic maximum principle (SMP). The SMP for mean-field controls differs from the classical one. We deduce an SMP in integral form and, under additional assumptions, obtain necessary as well as sufficient conditions for the optimality of a control. As an application, we study a linear-quadratic stochastic control problem of mean-field type.
    Automatica. 01/2012; 48:366-373.
  • ABSTRACT: We study the optimal control of stochastic differential equations (SDEs) of mean-field type, in which the coefficients depend on the state of the solution process as well as on its expected value. Moreover, the cost functional is also of mean-field type. This makes the control problem time inconsistent in the sense that the Bellman optimality principle does not hold. For a general action space a Peng-type stochastic maximum principle (Peng, S.: SIAM J. Control Optim. 28(4), 966–979, 1990) is derived, specifying the necessary conditions for optimality. This maximum principle differs from the classical one in the sense that here the first order adjoint equation turns out to be a linear mean-field backward SDE, while the second order adjoint equation remains the same as in Peng's stochastic maximum principle. Keywords: Stochastic control, Maximum principle, Mean-field SDE, McKean-Vlasov equation, Time-inconsistent control
    Applied Mathematics and Optimization 01/2011; 64(2):197-216. · 0.86 Impact Factor
  • ABSTRACT: In this paper we study the integral-partial differential equations of Isaacs' type via zero-sum two-player stochastic differential games (SDGs) with jump-diffusion. The results of Fleming and Souganidis (1989) [9] and those of Biswas (2009) [3] are extended: we investigate a controlled stochastic system driven by a Brownian motion and a Poisson random measure, with nonlinear cost functionals defined by controlled backward stochastic differential equations (BSDEs). Furthermore, unlike in the two papers cited above, the admissible control processes of the two players are allowed to depend on all events from the past. This quite natural generalization permits the players to take such earlier information into account, and it makes it more convenient to establish the dynamic programming principle (DPP). However, the cost functionals are then no longer deterministic, and hence the upper and the lower value functions are a priori random fields. We use a new method to prove that, indeed, the upper and the lower value functions are deterministic. On the other hand, thanks to BSDE methods (Peng, 1997) [18] we can directly prove a DPP for the upper and the lower value functions, and also that both functions are the unique viscosity solutions of the upper and the lower integral-partial differential equations of Hamilton-Jacobi-Bellman-Isaacs type, respectively. Moreover, the existence of the value of the game is obtained in this more general setting under the Isaacs condition.
    Stochastic Processes and their Applications. 01/2011; 121(12):2715-2750.
  • Heng Du, Juan Li, Qingmeng Wei
    ABSTRACT: In this paper we establish an existence result for one-dimensional mean-field backward stochastic differential equations (MFBSDEs) whose coefficients are continuous, non-decreasing in $y$, and of linear growth. We obtain the existence of a minimal solution and a comparison theorem for this kind of MFBSDE.
    01/2011;
  • ABSTRACT: In this paper we study zero-sum two-player stochastic differential games with jumps with the help of the theory of Backward Stochastic Differential Equations (BSDEs). We generalize the results of Fleming and Souganidis [10] and those of Biswas [3] by considering a controlled stochastic system driven by a d-dimensional Brownian motion and a Poisson random measure and by associating nonlinear cost functionals defined by controlled BSDEs. Moreover, unlike both papers cited above, we allow the admissible control processes of both players to depend on all events occurring before the beginning of the game. This quite natural extension allows the players to take such earlier events into account, and it makes it even easier to derive the dynamic programming principle. The price to pay is that the cost functionals become random variables, so that the upper and the lower value functions of the game are a priori random fields. A new method allows us to prove that, in fact, the upper and the lower value functions are deterministic. On the other hand, the application of BSDE methods [18] allows us to prove a dynamic programming principle for the upper and the lower value functions in a very straightforward way, as well as the fact that they are the unique viscosity solutions of the upper and the lower integral-partial differential equations of Hamilton-Jacobi-Bellman-Isaacs type, respectively. Finally, the existence of the value of the game is obtained in this more general setting if the Isaacs condition holds.
    04/2010;
  • Juan Li, Shige Peng
    ABSTRACT: In this paper we study stochastic optimal control problems with jumps with the help of the theory of Backward Stochastic Differential Equations (BSDEs) with jumps. We generalize the results of Peng [S. Peng, BSDE and stochastic optimizations, in: J. Yan, S. Peng, S. Fang, L. Wu, Topics in Stochastic Analysis, Science Press, Beijing, 1997 (Chapter 2) (in Chinese)] by considering cost functionals defined by controlled BSDEs with jumps. The application of BSDE methods, in particular, the use of the notion of stochastic backward semigroups introduced by Peng in the above-mentioned work allows a straightforward proof of a dynamic programming principle for value functions associated with stochastic optimal control problems with jumps. We prove that the value functions are the viscosity solutions of the associated generalized Hamilton–Jacobi–Bellman equations with integral-differential operators. For this proof, we adapt Peng’s BSDE approach, given in the above-mentioned reference, developed in the framework of stochastic control problems driven by Brownian motion to that of stochastic control problems driven by Brownian motion and Poisson random measure.
    Nonlinear Analysis: Theory, Methods & Applications. 01/2009;
  • Rainer Buckdahn, Juan Li
    ABSTRACT: In this paper we investigate zero-sum two-player stochastic differential games whose cost functionals are given by doubly controlled reflected backward stochastic differential equations (RBSDEs) with two barriers. For admissible controls which can depend on the whole past and so include, in particular, information occurring before the beginning of the game, the games are interpreted as games of the type "admissible strategy" against "admissible control", and the associated lower and upper value functions are studied. A priori random, they are shown to be deterministic, and it is proved that they are the unique viscosity solutions of the associated upper and lower Bellman-Isaacs equations with two barriers, respectively. For the proofs we make full use of the penalization method for RBSDEs with one barrier and for RBSDEs with two barriers. To this end we also prove new estimates for RBSDEs with two barriers, which are sharper than those in [18]. Furthermore, we show that the viscosity solution of the Isaacs equation with two reflecting barriers can be approximated not only by the viscosity solutions of penalized Isaacs equations with one barrier, but also directly by the viscosity solutions of penalized Isaacs equations without a barrier.
    Nonlinear Differential Equations and Applications NoDEA 05/2008; · 0.67 Impact Factor
  • ABSTRACT: In [5] the authors obtained mean-field backward stochastic differential equations (BSDEs) associated with a mean-field stochastic differential equation (SDE) in a natural way, as the limit of a high-dimensional system of forward and backward SDEs corresponding to a large number of "particles" (or "agents"). The objective of the present paper is to deepen the investigation of such mean-field BSDEs by studying them in a more general framework, with a general driver, and to discuss comparison results for them. In a second step we are interested in partial differential equations (PDEs) whose solutions can be stochastically interpreted in terms of mean-field BSDEs. For this we study a mean-field BSDE in a Markovian framework, associated with a mean-field forward equation. By combining classical BSDE methods, in particular that of "backward semigroups" introduced by Peng [14], with specific arguments for mean-field BSDEs, we prove that this mean-field BSDE describes the viscosity solution of a nonlocal PDE. The uniqueness of this viscosity solution is obtained in the space of continuous functions with polynomial growth. With the help of an example it is shown that for the nonlocal PDEs associated with mean-field BSDEs one cannot expect uniqueness in a larger space of continuous functions.
    Stochastic Processes and their Applications 12/2007; · 0.95 Impact Factor
  • ABSTRACT: Mathematical mean-field approaches play an important role in different fields of Physics and Chemistry, but have in recent works also found application in Economics, Finance and Game Theory. The objective of our paper is to investigate a special mean-field problem in a purely stochastic approach: for the solution $(Y,Z)$ of a mean-field backward stochastic differential equation driven by a forward stochastic differential equation of McKean-Vlasov type with solution $X$, we study a special approximation by the solution $(X^N,Y^N,Z^N)$ of some decoupled forward-backward equation whose coefficients are governed by $N$ independent copies of $(X^N,Y^N,Z^N)$. We show that the convergence speed of this approximation is of order $1/\sqrt{N}$. Moreover, our special choice of the approximation allows us to characterize the limit behavior of $\sqrt{N}(X^N-X,Y^N-Y,Z^N-Z)$. We prove that this triplet converges in law to the solution of some forward-backward stochastic differential equation of mean-field type, which is governed not only by a Brownian motion but also by an independent Gaussian field.
    The Annals of Probability 11/2007; · 1.38 Impact Factor
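The particle approximation studied in the abstract above can be illustrated numerically. The sketch below is a minimal Euler-Maruyama scheme for a hypothetical mean-reverting McKean-Vlasov drift (the function `simulate_particles` and all coefficients are illustrative assumptions, not the paper's model): the particles interact only through their empirical mean, which stands in for the law of the limit equation.

```python
import numpy as np

def simulate_particles(n_particles=2000, n_steps=200, T=1.0, x0=1.0,
                       sigma=0.3, seed=0):
    """Euler-Maruyama scheme for the interacting N-particle system
        dX^i_t = -(X^i_t - (1/N) sum_j X^j_t) dt + sigma dB^i_t,  X^i_0 = x0,
    whose mean-field limit is the McKean-Vlasov SDE
        dX_t = -(X_t - E[X_t]) dt + sigma dB_t."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    x = np.full(n_particles, float(x0))
    for _ in range(n_steps):
        # Interaction through the empirical mean: each particle reverts
        # toward the average position of all particles.
        x = x - (x - x.mean()) * dt \
              + sigma * np.sqrt(dt) * rng.standard_normal(n_particles)
    return x

# In the limit equation E[X_t] = x0 for all t, so the empirical mean of the
# particle system stays near x0, with fluctuations of order 1/sqrt(N) --
# consistent with the convergence rate discussed in the abstract.
x_T = simulate_particles()
print(x_T.mean())
```

The deviation of the empirical mean from $x_0$ is exactly the kind of $\sqrt{N}$-rescaled fluctuation whose limit law the paper characterizes.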

Publication Stats

140 Citations
5.52 Total Impact Points

Institutions

  • 2007–2013
    • Shandong University
      • School of Mathematics and Statistics
      • Department of Pure Mathematics
      Jinan, Shandong, China
  • 2006–2009
    • Fudan University
      • Institute of Mathematics
      Shanghai, China