Article

Adaptive importance sampling with forward-backward stochastic differential equations


Abstract

We describe an adaptive importance sampling algorithm for rare events that is based on a dual stochastic control formulation of a path sampling problem. Specifically, we focus on path functionals that have the form of cumulant generating functions, which appear relevant in the context of, e.g., molecular dynamics, and we discuss the construction of an optimal (i.e. minimum variance) change of measure by solving a stochastic control problem. We show that the associated semi-linear dynamic programming equations admit an equivalent formulation as a system of uncoupled forward-backward stochastic differential equations that can be solved efficiently by a least squares Monte Carlo algorithm. We illustrate the approach with a suitable numerical example and discuss the extension of the algorithm to high-dimensional systems.
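As a rough, self-contained sketch (not the authors' implementation) of how a least squares Monte Carlo algorithm can solve such an uncoupled FBSDE, consider a scalar toy problem with Brownian forward dynamics and the quadratic driver arising from the logarithmic transformation Y_t = -log E[exp(-g(X_T)) | X_t]; the discretization, basis choice, and parameter values below are illustrative assumptions:

```python
import numpy as np

# Toy least-squares Monte Carlo for an uncoupled FBSDE (1-d sketch).
# Forward: X_t = Brownian motion. Backward: dY = (1/2) Z^2 dt + Z dW,
# Y_T = g(X_T), so Y_t = -log E[exp(-g(X_T)) | X_t].
# With g(x) = x the exact solution is Y(t, x) = x - (T - t)/2, Z = 1.

rng = np.random.default_rng(0)
T, N, M = 1.0, 20, 20_000            # horizon, time steps, sample paths
dt = T / N

dW = rng.normal(0.0, np.sqrt(dt), size=(M, N))
X = np.concatenate([np.zeros((M, 1)), np.cumsum(dW, axis=1)], axis=1)

def g(x):                            # terminal cost (illustrative choice)
    return x

Y = g(X[:, N])                       # terminal condition Y_{t_N}
for n in range(N - 1, -1, -1):
    if n == 0:                       # X_0 is deterministic: plain averages
        Z = np.mean(Y * dW[:, 0]) / dt
        Y0 = np.mean(Y - 0.5 * Z**2 * dt)
        break
    basis = np.column_stack([np.ones(M), X[:, n]])   # polynomial basis {1, x}
    # Z_{t_n} ~ E[Y_{t_{n+1}} dW_n | X_{t_n}] / dt via least-squares regression
    cz, *_ = np.linalg.lstsq(basis, Y * dW[:, n] / dt, rcond=None)
    Zn = basis @ cz
    # Y_{t_n} ~ E[Y_{t_{n+1}} - (1/2) Z^2 dt | X_{t_n}]
    cy, *_ = np.linalg.lstsq(basis, Y - 0.5 * Zn**2 * dt, rcond=None)
    Y = basis @ cy

print(Y0)   # exact value here is -0.5
```

For this terminal condition the exact value function is Y(t, x) = x - (T - t)/2, so the estimate should land near -0.5; for richer problems one replaces the {1, x} basis by a larger dictionary of functions, and the fitted Z plays the role of the importance sampling control.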


... In recent years the framework of G-expectation has found increasing application in finance and economics: e.g., Epstein and Ji [17,16] study asset pricing with ambiguity preferences, Beissner [5] studies equilibrium theory with ambiguous volatility, and many others, see e.g. [50,6,51]; see also [25,26,27] for numerical methods. ...
... Theorem 19 (Stability theorem for G-BSDEs with jumps) Let (p^n, q^n, r^n) and (p^*, q^*, r^*) be the solutions of (26) and (28), respectively. We have ...
Preprint
This paper is concerned with optimal control of systems driven by G-stochastic differential equations (G-SDEs) with a controlled jump term. We study the relaxed problem, in which admissible controls are measure-valued processes and the state variable is governed by a G-SDE driven by a counting measure valued process called a relaxed Poisson measure, whose compensator is a product measure. Under some conditions on the coefficients, using the G-chattering lemma, we show that the strict and the relaxed control problems have the same value function. Additionally, we derive a maximum principle for this relaxed problem.
... Specifically, we propose a reformulation of the semilinear dynamic programming equations of the optimal control problem as a pair of uncoupled forward-backward stochastic differential equations (FBSDE) that can be solved by Monte Carlo [28]. The advantage of the FBSDE approach is that it offers good control of the variance of the resulting estimators at low additional numerical cost. One of the key results of this paper is that the control that is obtained from the solution to the FBSDE acts as a control variate that, when augmented by an additional bias, produces a whole family of zero-variance estimators. ...
... In addition to considering only uncontrolled forward trajectories, we add the control v_s = -Z_s as described in (27)-(28). More precisely, we use the approximation of the optimal control from a previous iteration step ...
Preprint
We propose an adaptive importance sampling scheme for the simulation of rare events when the underlying dynamics is given by a diffusion. The scheme is based on a Gibbs variational principle that is used to determine the optimal (i.e. zero-variance) change of measure and exploits the fact that the latter can be rephrased as a stochastic optimal control problem. The control problem can be solved by a stochastic approximation algorithm, using the Feynman-Kac representation of the associated dynamic programming equations, and we discuss numerical aspects for high-dimensional problems along with simple toy examples.
... Some theoretical bounds for the KL-type losses above were established in [29]. Besides the PG-based algorithms, other related importance sampling methods include the well-known forward-backward stochastic differential equation (FBSDE) approaches [36,20,60], where one approximates the target value Z via the solution of some SDE with given terminal-time state and a forward filtration. ...
Preprint
The particle filter (PF), also known as the sequential Monte Carlo (SMC), is designed to approximate high-dimensional probability distributions and their normalizing constants in the discrete-time setting. To reduce the variance of the Monte Carlo approximation, several twisted particle filters (TPF) have been proposed by researchers, where one chooses or learns a twisting function that modifies the Markov transition kernel. In this paper, we study the TPF from a continuous-time perspective. Under suitable settings, we show that the discrete-time model converges to a continuous-time limit, which can be solved through a series of well-studied control-based importance sampling algorithms. This discrete-continuous connection allows the design of new TPF algorithms inspired by established continuous-time algorithms. As a concrete example, guided by existing importance sampling algorithms in the continuous-time setting, we propose a novel algorithm called "Twisted-Path Particle Filter" (TPPF), where the twist function, parameterized by neural networks, minimizes specific KL-divergence between path measures. Some numerical experiments are given to illustrate the capability of the proposed algorithm.
... Using an additional bias can be useful in situations in which the terminal condition is difficult to sample, resulting in a large sample variance of the loss function or its gradient (see Kebiri et al. 2019). ...
Article
Full-text available
One of the main challenges in molecular dynamics is overcoming the ‘timescale barrier’: in many realistic molecular systems, biologically important rare transitions occur on timescales that are not accessible to direct numerical simulation, even on the largest or specifically dedicated supercomputers. This article discusses how to circumvent the timescale barrier by a collection of transfer operator-based techniques that have emerged from dynamical systems theory, numerical mathematics and machine learning over the last two decades. We will focus on how transfer operators can be used to approximate the dynamical behaviour on long timescales, review the introduction of this approach into molecular dynamics, and outline the respective theory, as well as the algorithmic development, from the early numerics-based methods, via variational reformulations, to modern data-based techniques utilizing and improving concepts from machine learning. Furthermore, its relation to rare event simulation techniques will be explained, revealing a broad equivalence of variational principles for long-time quantities in molecular dynamics. The article will mainly take a mathematical perspective and will leave the application to real-world molecular systems to the more than 1000 research articles already written on this subject.
... Z_{t_{n+1}} = σ(k_{t_n}) ∇_0 V(t_n, k_{t_n}) to get Y_{t_n}. By the least squares method [19], ...
Preprint
Full-text available
This paper investigates the existence of a G-relaxed optimal control of a controlled stochastic differential delay equation driven by G-Brownian motion (G-SDDE in short). First, we show that an optimal control of the G-SDDE exists in the finite horizon case. As an application of our result, we present an economic model represented by a G-SDDE and study its optimization. We connect the corresponding Hamilton-Jacobi-Bellman equation of our controlled system to a decoupled G-forward-backward stochastic differential delay equation (G-FBSDDE in short). Finally, we simulate this G-FBSDDE to obtain the optimal strategy and cost.
... A similar approach for BSDEs can be found in [18]. There is no convergence analysis of this scheme under our assumptions on the coefficients; it is only meant to give an idea of how to solve the adjoint equation in practice. ...
Article
Full-text available
We are interested in the optimal control problem associated with certain quadratic cost functionals depending on the solution X = X^α of the stochastic mean-field type evolution equation in R^d

dX_t = b(t, X_t, L(X_t), α_t) dt + σ(t, X_t, L(X_t), α_t) dW_t,   X_0 ∼ μ (μ given),   (1)

under assumptions that enclose a system of FitzHugh–Nagumo neuron networks, and where for practical purposes the control α_t is deterministic. To do so, we assume that we are given a drift coefficient that satisfies a one-sided Lipschitz condition, and that the dynamics (1) satisfies an almost sure boundedness property of the form π(X_t) ≤ 0. The mathematical treatment we propose follows the lines of the recent monograph of Carmona and Delarue for similar control problems with Lipschitz coefficients. After addressing the existence of minimizers via a martingale approach, we show a maximum principle for (1) and numerically investigate a gradient algorithm for the approximation of the optimal control.
... The application of FBSDE in the standard case has been of interest to many authors; we refer to [16,23,20]. ...
Article
In this paper, we study the existence and uniqueness of solutions of coupled G-forward-backward stochastic differential equations (G-FBSDEs in short). Our systems are described by coupled multi-dimensional G-FBSDEs. We construct a mapping whose fixed point is the solution of our G-FBSDE, and we prove that this mapping is a contraction. We do not require a monotonicity condition to prove existence.
... We apply the previously described FBSDE scheme (38), (42), (43)-(46), which was shown to yield good results in [40], to both the full and the reduced system, and we choose n = 3, i.e. the full system is six-dimensional. To this end we choose the basis functions ...
Article
Full-text available
We study linear-quadratic stochastic optimal control problems with bilinear state dependence for which the underlying stochastic differential equation (SDE) consists of slow and fast degrees of freedom. We show that, in the same way in which the underlying dynamics can be well approximated by a reduced order effective dynamics in the time scale limit (using classical homogenization results), the associated optimal expected cost converges in the time scale limit to an effective optimal cost. This entails that we can well approximate the stochastic optimal control for the whole system by the reduced order stochastic optimal control, which is clearly easier to solve because of lower dimensionality. The approach uses an equivalent formulation of the Hamilton-Jacobi-Bellman (HJB) equation, in terms of forward-backward SDEs (FBSDEs). We exploit the efficient solvability of FBSDEs via a least squares Monte Carlo algorithm and show its applicability by a suitable numerical example.
Article
Full-text available
We explore efficient estimation of statistical quantities, particularly rare event probabilities, for stochastic reaction networks. Consequently, we propose an importance sampling (IS) approach to improve the Monte Carlo (MC) estimator efficiency based on an approximate tau-leap scheme. The crucial step in the IS framework is choosing an appropriate change of probability measure to achieve substantial variance reduction. This task is typically challenging and often requires insights into the underlying problem. Therefore, we propose an automated approach to obtain a highly efficient path-dependent measure change based on an original connection in the stochastic reaction network context between finding optimal IS parameters within a class of probability measures and a stochastic optimal control formulation. Optimal IS parameters are obtained by solving a variance minimization problem. First, we derive an associated dynamic programming equation. Analytically solving this backward equation is challenging, hence we propose an approximate dynamic programming formulation to find near-optimal control parameters. To mitigate the curse of dimensionality, we propose a learning-based method to approximate the value function using a neural network, where the parameters are determined via a stochastic optimization algorithm. Our analysis and numerical experiments verify that the proposed learning-based IS approach substantially reduces MC estimator variance, resulting in a lower computational complexity in the rare event regime, compared with standard tau-leap MC estimators.
Article
Full-text available
Optimal control of diffusion processes is intimately connected to the problem of solving certain Hamilton–Jacobi–Bellman equations. Building on recent machine learning inspired approaches towards high-dimensional PDEs, we investigate the potential of iterative diffusion optimisation techniques, in particular considering applications in importance sampling and rare event simulation, and focusing on problems without diffusion control, with linearly controlled drift and running costs that depend quadratically on the control. More generally, our methods apply to nonlinear parabolic PDEs with a certain shift invariance. The choice of an appropriate loss function being a central element in the algorithmic design, we develop a principled framework based on divergences between path measures, encompassing various existing methods. Motivated by connections to forward-backward SDEs, we propose and study the novel log-variance divergence, showing favourable properties of corresponding Monte Carlo estimators. The promise of the developed approach is exemplified by a range of high-dimensional and metastable numerical examples.
Article
We propose an adaptive importance sampling scheme for the simulation of rare events when the underlying dynamics is given by a diffusion. The scheme is based on a Gibbs variational principle that is used to determine the optimal (i.e., zero-variance) change of measure and exploits the fact that the latter can be rephrased as a stochastic optimal control problem. The control problem can be solved by a stochastic approximation algorithm, using the Feynman–Kac representation of the associated dynamic programming equations, and we discuss numerical aspects for high-dimensional problems along with simple toy examples.
Article
Full-text available
The article surveys and extends variational formulations of the thermodynamic free energy and discusses their information-theoretic content from the perspective of mathematical statistics. We revisit the well-known Jarzynski equality for nonequilibrium free energy sampling within the framework of importance sampling and Girsanov change-of-measure transformations. The implications of the different variational formulations for designing efficient stochastic optimization and nonequilibrium simulation algorithms for computing free energies are discussed and illustrated.
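The variational formulations surveyed above revolve around the Gibbs (Donsker–Varadhan) variational principle for the free energy. In schematic notation (my own: W a path functional or "work", P the reference path measure, Q a candidate change of measure), it can be stated as

```latex
-\log \mathbb{E}_P\!\left[e^{-W}\right]
  \;=\; \min_{Q \ll P} \Big\{ \mathbb{E}_Q[W] + D_{\mathrm{KL}}(Q \,\|\, P) \Big\},
\qquad
\frac{dQ^*}{dP} \;=\; \frac{e^{-W}}{\mathbb{E}_P\!\left[e^{-W}\right]} .
```

The minimizer Q* is exactly the zero-variance change of measure targeted by the importance sampling schemes discussed here, and Girsanov transformations realize candidate measures Q as controlled dynamics.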
Article
Full-text available
In this paper we address the use of rare event computation techniques to estimate small over-threshold probabilities of observables in deterministic dynamical systems. We demonstrate that the genealogical particle analysis algorithms can be successfully applied to a toy model of atmospheric dynamics, the Lorenz '96 model. We furthermore use the Ornstein-Uhlenbeck system to illustrate a number of implementation issues. We also show how a time-dependent objective function based on the fluctuation path to a high threshold can greatly improve the performance of the estimator compared to a fixed-in-time objective function.
Article
Full-text available
A good deal of molecular dynamics simulation aims at predicting and quantifying rare events, such as the folding of a protein or a phase transition. Simulating rare events is often prohibitive, especially if the equations of motion are high-dimensional, as is the case in molecular dynamics. Various algorithms have been proposed for efficiently computing mean first passage times, transition rates or reaction pathways. This article surveys and discusses recent developments in the field of rare event simulation and outlines a new approach that combines ideas from optimal control and statistical mechanics. The optimal control approach described in detail resembles the use of Jarzynski's equality for free energy calculations, but with an optimized protocol that speeds up the sampling, while (theoretically) giving variance-free estimators of the rare event statistics. We illustrate the new approach with two numerical examples and discuss its relation to existing methods.
Article
Full-text available
An importance sampling method for certain rare event problems involving small noise diffusions is proposed. Standard Monte Carlo schemes for these problems behave exponentially poorly in the small noise limit. Previous work in rare event simulation has focused on developing estimators with optimal exponential variance decay rates. This criterion still allows for exponential growth of the statistical relative error. We show that an estimator related to a deterministic control problem not only has an optimal variance decay rate but can have vanishingly small relative statistical error in the small noise limit. The sampling method based on this estimator can be seen as the limit of the zero variance importance sampling scheme, which uses the solution of the second-order partial differential equation (PDE) associated with the diffusion. In the scheme proposed here this PDE is replaced by a Hamilton-Jacobi equation whose solution is computed pointwise on the fly from its variational formulation, an operation that remains practical even in high-dimensional problems. We test the scheme on several simple illustrative examples as well as a stochastic PDE, the noisy Allen-Cahn equation. © 2012 Wiley Periodicals, Inc.
Article
Full-text available
This paper introduces a "dual" way to price American options, based on simulating the paths of the option payoff and of a judiciously chosen Lagrangian martingale. Taking the pathwise maximum of the payoff less the martingale provides an upper bound for the price of the option, and this bound is sharp for the optimal choice of Lagrangian martingale. As a first exploration of this method, four examples are investigated numerically; the accuracy achieved with even very simple choices of Lagrangian martingale is surprising. The method also leads naturally to candidate hedging policies for the option, and estimates of the risk involved in using them. Copyright 2002 Blackwell Publishing, Inc.
Article
Full-text available
We consider duality relations between risk-sensitive stochastic control problems and dynamic games. They are derived from two basic duality results, the first involving free energy and relative entropy and resulting from a Legendre-type transformation, the second involving power functions. Our approach allows us to treat, in essentially the same way, continuous- and discrete-time problems, with complete and partial state observation, and leads to a very natural formal justification of the structure of the cost functional of the dual. It also allows us to obtain the solution of a stochastic game problem by solving a risk-sensitive control problem.
Article
Full-text available
We are concerned with the numerical resolution of backward stochastic differential equations. We propose a new numerical scheme based on iterative regressions on function bases, whose coefficients are evaluated using Monte Carlo simulations. A full convergence analysis is derived. Numerical experiments from finance are included, in particular concerning option pricing with differential interest rates.
Article
We establish the existence of an optimal control for a system driven by a coupled forward–backward stochastic differential equation (FBSDE) whose diffusion coefficient may degenerate (i.e. is not necessarily uniformly elliptic). The cost functional is defined as the initial value of the backward component of the solution. We construct a sequence of approximating controlled systems, for which we show the existence of a sequence of feedback optimal controls. By passing to the limit, we get the existence of a feedback optimal control. Filippov's convexity condition is used to ensure that the optimal control is strict. The present result extends those obtained in [2,4] to controlled systems of coupled SDE–BSDE.
Article
We study the cross-entropy method for diffusions. One of the results is a versatile cross-entropy algorithm that can be used to design efficient importance sampling strategies for rare events or to solve optimal control problems. The approach is based on the minimization of a suitable cross-entropy functional, with a parametric family of exponentially tilted probability distributions. We illustrate the new algorithm with several numerical examples and discuss algorithmic issues and possible extensions of the method.
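As an illustrative toy version of such a cross-entropy iteration (a deliberate simplification: practical implementations typically use adaptive elite-quantile thresholds rather than the raw rare-event indicator, and the diffusion setting replaces the one-dimensional tilt below by a parametrized control; all names and parameters here are assumptions, not from the paper):

```python
import numpy as np
from math import erf, sqrt

# Cross-entropy iteration for a toy rare event P(X > a), X ~ N(0, 1),
# using the exponentially tilted family N(theta, 1).

rng = np.random.default_rng(1)
a, M = 4.0, 100_000
theta = 0.0
for _ in range(10):
    x = rng.normal(theta, 1.0, M)
    w = np.exp(-theta * x + 0.5 * theta**2)   # likelihood ratio dN(0,1)/dN(theta,1)
    hit = (x > a) * w
    if hit.sum() == 0:
        theta += 1.0                          # no hits yet: push the mean upward
        continue
    theta = (x * hit).sum() / hit.sum()       # CE update for the Gaussian family

# Final importance sampling estimate under the tilted measure
x = rng.normal(theta, 1.0, M)
w = np.exp(-theta * x + 0.5 * theta**2)
p_is = np.mean((x > a) * w)
p_exact = 0.5 * (1 - erf(a / sqrt(2)))        # P(X > 4), about 3.2e-5
```

The CE update drives theta toward E[X | X > a] (about 4.23 here), i.e. the tilted distribution concentrates where the rare event happens, after which the weighted estimator p_is has a small relative error even though the event is far in the tail.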
Book
Sampling-based computational methods have become a fundamental part of the numerical toolset of practitioners and researchers across an enormous number of different applied domains and academic disciplines. This book provides a broad treatment of such sampling-based methods, as well as accompanying mathematical analysis of the convergence properties of the methods discussed. The reach of the ideas is illustrated by discussing a wide range of applications and the models that have found wide usage. The first half of the book focuses on general methods, whereas the second half discusses model-specific algorithms. Given the wide range of examples, exercises and applications students, practitioners and researchers in probability, statistics, operations research, economics, finance, engineering as well as biology and chemistry and physics will find the book of value. Søren Asmussen is a professor of Applied Probability at Aarhus University, Denmark and Peter Glynn is the Thomas Ford professor of Engineering at Stanford University.
Article
Existence and uniqueness results for fully coupled forward-backward stochastic differential equations with an arbitrarily large time duration are obtained. Some stochastic Hamilton systems arising in stochastic optimal control systems and mathematical finance can be treated within our framework.
Article
A heuristic that has emerged in the area of importance sampling is that the changes of measure used to prove large deviation lower bounds give good performance when used for importance sampling. Recent work, however, has suggested that the heuristic is incorrect in many situations. The perspective put forth in the present paper is that large deviation theory suggests many changes of measure, and that not all are suitable for importance sampling. In the setting of Cramer's Theorem, the traditional interpretation of the heuristic suggests a fixed change of distribution on the underlying independent and identically distributed summands. In contrast, we consider importance sampling schemes where the exponential change of measure is adaptive, in the sense that it depends on the historical empirical mean. The existence of asymptotically optimal schemes within this class is demonstrated. The result indicates that an adaptive change of measure, rather than a static change of measure, is what the large deviations analysis truly suggests. The proofs utilize a control-theoretic approach to large deviations, which naturally leads to the construction of asymptotically optimal adaptive schemes in terms of a limit Bellman equation. Numerical examples contrasting the adaptive and standard schemes are presented, as well as an interpretation of their different performances in terms of differential games.
Book
Stochastic optimization problems arise in decision-making problems under uncertainty, and find various applications in economics and finance. On the other hand, problems in finance have recently led to new developments in the theory of stochastic control. This volume provides a systematic treatment of stochastic optimization problems applied to finance by presenting the different existing methods: dynamic programming, viscosity solutions, backward stochastic differential equations, and martingale duality methods. The theory is discussed in the context of recent developments in this field, with complete and detailed proofs, and is illustrated by means of concrete examples from the world of finance: portfolio allocation, option hedging, real options, optimal investment, etc. This book is directed towards graduate students and researchers in mathematical finance, and will also benefit applied mathematicians interested in financial applications and practitioners wishing to know more about the use of stochastic optimization methods in finance.
Article
Rare event simulation and estimation for systems in equilibrium are among the most challenging topics in molecular dynamics. As was shown by Jarzynski and others, nonequilibrium forcing can theoretically be used to obtain equilibrium rare event statistics. The advantage seems to be that the external force can speed up the sampling of the rare events by biasing the equilibrium distribution towards a distribution under which the rare event is no longer rare. Yet algorithmic methods based on Jarzynski's and related results often fail to be efficient because they are based on sampling in path space. We present a new method that replaces the path sampling problem by minimization of a cross-entropy-like functional which boils down to finding the optimal nonequilibrium forcing. We show how to solve the related optimization problem in an efficient way by using an iterative strategy based on milestoning.
Article
In this paper, we consider a family of problems (0ϵ). The coefficients a_ij and b_i are smooth, periodic with respect to the second variable, and the matrix (a_ij)_ij is uniformly elliptic. The Hamiltonian H is locally Lipschitz continuous with respect to uϵ and Duϵ, and has quadratic growth with respect to Duϵ. The Hamilton-Jacobi-Bellman equations of some stochastic control problems are of this type. Our aim is to pass to the limit in (0ϵ) as ϵ tends to zero. We assume the coefficients b_i to be centered with respect to the invariant measure of the problem (see the main assumption (3.13)). We then derive L^∞, H^1 and W^{1,p_0}, p_0 > 2, estimates for the solutions of (0ϵ), and prove a corrector result that allows us to pass to the limit in (0ϵ) and obtain a limit problem (00) of the same type as the initial one. When (0ϵ) is the Hamilton-Jacobi-Bellman equation of a stochastic control problem, (00) is also a Hamilton-Jacobi-Bellman equation, but one corresponding to a modified set of controls.
Article
In this paper we first give a review of the least-squares Monte Carlo approach for approximating the solution of backward stochastic differential equations (BSDEs) first suggested by Gobet, Lemor, and Warin (Ann. Appl. Probab., 15, 2005, 2172–2202). We then propose the use of basis functions, which form a system of martingales, and explain how the least-squares Monte Carlo scheme can be simplified by exploiting the martingale property of the basis functions. We partially compare the convergence behavior of the original scheme and the scheme based on martingale basis functions, and provide several numerical examples related to option pricing problems under different interest rates for borrowing and investing.
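The martingale-basis simplification can be illustrated with a toy computation (my own sketch, not the paper's scheme): for Brownian motion, 1, w, and w² − t are martingales, so regression coefficients fitted at one time yield conditional expectations at earlier times by mere re-evaluation of the basis, with no second regression:

```python
import numpy as np

# Martingale basis idea: with basis functions that are martingales of the
# forward process, conditional expectations of fitted values come for free.
# For Brownian motion W: m0 = 1, m1(t, w) = w, m2(t, w) = w^2 - t.

rng = np.random.default_rng(2)
M = 200_000
w_half = rng.normal(0, np.sqrt(0.5), M)            # samples of W_{1/2}
w_one = w_half + rng.normal(0, np.sqrt(0.5), M)    # samples of W_1

# Regress the "terminal value" Y_1 = W_1^2 on the martingale basis at t = 1.
basis_1 = np.column_stack([np.ones(M), w_one, w_one**2 - 1.0])
c, *_ = np.linalg.lstsq(basis_1, w_one**2, rcond=None)

# E[Y_1 | F_{1/2}] is obtained by evaluating the SAME coefficients on the
# basis at t = 1/2 -- the martingale property replaces the second regression.
cond = c[0] + c[1] * w_half + c[2] * (w_half**2 - 0.5)

# Closed form: E[W_1^2 | W_{1/2} = w] = w^2 + 1/2.
err = np.max(np.abs(cond - (w_half**2 + 0.5)))
print(err)
```

Since W_1^2 lies exactly in the span of the martingale basis at t = 1, the fitted conditional expectation matches the closed form up to floating-point noise; in the BSDE scheme this trick removes one regression per time step.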
Article
Several widely used importance sampling methods for the estimation of failure probabilities are compared. The methods are briefly reviewed, and a set of evaluation criteria for the comparison of the methods is chosen. In order to perform a fair comparison the developers of the schemes were asked to solve a number of problems selected in view of the evaluation criteria. Their solutions are presented and discussed. Conclusions about the performances of the schemes under different circumstances are given.
Article
We prove the existence of optimal relaxed controls as well as strict optimal controls for systems governed by nonlinear forward–backward stochastic differential equations (FBSDEs). Our approach is based on weak convergence techniques for the associated FBSDEs in the Jakubowski S-topology and a suitable Skorokhod representation theorem.
Article
It was established in (6, 7) that importance sampling algorithms for estimating rare-event probabilities are intimately connected with two-person zero-sum differential games and the associated Isaacs equation. This game interpretation shows that dynamic or state-dependent schemes are needed in order to attain asymptotic optimality in a general setting. The purpose of the present paper is to show that classical subsolutions of the Isaacs equation can be used as a basic and flexible tool for the construction and analysis of efficient dynamic importance sampling schemes. There are two main contributions. The first is a basic theoretical result characterizing the asymptotic performance of importance sampling estimators based on subsolutions. The second is an explicit method for constructing classical subsolutions as a mollification of piecewise affine functions. Numerical examples are included for illustration and to demonstrate that simple, nearly asymptotically optimal importance sampling schemes can be obtained for a variety of problems via the subsolution approach.
Chapter
This accessible new edition explores the major topics in Monte Carlo simulation. Simulation and the Monte Carlo Method, Second Edition reflects the latest developments in the field and presents a fully updated and comprehensive account of the major topics that have emerged in Monte Carlo simulation since the publication of the classic First Edition over twenty-five years ago. While maintaining its accessible and intuitive approach, this revised edition features a wealth of up-to-date information that facilitates a deeper understanding of problem solving across a wide array of subject areas, such as engineering, statistics, computer science, mathematics, and the physical and life sciences. The book begins with a modernized introduction that addresses the basic concepts of probability, Markov processes, and convex optimization. Subsequent chapters discuss the dramatic changes that have occurred in the field of the Monte Carlo method, with coverage of many modern topics including: Markov chain Monte Carlo; variance reduction techniques such as the transform likelihood ratio method and the screening method; the score function method for sensitivity analysis; the stochastic approximation method and the stochastic counterpart method for Monte Carlo optimization; the cross-entropy method for rare event estimation and combinatorial optimization; and the application of Monte Carlo techniques to counting problems, with an emphasis on the parametric minimum cross-entropy method. An extensive range of exercises is provided at the end of each chapter, with more difficult sections and exercises marked accordingly for advanced readers. A generous sampling of applied examples is positioned throughout the book, emphasizing various areas of application, and a detailed appendix presents an introduction to exponential families, a discussion of the computational complexity of stochastic programming problems, and sample MATLAB programs.
Requiring only a basic, introductory knowledge of probability and statistics, Simulation and the Monte Carlo Method, Second Edition is an excellent text for upper-undergraduate and beginning graduate courses in simulation and Monte Carlo techniques. The book also serves as a valuable reference for professionals who would like to achieve a more formal understanding of the Monte Carlo method.
Article
In this paper we show that the variational representation

-\log E\, e^{-f(W)} = \inf_v E\left[ \tfrac{1}{2}\int_0^1 \|v_s\|^2\, ds + f\!\left(W + \int_0^{\cdot} v_s\, ds\right) \right]

holds, where W is a standard d-dimensional Brownian motion, f is any bounded measurable function mapping C([0,1]; R^d) into R, and the infimum is over all processes v that are progressively measurable with respect to the augmentation of the filtration generated by W. An application is made to a problem concerned with large deviations, and an extension to unbounded functions is given.
Article
We provide existence, comparison and stability results for one-dimensional backward stochastic differential equations (BSDEs) when the coefficient (or generator) F(t, Y, Z) is continuous and has quadratic growth in Z and the terminal condition is bounded. We also give, in this framework, the links between the solutions of BSDEs set on a diffusion and viscosity or Sobolev solutions of the corresponding semilinear partial differential equations.
Article
We develop a new method for pricing American options. The main practical contribution of this paper is a general algorithm for constructing upper and lower bounds on the true price of the option using any approximation to the option price. We show that our bounds are tight, so that if the initial approximation is close to the true price of the option, the bounds are also guaranteed to be close. We also explicitly characterize the worst-case performance of the pricing bounds. The computation of the lower bound is straightforward and relies on simulating the suboptimal exercise strategy implied by the approximate option price. The upper bound is also computed using Monte Carlo simulation. This is made feasible by the representation of the American option price as a solution of a properly defined dual minimization problem, which is the main theoretical result of this paper. Our algorithm proves to be accurate on a set of sample problems where we price call options on the maximum and the geometric mean of a collection of stocks. These numerical results suggest that our pricing method can be successfully applied to problems of practical interest.
Numerical methods for backward stochastic differential equations of quadratic and locally Lipschitz type
  • P Turkedjiev
P. Turkedjiev: Numerical methods for backward stochastic differential equations of quadratic and locally Lipschitz type, Dissertation, Humboldt-Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät II (2013).