Article · PDF available

Importance sampling and interacting particle systems for the estimation of Markovian credit portfolios loss distribution


Abstract

The goal of the paper is the numerical analysis of the performance of Monte Carlo simulation-based methods for the computation of credit-portfolio loss distributions in the context of Markovian intensity models of credit risk. We concentrate on two of the most frequently touted variance reduction methods for stochastic processes: importance sampling (IS) and interacting particle systems (IPS) based algorithms. Because the subtle differences between these methods are often misunderstood, IPS being frequently regarded as a mere particular case of IS, we describe the two kinds of algorithms in detail and highlight their fundamental differences. We then proceed to a detailed comparative case study based on benchmark numerical experiments chosen for their popularity in quantitative finance circles.
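Both families of methods target the same object: the distribution of the portfolio loss L(T), whose tail probabilities P(L(T) >= k) are tiny for senior loss levels k. The following minimal sketch of a crude Monte Carlo estimator, together with its relative error, illustrates why variance reduction is needed; the function name `simulate_loss` and its signature are assumptions made here for illustration, not part of the paper.

```python
import numpy as np

def plain_mc_tail(simulate_loss, k, n_paths, seed=0):
    """Crude Monte Carlo estimate of the tail probability P(L(T) >= k) and its
    relative standard error.  `simulate_loss(rng)` is any user-supplied routine
    returning one sample of the portfolio loss L(T); name and signature are
    illustrative assumptions, not part of the paper."""
    rng = np.random.default_rng(seed)
    hits = sum(simulate_loss(rng) >= k for _ in range(n_paths))
    p_hat = hits / n_paths
    # Relative error of the crude estimator is roughly 1/sqrt(n_paths * p):
    # it blows up as the event becomes rarer, which motivates IS and IPS.
    rel_err = np.sqrt((1.0 - p_hat) / (n_paths * p_hat)) if p_hat > 0 else np.inf
    return p_hat, rel_err
```

For a probability of order 10^-6, the relative-error formula shows that on the order of 10^8 paths would be needed to reach 10% accuracy, which is what motivates the IS and IPS approaches compared in the paper.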
... Remark 4. Note that the above interpretation is not limited to [0, 1]-valued potential functions G, as long as G is non-negative and bounded, and as long as we replace G_n by ε_n G_n in (15) and (16), with ε_n such that ε_n G_n ∈ [0, 1]. ...
... Though interacting particle systems are known to provide very efficient variance reductions in Monte Carlo approximations of rare events, these algorithms have only appeared recently in the credit risk literature with for instance the articles of Carmona, Fouque and Vestal [16] and Carmona and Crepey [15]. In Chapter 21 of [12], the authors provide an overview of the main techniques and results of the application of interacting particle systems to credit risk analysis. ...
... All these results show the strengths of IPS-based Monte Carlo computations of small default probabilities, especially when other methods fail. A systematic comparison with importance sampling is provided in [15]. ...
Article
Full-text available
The aim of this article is to give a general introduction to the theory of interacting particle methods, and an overview of its applications to computational finance. We survey the main techniques and results on interacting particle systems and explain how they can be applied to the numerical solution of a variety of financial applications such as pricing complex path dependent European options, computing sensitivities, pricing American options or numerically solving partially observed control and estimation problems.
... This happens quite often for the square-root diffusion which we choose as a model for the stochastic volatility. Several papers [3, 8] have already appeared since the first version of this paper was circulated. They show the strength of IPS-based Monte Carlo computations of small default probabilities, especially when other methods fail. ...
... They show the strength of IPS-based Monte Carlo computations of small default probabilities, especially when other methods fail. The interested reader is referred to [3] for a systematic comparison with importance sampling. The rest of the paper is organized as follows. ...
... For a given value of k, in contrast to a plain Monte Carlo computation, the IPS algorithm produces enough sample paths with k losses for the estimation procedure to be acceptable if we choose α appropriately. In the numerical computations reported below, we use an idea which could be traced back to [5], at least in an implicit form, and which was used systematically in [3]. Instead of choosing α and getting reasonable estimates of P(L(t) = k) for some values of k depending upon α, we reverse the procedure, and for each k, we pick the best α. ...
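The reversal described in this excerpt can be sketched as a simple post-processing step: run the IPS algorithm over a grid of values of the selection parameter α and, for each loss level k, keep the estimate with the smallest estimated variance. The helper `run_ips`, its return convention, and the variance criterion below are illustrative assumptions; they stand in for whatever IPS implementation is actually used.

```python
import numpy as np

def best_alpha_per_k(run_ips, alphas, n_levels):
    """For each loss level k, keep the IPS estimate produced by the value of
    alpha with the smallest estimated variance (the 'reversed' procedure
    described in the excerpt above).  `run_ips(alpha)` is assumed to return
    two arrays of length n_levels: estimates of P(L(t) = k) and their
    empirical variance estimates."""
    best_p = np.full(n_levels, np.nan)
    best_var = np.full(n_levels, np.inf)
    for alpha in alphas:
        p_hat, var_hat = run_ips(alpha)
        better = var_hat < best_var          # loss levels where this alpha wins
        best_p[better] = p_hat[better]
        best_var[better] = var_hat[better]
    return best_p, best_var
```

The cost is one IPS run per value of α, but each run contributes estimates for all loss levels simultaneously, so a modest grid of α values is usually enough.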
Article
Full-text available
In this paper, we introduce the use of interacting particle systems in the computation of probabilities of simultaneous defaults in large credit portfolios. The method can be applied to compute small historical as well as risk-neutral probabilities. It only requires that the model be based on a background Markov chain for which a simulation algorithm is available. We use the strategy developed by Del Moral and Garnier in (Ann. Appl. Probab. 15:2496–2534, 2005) for the estimation of random walk rare events probabilities. For the purpose of illustration, we consider a discrete-time version of a first passage model for default. We use a structural model with stochastic volatility, and we demonstrate the efficiency of our method in situations where importance sampling is not possible or numerically unstable.
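A minimal sketch of one selection/mutation cycle in the spirit of the Del Moral-Garnier scheme referenced in this abstract is given below, assuming an exponential potential G(x) = exp(α ΔV(x)) built from an increment ΔV of a user-chosen score function. The names `mutate` and `delta_v`, and the multinomial resampling, are assumptions for illustration rather than the exact specification of the cited work.

```python
import numpy as np

def ips_step(particles, mutate, delta_v, alpha, rng):
    """One selection/mutation cycle of an interacting particle system.
    particles      : list of current particle states (model-specific objects).
    mutate(x, rng) : one step of the underlying background Markov chain.
    delta_v(x)     : increment of the score function V entering the exponential
                     potential G(x) = exp(alpha * delta_v(x)).
    All names and the multinomial resampling are illustrative assumptions."""
    # Selection: resample particles with probabilities proportional to G.
    weights = np.exp(alpha * np.array([delta_v(x) for x in particles]))
    idx = rng.choice(len(particles), size=len(particles), p=weights / weights.sum())
    selected = [particles[i] for i in idx]
    # Mutation: move each selected particle with the original Markov dynamics.
    mutated = [mutate(x, rng) for x in selected]
    # The empirical mean of the potentials must be stored at every step: the
    # running product of these means is the Feynman-Kac normalizing constant
    # that restores unbiasedness of the final rare-event estimators.
    return mutated, weights.mean()
```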
... Then Monte Carlo methods, accompanied by variance reduction techniques (in order to remedy the slowness of the crude Monte Carlo method, see section 7.1), seem imperative. The main Monte Carlo-type method proposed for the purpose of computing CDO prices has been the interacting particle systems (IPS for short) approach, developed by Carmona and Crepey [CC09] and Carmona et al. [CFV09]. It is a more sophisticated variant of the importance sampling (IS for short) approach (see section 7.1), essentially suited for the computation of rare events (here, defaults) ...
... A variant of this model, using an additional time change, has been suggested by Ding et al. in [DGT09], and another variant has been presented by Arnsdorff and Halperin [AH08] and by Lopatin and Misirpashayev [LM07], where an additional stochastic factor is substituted for the constants c_0 and c_1. An exponential model has been suggested by Davis and Lo [DL01] (also used by Carmona and Crepey in [CC09]), where λ_t := c_0 e^{c_1 N_t}, expressing a fast contagion growth with respect to the number of defaults. ...
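In a homogeneous portfolio, an intensity of the form λ_t = c_0 e^{c_1 N_t} is constant between defaults, so default times can be simulated exactly with exponential clocks. The sketch below assumes each of the m surviving names carries that intensity, so the aggregate intensity while N_t = n is (m - n) c_0 e^{c_1 n}; this homogeneity assumption and the parameter values are ours, for illustration only. A routine of this kind could play the role of the generic `simulate_loss` sampler sketched after the abstract.

```python
import numpy as np

def simulate_default_count(m, T, c0, c1, rng):
    """Number of defaults N_T in a homogeneous portfolio of m names where each
    surviving name defaults with intensity c0 * exp(c1 * N_t) (exponential
    contagion in the spirit of Davis and Lo).  Between defaults the aggregate
    intensity (m - N_t) * c0 * exp(c1 * N_t) is constant, so the waiting time
    to the next default is exponential and the simulation is exact."""
    t, n = 0.0, 0
    while n < m:
        total_intensity = (m - n) * c0 * np.exp(c1 * n)
        t += rng.exponential(1.0 / total_intensity)
        if t > T:
            break
        n += 1
    return n

# Toy usage (parameter values are arbitrary):
rng = np.random.default_rng(0)
samples = [simulate_default_count(125, 1.0, 0.01, 0.1, rng) for _ in range(10_000)]
```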
Article
This thesis deals with three issues from numerical probability and mathematical finance. First, we study the L2-time regularity modulus of the Z-component of a Markovian BSDE with Lipschitz-continuous coefficients, but with irregular terminal function g. This modulus is linked to the approximation error of the Euler scheme. We show, in an optimal way, that the order of convergence is explicitly connected to the fractional regularity of g. Second, we propose a sequential Monte Carlo method in order to efficiently compute the price of a CDO tranche, based on sequential control variates. The recoveries are assumed to be i.i.d. random variables. Third, we analyze the tracking error related to the Delta-Gamma hedging strategy. The fractional regularity of the payoff function plays a crucial role in the choice of the trading dates, in order to achieve optimal rates of convergence.
... A natural alternative is then interacting particle methods. Though interacting particle systems are known to provide very efficient variance reductions in Monte Carlo approximations of rare events, these algorithms have only appeared recently in the credit risk literature, with for instance the articles of Carmona, Fouque and Vestal [16] and Carmona and Crepey [15]. In Chapter 21 of [12], the authors provide an overview of the main techniques and results of the application of interacting particle systems to credit risk analysis. ...
... All these results show the strengths of IPS-based Monte Carlo computations of small default probabilities, especially when other methods fail. A systematic comparison with importance sampling is provided in [15]. ...
Chapter
Full-text available
The aim of this article is to give a general introduction to the theory of interacting particle methods, and an overview of its applications to computational finance. We survey the main techniques and results on interacting particle systems and explain how they can be applied to the numerical solution of a variety of financial applications such as pricing complex path dependent European options, computing sensitivities, pricing American options or numerically solving partially observed control and estimation problems.
... Bearing in mind that real-life CVA applications involve portfolios of thousands of contracts, the computation times in Section 3.3 are still quite high. So the question arises of suitable variance reduction techniques, such as importance sampling or particle methods [13], which could be used to efficiently address the simulation of rare default events. A challenging question is how to possibly extend the technique of this paper to a more general CVA setup of bilateral counterparty risk under funding constraints [16, 17]. ...
Article
We devise simulation/regression numerical schemes for pricing the CVA on CDO tranches, where CVA stands for Credit Valuation Adjustment, or price correction accounting for the defaultability of a counterparty in an OTC derivatives transaction. This is done in the setup of a continuous-time Markov chain model of default times, in which dependence between credit names is represented by the possibility of simultaneous defaults. The main idea of the paper is to perform the nonlinear regressions which are used for computing conditional expectations, in the time variable for a given state of the model, rather than in the space variables at a given time in diffusive setups. This idea is formalized as a lemma which is valid in any continuous-time Markov chain model. It is then implemented on the targeted application of CVA computations on CDO tranches.
Chapter
Time series are ubiquitous in everyday manipulations of financial data. They are especially well suited to the nature of financial markets, and models and methods have been developed to capture time dependencies and produce forecasts. This is the main reason for their popularity. This chapter is devoted to a general introduction to the linear theory of time series, restricted to the univariate case. Later in the book, we will consider the multivariate case, and we will recast the analysis of time series data in the framework of state space models in order to consider and analyze nonlinear models.
Article
Full-text available
This paper discusses the main modeling approaches that have been developed for handling portfolio credit derivatives, with a focus on the question of hedging. In particular, the so-called top, top-down and bottom-up approaches are considered. We give some mathematical insights regarding the fact that information, namely the choice of a relevant model filtration, is the major modeling issue. In this regard, we examine the notion of thinning that was recently advocated for the purpose of hedging a multi-name derivative by single-name derivatives. We then illustrate by means of numerical simulations (semi-static hedging experiments) why and when the portfolio loss process may not be a 'sufficient statistic' for the purpose of valuation and hedging of portfolio credit risk.
Chapter
Full-text available
In this article we study a decoupled forward backward stochastic differential equation (FBSDE) and the associated system of partial integro-differential obstacle problems, in a flexible Markovian set-up made of a jump-diffusion with regimes. These equations are motivated by numerous applications in financial modeling, whence the title of the paper. This financial motivation is developed in the first part of the paper, which provides a synthetic view of the theory of pricing and hedging financial derivatives, using backward stochastic differential equations (BSDEs) as the main tool. In the second part of the paper, we establish the well-posedness of reflected BSDEs with jumps coming out of the pricing and hedging problems presented in the first part. We first provide a construction of a Markovian model made of a jump-diffusion-like component X interacting with a continuous-time Markov chain-like component N. The jump process N defines the so-called regime of the coefficients of X, whence the name of jump-diffusion with regimes for this model. Motivated by optimal stopping and optimal stopping game problems (pricing equations of American or game contingent claims), we introduce the related reflected and doubly reflected Markovian BSDEs, showing that they are well-posed in the sense that they have unique solutions, which depend continuously on their input data. As an aside, we establish the Markov property of the model. In the third part of the paper we derive the related variational inequality approach. We first introduce the systems of partial integro-differential variational inequalities formally associated to the reflected BSDEs, and we state suitable definitions of viscosity solutions for these problems, accounting for jumps and/or systems of equations. We then show that the state-processes (first components Y) of the solutions to the reflected BSDEs can be characterized in terms of the value functions of related optimal stopping or game problems, given as viscosity solutions with polynomial growth to related integro-differential obstacle problems. We further establish a comparison principle for semi-continuous viscosity solutions to these problems, which implies in particular the uniqueness of the viscosity solutions. This comparison principle is subsequently used for proving the convergence of stable, monotone and consistent approximation schemes to the value functions. Finally, in the last part of the paper we provide various extensions of the results needed for applications in finance to pricing problems involving discrete dividends on a financial derivative or on the underlying asset, as well as various forms of discrete path-dependence.
Article
The paper is concerned with the hedging of credit derivatives, in particular synthetic CDO tranches, in a dynamic portfolio credit risk model with spread risk and default contagion. The model is constructed and studied via Markov-chain techniques. We discuss the immunization of a CDO tranche against spread- and event risk in the Markov-chain model and compare the results with market-standard hedge ratios obtained in a Gauss copula model. In the main part of the paper we derive model-based dynamic hedging strategies and study their properties in numerical experiments.
Article
Full-text available
We propose an Interacting Particle System method to accurately calculate the distribution of the losses in a highly dimensional portfolio by using a selection and mutation algorithm. We demonstrate the efficiency of this method for computing rare default probabilities on a toy model for which we have explicit formulas. This method has the advantage of accurately computing small probabilities without requiring the user to compute a change of measure as in the Importance Sampling method. This method will be useful for computing the senior tranche spreads in Collateralized Debt Obligations (CDOs).
Article
Full-text available
Monte Carlo simulation is widely used to measure the credit risk in portfolios of loans, corporate bonds, and other instruments subject to possible default. The accurate measurement of credit risk is often a rare-event simulation problem because default probabilities are low for highly rated obligors and because risk management is particularly concerned with rare but significant losses resulting from a large number of defaults. This makes importance sampling (IS) potentially attractive. But the application of IS is complicated by the mechanisms used to model dependence between obligors, and capturing this dependence is essential to a portfolio view of credit risk. This paper provides an IS procedure for the widely used normal copula model of portfolio credit risk. The procedure has two parts: One applies IS conditional on a set of common factors affecting multiple obligors, the other applies IS to the factors themselves. The relative importance of the two parts of the procedure is determined by the strength of the dependence between obligors. We provide both theoretical and numerical support for the method.
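To make the two-part structure described in this abstract concrete, here is a minimal sketch of the outer part only, for a one-factor normal copula: the common factor Z is drawn from N(μ, 1) rather than N(0, 1) and each sample is reweighted by the likelihood ratio exp(-μZ + μ²/2); conditional defaults are then sampled from their exact conditional probabilities, so the inner exponential twisting of the cited procedure is deliberately omitted. All names and parameter choices are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

def is_factor_tail(p, rho, loss_threshold, mu, n_paths, seed=0):
    """Importance sampling on the common factor of a one-factor normal copula.
    The factor Z is drawn from N(mu, 1) instead of N(0, 1) and each sample is
    weighted by the likelihood ratio exp(-mu*Z + mu**2/2).  Conditionally on Z,
    defaults are sampled from their exact conditional probabilities (the inner
    exponential twisting step is omitted here).  `p` holds the marginal default
    probabilities and `rho` the common factor loading (assumptions)."""
    rng = np.random.default_rng(seed)
    thresholds = norm.ppf(np.asarray(p))            # per-name default thresholds
    estimates = np.empty(n_paths)
    for j in range(n_paths):
        z = rng.normal(mu, 1.0)
        lr = np.exp(-mu * z + 0.5 * mu ** 2)        # dN(0,1)/dN(mu,1) at z
        cond_p = norm.cdf((thresholds - rho * z) / np.sqrt(1.0 - rho ** 2))
        loss = (rng.random(len(thresholds)) < cond_p).sum()
        estimates[j] = lr * (loss >= loss_threshold)
    return estimates.mean(), estimates.std(ddof=1) / np.sqrt(n_paths)
```

The usual heuristic is to pick μ so that the loss level of interest becomes typical under the shifted factor distribution, which is where the variance reduction comes from.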
Conference Paper
Full-text available
We present novel sequential Monte Carlo (SMC) algorithms for the simulation of two broad classes of rare events, which are suitable for the estimation of tail probabilities and probability density functions in the regions of rare events, as well as the simulation of rare system trajectories. These methods have some connection with previously proposed importance sampling (IS) and interacting particle system (IPS) methodologies, particularly those of [8, 4], but differ significantly from previous approaches in a number of respects: especially in that they operate directly on the path space of the Markov process of interest.
Article
Full-text available
We consider reduced-form models for portfolio credit risk with interacting default intensities. In this class of models default intensities are modeled as functions of time and of the default state of the entire portfolio, so that phenomena such as default contagion or counterparty risk can be modeled explicitly. In the present paper this class of models is analyzed by Markov process techniques. We study in detail the pricing and the hedging of portfolio-related credit derivatives such as basket default swaps and collateralized debt obligations (CDOs) and discuss the calibration to market data.
Article
We describe a replicating strategy of CDO tranches based upon dynamic trading of the corresponding credit default swap index. The aggregate loss follows a homogeneous Markov chain associated with contagion effects. Default intensities depend upon the number of defaults and are calibrated onto an input loss surface. Numerical implementation can be carried out thanks to a recombining tree. We examine how input loss distributions drive the credit deltas. We find that the deltas of the equity tranche are lower than those computed in the standard base correlation framework. This is related to the dynamics of dependence between defaults.
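When the aggregate loss follows such a contagion Markov chain, with transition intensity λ(t, k) from k to k+1 defaults, the full loss distribution can also be obtained without simulation by time-stepping the forward Kolmogorov equations. The sketch below uses a simple explicit Euler discretization of these equations; the intensity function in the usage example and the step sizes are placeholder assumptions, and this is not the recombining-tree implementation of the cited paper.

```python
import numpy as np

def loss_distribution(lam, n_names, T, n_steps):
    """Explicit Euler scheme for the forward Kolmogorov equations of a
    pure-birth loss chain:
        d/dt P(L_t = k) = lam(t, k-1) P(L_t = k-1) - lam(t, k) P(L_t = k).
    `lam(t, k)` is the transition intensity from k to k+1 defaults (a
    placeholder for whatever local intensity has been calibrated)."""
    dt = T / n_steps
    p = np.zeros(n_names + 1)
    p[0] = 1.0                                   # no defaults at time 0
    for step in range(n_steps):
        t = step * dt
        rates = np.array([lam(t, k) for k in range(n_names + 1)])
        rates[-1] = 0.0                          # full-default state is absorbing
        outflow = rates * p
        p = p - dt * outflow                     # mass leaving each level k ...
        p[1:] += dt * outflow[:-1]               # ... arrives at level k + 1
    return p                                     # p[k] approximates P(L_T = k)

# Toy usage: contagion intensity increasing in the number of defaults (assumption).
dist = loss_distribution(lambda t, k: (125 - k) * 0.01 * np.exp(0.05 * k), 125, 1.0, 5000)
```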
Article
The pricing of collateralized debt obligations (CDOs) and other basket credit derivatives is contingent upon (i) a realistic modelling of the firms' default times and the correlation between them, and (ii) efficient computational methods for computing the portfolio loss distribution from the individual firms' default time distributions. Factor models, a widely used class of pricing models, are computationally tractable despite the large dimension of the pricing problem, thus satisfying issue (ii), but to have any hope of calibrating CDO data, numerically intense versions of these models are required. We revisit the intensity-based modelling setup for basket credit derivatives and, with the aforementioned issues in mind, we propose improvements (a) via incorporating fast mean-reverting stochastic volatility in the default intensity processes, and (b) by considering homogeneous groups within the original set of firms. This can be thought of as a hybrid of top-down and bottom-up approaches. We present a calibration example, including data in the midst of the 2008 financial credit crisis, and discuss the relative performance of the framework.
Article
We propose a stable non-parametric algorithm for the calibration of pricing models for portfolio credit derivatives: given a set of observations of market spreads for CDO tranches, we construct a risk-neutral default intensity process for the portfolio underlying the CDO which matches these observations, by looking for the risk-neutral loss process 'closest' to a prior loss process, subject to the calibration constraints. We formalize the problem in terms of minimization of relative entropy with respect to the prior under calibration constraints and use convex duality methods to solve the problem: the dual problem is shown to be an intensity control problem, characterized in terms of a Hamilton-Jacobi system of differential equations, for which we present an analytical solution. We illustrate our method on ITRAXX index data: our results reveal strong evidence for the dependence of loss transition rates on the past number of defaults, thus offering quantitative evidence for contagion effects in the risk-neutral loss process.
Article
The measurement of portfolio credit risk focuses on rare but significant large-loss events. This paper investigates rare event asymptotics for the loss distribution in the widely used Gaussian copula model of portfolio credit risk. We establish logarithmic limits for the tail of the loss distribution in two limiting regimes. The first limit examines the tail of the loss distribution at increasingly high loss thresholds; the second limiting regime is based on letting the individual loss probabilities decrease toward zero. Both limits are also based on letting the size of the portfolio increase. Our analysis reveals a qualitative distinction between the two cases: in the rare-default regime, the tail of the loss distribution decreases exponentially, but in the large-threshold regime the decay is consistent with a power law. This indicates that the dependence between defaults imposed by the Gaussian copula is qualitatively different for portfolios of high-quality and lower-quality credits.