Interacting Particle Systems for the
Computation of CDO Tranche Spreads
with Rare Defaults
Douglas Vestal, René Carmona, and Jean-Pierre Fouque
January 24, 2008
Abstract
We propose an Interacting Particle System method to accurately calculate the distribution of the losses in a high-dimensional portfolio by using a selection and mutation algorithm. We demonstrate the efficiency of this method for computing rare default probabilities on a toy model for which we have explicit formulas. This method has the advantage of accurately computing small probabilities without requiring the user to compute a change of measure as in the Importance Sampling method. This method will be useful for computing the senior tranche spreads in Collateralized Debt Obligations (CDOs).
1 Introduction
The past few years have seen an explosion in the credit risk market. At the same time, the field of credit risk and credit derivatives research has substantially increased. As the credit derivative products have grown in complexity, so has the need for fast and accurate numerical methods to price such derivatives.
In this paper, we consider the pricing of CDOs under the first passage model. A CDO is a credit derivative that pools together many different firms (125 is a typical contract) and sells exposure to different default levels of the portfolio, the so-called tranches. This segmentation of the risk in a portfolio enables the buyer to purchase only the tranches that they deem appropriate for their hedging positions. Since the tranche levels are fairly standardized, there are also new products, called bespoke CDOs, that sell customized default levels.

Work supported by NSF-FRG-DMS-0455982.
Department of Statistics and Applied Probability, University of California, Santa Barbara, CA 93106-3110, vestal@pstat.ucsb.edu.
Department of Operations Research & Financial Engineering, Princeton University, E-Quad, Princeton, NJ 08544, rcarmona@princeton.edu.
Department of Statistics and Applied Probability, University of California, Santa Barbara, CA 93106-3110, fouque@pstat.ucsb.edu.
The main difficulty in pricing CDOs is the high-dimensional nature of the problem. To accurately price the CDO, the distribution of the joint default for many names is needed. Even if the distribution of joint defaults is found explicitly, there is no guarantee that the expectations that are then needed to compute tranche spreads can be found analytically. Therefore, the user has to rely on numerical schemes to compute the CDO prices. Due to the high-dimensional nature of the problem, PDE based methods are ruled out and Monte Carlo (MC) methods are heavily relied upon. While MC methods are easy to implement in high-dimensional problems, they do suffer from slow convergence. In addition, due to the rare nature of joint defaults, the computational problem is exacerbated since many MC simulations are needed to observe the joint default of many names. Therefore, variance reduction and efficiency become very important for these MC methods.
The main variance reduction technique used is importance sampling. There have been many successful applications of importance sampling to credit risk [1, 12]. However, most authors have concentrated on multifactor Gaussian copula models or the reduced-form model for credit risk. We have not found any reference in the literature that applies importance sampling to the first passage model. In addition, the difficulty with implementing importance sampling is computing the change of measure under which one simulates the random variables. More information about importance sampling and the difficulties involved can be found in [11] and the references therein.
In this paper we are concerned with first passage models where it is impractical, if not impossible, to compute the importance sampling change of measure explicitly. Examples we have in mind are first passage models with stochastic volatility (see for instance [10]), and/or regime switching models with many factors.
In this situation, our solution is to use an Interacting Particle System (IPS) technique that can loosely be described as applying importance sampling techniques in path space. That is, we never compute a change of measure explicitly, but rather decide on a judicious choice of a potential function and a selection/mutation algorithm under the original measure such that the measure in path space converges to the desired "twisted" measure. This "twisted" measure will be exactly the change of measure that one would use in an importance sampling scheme. The advantage is that the random processes are always simulated under the original measure, but through the selection/mutation algorithm converge to the desired importance sampling measure. The IPS techniques and theoretical support we use to design our algorithm can be found in the book [5], where Pierre Del Moral developed the theory of IPS, and in [6], which provides applications as well as a one-dimensional toy model very similar to the one used in this paper.
In this paper, we are interested in the intersection of two events: a complicated model that doesn't lend itself to importance sampling, and the computation of rare probabilities under such a model. The rest of the paper is organized as follows. Section 2 discusses the first passage model that we will be using as a toy model. Section 3 gives an overview of Feynman-Kac measures and the associated IPS interpretation. Section 4 provides the algorithm we propose and outlines its implementation on our toy model. Section 5 discusses the numerical results of using IPS on our toy model and the comparison with traditional MC.
2 Problem Formulation
In contrast to intensity-based approaches to credit risk, where default is given by an exogenously defined process, default in the firm value model has a very nice economic appeal. The firm value approach, or structural model, models the total value of the firm that has issued the defaultable bonds. Typically, the value of the firm includes the value of the equity (shares) and the debt (bonds) of the firm [15]. There are two main approaches to modeling default in the firm value approach: one is that default can only happen at the maturity of the bond, and the second is that default can occur any time before maturity. The latter is referred to as the first-passage model and is the one we consider in this paper.
2.1 Review of the First Passage Model
We follow both [14, 2] and assume that the value of the firm, $S(t)$, follows a geometric Brownian motion. We also assume that interest rates are constant. Under the risk-neutral probability measure $P$ we have
$$dS(t) = rS(t)\,dt + \sigma S(t)\,dW(t), \qquad (1)$$
where $r$ is the risk-free interest rate and $\sigma$ is the constant volatility. At any time $t \le T$, the price of the unit-notional nondefaultable bond is $\Gamma(t,T) = e^{-r(T-t)}$. We also assume that at time $0$ the firm issued zero-coupon corporate bonds expiring at time $T$. The price of the defaultable bond at time $t \le T$ is denoted by $\bar{\Gamma}(t,T)$.
In [14] default was assumed to only occur at the expiration date $T$. Furthermore, default at time $T$ is triggered if the value of the firm is below some default threshold $B$; that is, if $S_T \le B$. Therefore, assuming zero recovery, the price of the defaultable bond satisfies
$$\bar{\Gamma}(t,T) = E\left[e^{-r(T-t)}\mathbf{1}_{S_T>B}\,\big|\,S_t\right] = \Gamma(t,T)\,P(S_T>B\,|\,S_t) = \Gamma(t,T)\,N(d_2),$$
where $N(\cdot)$ is the standard cumulative normal distribution function and
$$d_2 = \frac{\ln\frac{S_t}{B} + \left(r - \frac{1}{2}\sigma^2\right)(T-t)}{\sigma\sqrt{T-t}}.$$
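As a quick illustration of the formula above, the sketch below evaluates the Merton-style defaultable bond price $\Gamma(t,T)N(d_2)$; the function names and parameter values are our own and only the Python standard library is used.

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    # standard normal cumulative distribution function N(x)
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def merton_defaultable_bond(S_t, B, r, sigma, t, T):
    """Zero-recovery defaultable bond price when default can only occur at maturity T."""
    tau = T - t
    d2 = (log(S_t / B) + (r - 0.5 * sigma ** 2) * tau) / (sigma * sqrt(tau))
    gamma = exp(-r * tau)            # nondefaultable bond Gamma(t, T)
    return gamma * norm_cdf(d2)      # Gamma(t, T) * N(d2)

# a firm whose value is far above the threshold prices close to the riskless bond
print(merton_defaultable_bond(S_t=80.0, B=30.0, r=0.06, sigma=0.25, t=0.0, T=1.0))
```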
The next model, the Black-Cox model, is also known as the first passage approach. Developed in [2], default can occur anytime before the expiration of the bond and the barrier level, $B(t)$, is some deterministic function of time. In [2], they assume that the default barrier is given by the function $B(t) = Ke^{\eta t}$ (exponentially increasing barrier) with $K>0$ and $\eta\ge 0$. The default time $\tau$ is defined by
$$\tau = \inf\{t : S_t \le B(t)\}.$$
That is, default happens the first time the value of the firm passes below the default barrier. This has the very appealing economic intuition that default occurs when the value of the firm falls below its debt level, thereby not allowing the firm to pay off its debt.
Assuming zero recovery, we can price the defaultable bond by pricing a barrier option. Therefore,
$$\bar{\Gamma}(t,T) = \mathbf{1}_{\tau>t}\,E\left[e^{-r(T-t)}\mathbf{1}_{\tau>T}\,\big|\,S_t\right] = \mathbf{1}_{\tau>t}\,\Gamma(t,T)\,P(\tau>T\,|\,S_t) = \mathbf{1}_{\tau>t}\,\Gamma(t,T)\left(N(d_2^+) - \left(\frac{S_t}{B(t)}\right)^{p}N(d_2^-)\right),$$
where
$$d_2^{\pm} = \frac{\pm\ln\frac{S_t}{B(t)} + \left(r - \eta - \frac{1}{2}\sigma^2\right)(T-t)}{\sigma\sqrt{T-t}}, \qquad p = 1 - \frac{2(r-\eta)}{\sigma^2}.$$
In addition, we denote the probability of default for the firm between time $t$ and time $T$ by $P(t,T)$. Hence,
$$P(t,T) = 1 - N(d_2^+) + \left(\frac{S_t}{B(t)}\right)^{p}N(d_2^-). \qquad (2)$$
The yield spread of the defaultable bond is defined as
$$Y(t,T) = -\frac{1}{T-t}\ln\frac{\bar{\Gamma}(t,T)}{\Gamma(t,T)}.$$
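For concreteness, the default probability (2) and the corresponding yield spread can be computed as in the short sketch below; the helper names are ours, $\eta = 0$ gives the constant-barrier case used in the rest of the paper, and the spread assumes zero recovery and no prior default.

```python
from math import log, sqrt, erf

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def first_passage_default_prob(S_t, B_t, r, sigma, eta, t, T):
    """P(t, T) of equation (2): probability of hitting the barrier between t and T."""
    tau = T - t
    d2_plus = (log(S_t / B_t) + (r - eta - 0.5 * sigma ** 2) * tau) / (sigma * sqrt(tau))
    d2_minus = (-log(S_t / B_t) + (r - eta - 0.5 * sigma ** 2) * tau) / (sigma * sqrt(tau))
    p = 1.0 - 2.0 * (r - eta) / sigma ** 2
    return 1.0 - norm_cdf(d2_plus) + (S_t / B_t) ** p * norm_cdf(d2_minus)

def yield_spread(S_t, B_t, r, sigma, eta, t, T):
    """Y(t, T) for the zero-recovery bond, using bar-Gamma/Gamma = P(tau > T | S_t)."""
    survival = 1.0 - first_passage_default_prob(S_t, B_t, r, sigma, eta, t, T)
    return -log(survival) / (T - t)

print(first_passage_default_prob(S_t=80.0, B_t=36.0, r=0.06, sigma=0.25, eta=0.0, t=0.0, T=1.0))
print(yield_spread(S_t=80.0, B_t=36.0, r=0.06, sigma=0.25, eta=0.0, t=0.0, T=1.0))
```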
We remark that in the first passage model above, and in all firm value models, the yield spread for very short maturities goes to 0, in contrast to the empirical evidence found in [8]. However, by incorporating a fast mean-reverting stochastic volatility $\sigma_t$, the authors in [9] were able to raise the yield spread for short maturities.
2.2 Multiname Model
For our purposes, we consider a CDO written on $N$ firms under the first passage model. That is, we assume that the firm values for the $N$ names have the following dynamics,
$$dS_i(t) = rS_i(t)\,dt + \sigma_i S_i(t)\,dW_i(t), \qquad i = 1,\dots,N, \qquad (3)$$
where $r$ is the risk-free interest rate, $\sigma_i$ is the constant volatility, and the driving innovations $dW_i(t)$ are infinitesimal increments of Wiener processes $W_i$ with correlation
$$d\langle W_i, W_j\rangle_t = \rho_{ij}\,dt.$$
Each firm $i$ is also assumed to have a deterministic boundary process $B_i(t)$, and default for firm $i$ is given by
$$\tau_i = \inf\{t : S_i(t) \le B_i(t)\}. \qquad (4)$$
We define the loss function as
$$L(T) = \sum_{i=1}^{N}\mathbf{1}_{\{\tau_i\le T\}}. \qquad (5)$$
That is, $L(T)$ counts the number of firms among the $N$ firms that have defaulted before time $T$. We remark that in the independent homogeneous portfolio case, the distribution of $L(T)$ is Binomial$(N, P(0,T))$, where $P(0,T)$ is defined in (2).
It is well known (see for instance [7]) that the spread on a single tranche can be computed from the knowledge of expectations of the form
$$E\{(L(T_i) - K_j)^+\},$$
where $T_i$ is any of the coupon payment dates, and $K_j$ is proportional to the attachment points of the tranche.
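Once the probability mass function of $L(T_i)$ is available, such an expectation reduces to a finite sum; the tiny sketch below (hypothetical names and toy numbers) makes that reduction explicit.

```python
def tranche_expectation(pmf, K):
    """E[(L(T_i) - K)^+] from the loss pmf, where pmf[k] = P(L(T_i) = k)."""
    return sum(max(k - K, 0) * p_k for k, p_k in enumerate(pmf))

# toy pmf over 0..3 defaults and an attachment level proportional to K = 1
print(tranche_expectation([0.90, 0.07, 0.02, 0.01], K=1))
```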
The most interesting and challenging computational problem is when all of the names in the portfolio are correlated. In [18], the distribution of losses for $N=2$ is found by finding the distribution of the hitting times of a pair of correlated Brownian motions. However, the distribution is given in terms of modified Bessel functions and a tractable general result for $N>2$ is not available. Since the distribution of $L(T)$ is not known in the dependent case for $N>2$, Monte Carlo methods are generally used to calculate the spread on the tranches. Since $N$ is typically very large (125 names is a standard contract), PDE based methods are ruled out and one has to use Monte Carlo.
Instead of computing the spread on the tranches numerically, our goal is to calculate the probability mass function for $L(T)$, that is, to calculate
$$P(L(T) = i) = p_i(T), \qquad i = 0,\dots,N. \qquad (6)$$
In this manner, we will then be able to calculate all expectations that are a function of $L(T)$, not just the spreads. In addition, as the reader will see, our method is dynamically consistent in time so that we can actually calculate, for all coupon dates $T_j \le T$,
$$P(L(T_j) = i) = p_i(T_j), \qquad i = 0,\dots,N,$$
with one Monte Carlo run. This is in contrast to a lot of importance sampling techniques where the change of measure has a dependence on the final time through the Girsanov transformation, thereby requiring a different MC run for each coupon date.
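As a baseline for the comparisons carried out later, a plain Monte Carlo estimate of the pmf (6) in the independent homogeneous case is straightforward; the sketch below uses an Euler discretization of (3) with illustrative parameter values of our own, and checks $P(L(T)=0)$ against the Binomial$(N, P(0,T))$ benchmark mentioned above.

```python
import numpy as np
from math import log, sqrt, erf

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def single_name_pd(S0, B, r, sigma, T):
    # P(0, T) from equation (2) with a constant barrier (eta = 0)
    d2p = ( log(S0 / B) + (r - 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2m = (-log(S0 / B) + (r - 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    return 1.0 - norm_cdf(d2p) + (S0 / B) ** (1.0 - 2.0 * r / sigma ** 2) * norm_cdf(d2m)

def mc_loss_pmf(N=25, M=5_000, S0=90.0, B=36.0, r=0.06, sigma=0.3, T=1.0, dt=1.0 / 200, seed=0):
    """Crude MC estimate of P(L(T) = k): independent names, discretely monitored barrier."""
    rng = np.random.default_rng(seed)
    counts = np.zeros(N + 1)
    for _ in range(M):
        S = np.full(N, S0)
        defaulted = np.zeros(N, dtype=bool)
        for _ in range(int(round(T / dt))):
            dW = rng.standard_normal(N) * sqrt(dt)   # independent Brownian increments
            S = S + r * S * dt + sigma * S * dW      # Euler step for (3)
            defaulted |= (S <= B)                    # first-passage check on the time grid
        counts[int(defaulted.sum())] += 1
    return counts / M

pmf = mc_loss_pmf()
p = single_name_pd(90.0, 36.0, 0.06, 0.3, 1.0)
print("MC P(L(T)=0):", pmf[0], " Binomial benchmark:", (1.0 - p) ** 25)
```

Even in this easy independent case the estimator returns zero for essentially all values of $k$ beyond the first few, which is exactly the limitation the IPS method of the following sections is designed to address.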
3 Feynman-Kac Path Measures and IPS
Feynman-Kac path measures, and their subsequent interacting particle system
interpretation, are closely related to the stochastic filtering techniques used in
mathematical finance. In this paper, we adapt an original interacting particle
system developed in [6] to the computation of the probability mass function
(6). In [6], the authors develop a general interacting particle system method
for calculating the probabilities of rare events. For the sake of completeness,
we provide a fairly thorough overview of IPS as developed in [6] and the main
results that will be the foundation of this paper. Briefly, the method can be
described as formulating a Markov process and conducting a mutation-selection
algorithm so that the chain is “forced” into the rare event regime.
We suppose that we have a continuous time non-homogeneous Markov chain $(\tilde{X}_t)_{t\in[0,T]}$. However, for the variance analysis and to foreshadow the method ahead, we only consider the chain $(X_p)_{0\le p\le n} = (\tilde{X}_{pT/n})_{0\le p\le n}$, where $n$ is fixed. The chain $X_n$ takes values in some measurable state space $(E_n,\mathcal{E}_n)$ with Markov transitions $K_n(x_{n-1}, dx_n)$. We denote by $Y_n$ the historical process of $X_n$, that is,
$$Y_n \stackrel{\mathrm{def.}}{=} (X_0,\dots,X_n) \in F_n \stackrel{\mathrm{def.}}{=} (E_0\times\cdots\times E_n).$$
Let $M_n(y_{n-1}, dy_n)$ be the Markov transitions associated with the chain $Y_n$. Let $\mathcal{B}_b(E)$ denote the space of bounded, measurable functions with the uniform norm on some measurable space $(E,\mathcal{E})$. Then, given any $f_n\in\mathcal{B}_b(F_n)$ and the pair of potentials/transitions $(G_n, M_n)$, we have the following Feynman-Kac measure defined by
$$\gamma_n(f_n) = E\left[f_n(Y_n)\prod_{1\le k<n}G_k(Y_k)\right]. \qquad (7)$$
We denote by $\eta_n(\cdot)$ the normalized measure defined as
$$\eta_n(f_n) = \frac{E\left[f_n(Y_n)\prod_{1\le k<n}G_k(Y_k)\right]}{E\left[\prod_{1\le k<n}G_k(Y_k)\right]} = \gamma_n(f_n)/\gamma_n(1). \qquad (8)$$
In addition, in [6] they assume that the potential functions are chosen such that
$$\sup_{(y_n,\bar{y}_n)\in F_n^2} G_n(y_n)/G_n(\bar{y}_n) < \infty.$$
However, the authors note that this condition can be relaxed by considering traditional cut-off techniques, among other techniques (see [6, 5] for more details).
A very important observation is that
$$\gamma_{n+1}(1) = \gamma_n(G_n) = \eta_n(G_n)\,\gamma_n(1) = \prod_{p=1}^{n}\eta_p(G_p).$$
Therefore, given any bounded measurable function $f_n$, we have
$$\gamma_n(f_n) = \eta_n(f_n)\prod_{1\le p<n}\eta_p(G_p).$$
The above relationship is crucial because it allows us to express the un-normalized measure in terms of only the normalized "twisted" measures. In our study, we will also make use of the distribution flow $(\gamma_n^{*}, \eta_n^{*})$ defined exactly the same way as $(\gamma_n,\eta_n)$ except we replace $G_p$ by its inverse
$$G_p^{*} = 1/G_p.$$
Then, using the definitions of $\gamma_n$ and $\eta_n$, it is easy to see that $E[f_n(Y_n)]$ admits the following representation:
$$E[f_n(Y_n)] = E\left[f_n(Y_n)\prod_{1\le p<n}G_p^{*}(Y_p)\times\prod_{1\le p<n}G_p(Y_p)\right] = \gamma_n\left(f_n\prod_{1\le p<n}G_p^{*}\right) = \eta_n\left(f_n\prod_{1\le p<n}G_p^{*}\right)\prod_{1\le p<n}\eta_p(G_p).$$
Finally, it can be checked that the measures $(\eta_n)_{n\ge1}$ satisfy the nonlinear recursive equation
$$\eta_n = \Phi_n(\eta_{n-1}) \stackrel{\mathrm{def.}}{=} \int_{F_{n-1}}\frac{\eta_{n-1}(dy_{n-1})\,G_{n-1}(y_{n-1})\,M_n(y_{n-1},\cdot)}{\eta_{n-1}(G_{n-1})},$$
starting from $\eta_1 = M_1(x_0,\cdot)$.
3.1 IPS Interpretation and General Algorithm
The above definitions and results lend themselves to a very natural interacting path-particle interpretation. We denote the Markov chain taking values in the product space $F_n^M$ with transformation $\Phi_n$ by $\xi_n = (\xi_n^i)_{1\le i\le M}$, for each time $n\ge 1$. One constructs a numerical algorithm so that each path-particle
$$\xi_n^i = (\xi_{0,n}^i, \xi_{1,n}^i,\dots,\xi_{n,n}^i)\in F_n = (E_0\times\cdots\times E_n)$$
is sampled almost according to the twisted measure $\eta_n$.
We start with an initial configuration $\xi_1 = (\xi_1^i)_{1\le i\le M}$ that consists of $M$ independent and identically distributed random variables with distribution
$$\eta_1(d(y_0,y_1)) = M_1(x_0, d(y_0,y_1)) = \delta_{x_0}(dy_0)\,K_1(y_0,dy_1),$$
i.e., $\xi_1^i \stackrel{\mathrm{def.}}{=} (\xi_{0,1}^i,\xi_{1,1}^i) = (x_0,\xi_{1,1}^i)\in F_1 = (E_0\times E_1)$. Then, the elementary transitions $\xi_{n-1}\to\xi_n$ from $F_{n-1}^M$ into $F_n^M$ are defined by
$$P\left(\xi_n\in d(y_n^1,\dots,y_n^M)\,\big|\,\xi_{n-1}\right) = \prod_{j=1}^{M}\Phi_n\big(m(\xi_{n-1})\big)(dy_n^j), \qquad (9)$$
where $m(\xi_{n-1})\stackrel{\mathrm{def.}}{=}\frac{1}{M}\sum_{i=1}^{M}\delta_{\xi_{n-1}^i}$, and $d(y_n^1,\dots,y_n^M)$ is an infinitesimal neighborhood of the point $(y_n^1,\dots,y_n^M)\in F_n^M$. From the definition of $\Phi_n$, one can see that (9) is the overlapping of a simple selection and mutation transition,
$$\xi_{n-1}\in F_{n-1}^M \xrightarrow{\ \mathrm{selection}\ } \hat{\xi}_{n-1}\in F_{n-1}^M \xrightarrow{\ \mathrm{mutation}\ } \xi_n\in F_n^M.$$
The selection stage is performed by choosing randomly and independently $M$ path-particles
$$\hat{\xi}_{n-1}^i = (\hat{\xi}_{0,n-1}^i,\hat{\xi}_{1,n-1}^i,\dots,\hat{\xi}_{n-1,n-1}^i)\in F_{n-1},$$
according to the Boltzmann-Gibbs particle measure
$$\sum_{j=1}^{M}\frac{G_{n-1}(\xi_{0,n-1}^j,\dots,\xi_{n-1,n-1}^j)}{\sum_{i=1}^{M}G_{n-1}(\xi_{0,n-1}^i,\dots,\xi_{n-1,n-1}^i)}\,\delta_{(\xi_{0,n-1}^j,\dots,\xi_{n-1,n-1}^j)}.$$
Then, for the mutation stage, each selected path-particle $\hat{\xi}_{n-1}^i$ is extended by
$$\xi_n^i = \big((\xi_{0,n}^i,\dots,\xi_{n-1,n}^i),\,\xi_{n,n}^i\big) = \big((\hat{\xi}_{0,n-1}^i,\dots,\hat{\xi}_{n-1,n-1}^i),\,\xi_{n,n}^i\big)\in F_n = F_{n-1}\times E_n,$$
where $\xi_{n,n}^i$ is a random variable with distribution $K_n(\hat{\xi}_{n-1,n-1}^i,\cdot)$. In other words, the transition is made by applying the original kernel $K_n$. All of the mutations are performed independently. We just quote the results from [5, 6] in stating the weak convergence result:
$$\eta_n^M \stackrel{\mathrm{def.}}{=} \frac{1}{M}\sum_{i=1}^{M}\delta_{(\xi_{0,n}^i,\xi_{1,n}^i,\dots,\xi_{n,n}^i)} \xrightarrow{\ M\to\infty\ } \eta_n.$$
Furthermore, there are several propagation of chaos estimates that ensure that the $(\xi_{0,n}^i,\xi_{1,n}^i,\dots,\xi_{n,n}^i)$ are asymptotically independent and identically distributed with distribution $\eta_n$ [5]. Therefore, we can form the particle approximation $\gamma_n^M$ defined as
$$\gamma_n^M(f_n) = \eta_n^M(f_n)\prod_{1\le p<n}\eta_p^M(G_p).$$
Lemma 1 ([6]). $\gamma_n^M$ is an unbiased estimator for $\gamma_n$, in the sense that for any $p\ge1$ and $f_n\in\mathcal{B}_b(F_n)$ with $\|f_n\|\le 1$, we have
$$E\big(\gamma_n^M(f_n)\big) = \gamma_n(f_n),$$
and in addition
$$\sup_{M\ge1}\sqrt{M}\,E\big[\,|\gamma_n^M(f_n)-\gamma_n(f_n)|^p\,\big]^{1/p} \le c_p(n),$$
for some constant $c_p(n)<\infty$ whose value does not depend on the function $f_n$.

Proof. Refer to [6].
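Before specializing to credit portfolios, the selection/mutation recursion and the unbiased estimator $\gamma_n^M(f_n)=\eta_n^M\big(f_n\prod_{p<n}G_p^{*}\big)\prod_{p<n}\eta_p^M(G_p)$ can be exercised on a deliberately simple chain. The sketch below is our own toy example in the spirit of [6]: a standard Gaussian random walk, potentials $G_p = \exp[\beta(X_p - X_{p-1})]$, and the rare event $\{X_n \ge a\}$, for which the exact probability is available for comparison. The tilt parameter $\beta$ is a tuning choice, not something prescribed by the theory.

```python
import numpy as np
from math import erf, sqrt

def ips_rare_event(n=20, M=20_000, a=15.0, beta=0.75, seed=1):
    """IPS estimate of P(X_n >= a) for X_p = X_{p-1} + Z_p, Z_p ~ N(0,1), X_0 = 0."""
    rng = np.random.default_rng(seed)
    X = np.zeros(M)          # particle positions X_p
    W = np.zeros(M)          # positions at the previous selection time (the "parents")
    log_norm = 0.0           # accumulates log of prod_p eta_p^M(G_p)
    for p in range(1, n):
        X = X + rng.standard_normal(M)               # mutation under the original kernel
        g = np.exp(beta * (X - W))                   # potentials G_p(Y_p)
        log_norm += np.log(g.mean())                 # eta_p^M(G_p)
        idx = rng.choice(M, size=M, p=g / g.sum())   # Boltzmann-Gibbs (selection) resampling
        X = X[idx]
        W = X.copy()                                 # selected states become the new parents
    X = X + rng.standard_normal(M)                   # final mutation; no selection afterwards
    # f(Y_n) * prod_{1<=p<n} G_p^*(Y_p) = 1_{X_n >= a} * exp(-beta * (X_{n-1} - X_0)), X_0 = 0
    payoff = (X >= a) * np.exp(-beta * W)            # W now holds X_{n-1} along each genealogy
    return payoff.mean() * np.exp(log_norm)

exact = 0.5 * (1.0 - erf(15.0 / sqrt(2.0 * 20.0)))   # P(N(0, n) >= a) with n = 20, a = 15
print(ips_rare_event(), "vs exact", exact)
```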
4 Pricing CDOs using IPS
In this section, we present our adaptation of the interacting particle system approach to computing rare event probabilities in credit risk under the structural approach, by applying it to the following model.
Our Markov process is the $3\times N$-dimensional process $(\tilde{X}_t)_{t\in[0,T]}$ defined as
$$\tilde{X}_t = \left(S_1(t),\ \min_{u\le t}S_1(u),\ \mathbf{1}_{\tau_1\le t},\ S_2(t),\ \min_{u\le t}S_2(u),\ \mathbf{1}_{\tau_2\le t},\ \cdots,\ S_N(t),\ \min_{u\le t}S_N(u),\ \mathbf{1}_{\tau_N\le t}\right),$$
where the dynamics of $S_i(t)$ are given in equation (3). We assume a constant barrier $B_i$ for each firm $1\le i\le N$. While it is redundant to also include $\mathbf{1}_{\tau_i\le t}$ in the above expression since we also know $\min_{u\le t}S_i(u)$, we keep track of it because it will tell us the default time of the firm when we implement the algorithm numerically. We divide the time interval $[0,T]$ into $n$ equal intervals $[t_{i-1},t_i]$, $i = 1,2,\dots,n$. These are the times at which we stop and perform the selection step. We introduce the chain $(X_p)_{0\le p\le n} = (\tilde{X}_{pT/n})_{0\le p\le n}$, and the whole history of the chain is denoted by $Y_p = (X_0,\dots,X_p)$.
Since it is not possible to sample directly from the distribution of $(X_p)_{0\le p\le n}$ for $N>2$, we will have to apply an Euler scheme during the mutation stage; we let $\Delta t$ denote the sufficiently small time step used. In general $\Delta t$ will be chosen so that $\Delta t \ll T/n$.
Our general strategy is to find a potential function that increases the likelihood of default among the firms. In the IPS algorithm, given a particular choice of weight function $G(\cdot)$, particles with low scores are replaced by particles with high scores. Therefore, we would like to select a potential function $G(\cdot)$ that places a higher score on firms which have reduced their distance to default during the previous mutation step. Since the rare event in this case is that the minimum of the firm value falls below a certain level, we would like to put more emphasis on particles whose firm minimums are decreasing during a mutation step. Therefore, we fix some parameter $\alpha<0$, and define the potential function
$$G_\alpha(Y_p) = \exp\big[\alpha\big(V(X_p) - V(X_{p-1})\big)\big], \qquad (10)$$
where
$$V(X_p) = \sum_{i=1}^{N}\log\Big(\min_{u\le t_p}S_i(u)\Big).$$
The choice of $\alpha<0$ may seem peculiar initially, but it is chosen to be negative because the potential function $G_\alpha(Y_p)$ can be written in the form
$$G_\alpha(Y_p) = \exp\big[\alpha\big(V(X_p)-V(X_{p-1})\big)\big] = \exp\left[\sum_{i=1}^{N}\alpha\log\left(\min_{u\le t_p}S_i(u)\Big/\min_{u\le t_{p-1}}S_i(u)\right)\right],$$
where
$$\log\left(\min_{u\le t_p}S_i(u)\Big/\min_{u\le t_{p-1}}S_i(u)\right)\le 0.$$
Therefore, to place more weight on the firms whose minimum has decreased, we must multiply by $\alpha<0$. In addition, by choosing the weight function as we did, if the minimum value did not decrease during the last mutation, then that firm has a small relative contribution to the total empirical measure. Therefore, we will be putting more weight onto path-particles whose minimum has decreased the most between two mutation times.
In addition, there are several computational advantages to choosing the weight function above. Chief among them are:
1. Our choice of weight function, while not unique in this regard, will only require us to keep track of $(X_{p-1}, X_p)$ instead of the full history $Y_p = (X_0, X_1,\dots,X_p)$, thereby minimizing the increased dimensionality of using an IPS scheme.
2. In addition, our weight function has the added advantage of having the property that $\prod_{1\le k<p}G_\alpha(Y_k) = \exp[\alpha(V(X_{p-1})-V(X_0))]$, thereby ensuring that the Feynman-Kac measures defined in equations (7) and (8) are simpler to analyze.
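In code, the weight attached to one path-particle over a mutation step only involves the running minima before and after that step. The following short sketch (function names are ours; $\alpha$ is the tuning parameter discussed above) computes $G_\alpha(Y_p)$:

```python
import numpy as np

def potential_weight(running_min_prev, running_min_curr, alpha=-18.5):
    """G_alpha(Y_p) = exp[alpha * (V(X_p) - V(X_{p-1}))] for a single particle,
    where V(X_p) = sum_i log(min_{u <= t_p} S_i(u)) and alpha < 0 rewards falling minima."""
    V_prev = np.sum(np.log(running_min_prev))
    V_curr = np.sum(np.log(running_min_curr))
    return np.exp(alpha * (V_curr - V_prev))

# a particle whose firm minima dropped during the last mutation gets a larger weight
print(potential_weight(np.array([80.0, 80.0]), np.array([70.0, 78.0])))  # minima fell
print(potential_weight(np.array([80.0, 80.0]), np.array([80.0, 80.0])))  # unchanged -> weight 1
```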
4.1 Detailed IPS Algorithm
Our algorithm is built with the weight function defined in equation (10).
Initialization. We start with $M$ identical copies, $\hat{X}_0^{(i)}$, $1\le i\le M$, of the initial condition $X_0$. That is,
$$\hat{X}_0^{(i)} = \big(S_1(0), S_1(0), 0,\ S_2(0), S_2(0), 0,\ \cdots,\ S_N(0), S_N(0), 0\big), \qquad 1\le i\le M.$$
We also have a set of "parents", $\hat{W}_0^{(i)}$, defined by $\hat{W}_0^{(i)} = \hat{X}_0^{(i)}$. We denote $V_0 \stackrel{\mathrm{def.}}{=} V(\hat{W}_0^{(i)})$. This forms a set of $M$ particles $(\hat{W}_0^{(i)}, \hat{X}_0^{(i)})$, $1\le i\le M$.

Now suppose that at time $p$, we have the set of $M$ particles $(\hat{W}_p^{(i)}, \hat{X}_p^{(i)})$, $1\le i\le M$.
Selection Stage

We first compute the normalizing constant
$$\hat{\eta}_p^M = \frac{1}{M}\sum_{i=1}^{M}\exp\Big[\alpha\Big(V(\hat{X}_p^{(i)}) - V(\hat{W}_p^{(i)})\Big)\Big]. \qquad (11)$$
Then, we choose independently $M$ particles according to the empirical distribution
$$\eta_p^M(d\check{W}, d\check{X}) = \frac{1}{M\hat{\eta}_p^M}\sum_{i=1}^{M}\exp\Big[\alpha\Big(V(\hat{X}_p^{(i)}) - V(\hat{W}_p^{(i)})\Big)\Big]\,\delta_{(\hat{W}_p^{(i)},\hat{X}_p^{(i)})}(d\check{W}, d\check{X}). \qquad (12)$$
The particles that are selected are denoted $(\check{W}_p^{(i)}, \check{X}_p^{(i)})$.
Mutation Stage

For each of the selected particles, $(\check{W}_p^{(i)}, \check{X}_p^{(i)})$, we apply an Euler scheme from time $t_p$ to time $t_{p+1}$ with step size $\Delta t$ to each $\check{X}_p^{(i)}$, so that $\check{X}_p^{(i)}$ becomes $\hat{X}_{p+1}^{(i)}$. We then set $\hat{W}_{p+1}^{(i)} = \check{X}_p^{(i)}$. It should be noted that each of the particles is evolved independently and that the true dynamics (given in equation (3)) of $X_p$ are applied rather than some other measure. It is this fact that separates IPS from IS (Importance Sampling).

Then let
$$f(\hat{X}_n^{(i)}) = \sum_{j=1}^{N}\mathbf{1}_{\{\min_{u\le T}S_j^{(i)}(u)\le B_j\}}$$
denote the number of firms that have defaulted by time $T$ for the $i$th particle. Then, the estimator for $P(L(T)=k) = p_k(T)$ is given by
$$P_k^M(T) = \left[\frac{1}{M}\sum_{i=1}^{M}\mathbf{1}_{\{f(\hat{X}_n^{(i)})=k\}}\exp\Big(-\alpha\big(V(\hat{W}_n^{(i)}) - V_0\big)\Big)\right]\times\left[\prod_{p=0}^{n-1}\hat{\eta}_p^M\right]. \qquad (13)$$
This estimator is unbiased in the sense that $E[P_k^M(T)] = p_k(T)$. The unbiasedness follows directly from Lemma 1.
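The whole procedure fits in a short routine. The sketch below is one possible reading of the algorithm, with illustrative parameter values and function names of our own: $M$ particles carry the current firm values and running minima, mutations are Euler steps of (3) under the original dynamics (a Cholesky factor generates the correlated increments), selections resample according to (11)-(12), and the pmf estimate is assembled as in (13). It is a starting point rather than a reference implementation.

```python
import numpy as np

def ips_loss_pmf(N=25, M=10_000, n=20, T=1.0, dt=1e-3,
                 S0=90.0, B=36.0, r=0.06, sigma=0.3, rho=0.4,
                 alpha=-18.5 / 25, seed=0):
    rng = np.random.default_rng(seed)
    corr = np.full((N, N), rho) + (1.0 - rho) * np.eye(N)
    chol = np.linalg.cholesky(corr)                 # for correlated Brownian increments
    steps_per_block = int(round(T / (n * dt)))      # Euler steps between two selection times

    S = np.full((M, N), S0)                         # current firm values S_i(t)
    running_min = np.full((M, N), S0)               # min_{u <= t} S_i(u) per particle and firm
    V_parent = np.full(M, N * np.log(S0))           # V(W_p) of the parents; equals V_0 at p = 0
    log_norm = 0.0                                  # accumulates log of prod_p hat-eta_p^M

    for p in range(n):
        # mutation: Euler scheme for (3) under the original measure
        for _ in range(steps_per_block):
            dW = (rng.standard_normal((M, N)) @ chol.T) * np.sqrt(dt)
            S = S + r * S * dt + sigma * S * dW
            running_min = np.minimum(running_min, S)
        V = np.log(running_min).sum(axis=1)         # V(X_{p+1}) for every particle

        if p < n - 1:                               # selection between mutations only
            g = np.exp(alpha * (V - V_parent))      # weights of (11)-(12)
            log_norm += np.log(g.mean())            # hat-eta_p^M
            idx = rng.choice(M, size=M, p=g / g.sum())
            S, running_min, V = S[idx], running_min[idx], V[idx]
            V_parent = V                            # the selected states become the new parents

    # estimator (13): unnormalize with exp(-alpha (V(W_n) - V_0)) times the product of hat-eta_p
    defaults = (running_min <= B).sum(axis=1)       # f(X_n^(i)) = number of defaulted firms
    correction = np.exp(-alpha * (V_parent - N * np.log(S0)))
    pmf = np.array([np.mean((defaults == k) * correction) for k in range(N + 1)])
    return pmf * np.exp(log_norm)

pmf = ips_loss_pmf(M=2_000, dt=1.0 / 200)           # smaller run for a quick look
print(pmf[:11])                                     # the estimates need not sum exactly to 1
```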
4.2 Single-Name Case: Variance Analysis
We analyze the variance of the estimator in equation (13) for the single-name case. Therefore, we take $N=1$, with constant barrier $B$, and we are interested in computing, using IPS, the probability of default before maturity $T$. That is, we compute
$$P_B(0,T) = P\Big(\min_{u\le T}S(u)\le B\Big) = E\big[\mathbf{1}_{\min_{u\le T}S(u)\le B}\big].$$
Of course, we have an explicit formula for $P_B(0,T)$ given by (2), and this case will be precisely our toy model used to compare the variance for IPS and pure MC. In the more general case, where $N$ is large and the names are correlated, we will provide an empirical comparison. It should be noted that we are only interested in values of $B$ that make the above event rare.
We remark that it is a standard result that the variance associated with the traditional Monte Carlo method for computing $P_B(0,T)$ is $P_B(0,T)(1-P_B(0,T))$. We also remark that for a single name the Markov chain $(X_p)_{0\le p\le n}$ defined in Section 4 simplifies to
$$X_p = \Big(S(t_p),\ \min_{u\le t_p}S(u),\ \mathbf{1}_{\tau\le t_p}\Big).$$
Then, following the setup described in Section 3, we see that the rare event probability $P_B(0,T)$ has the following Feynman-Kac representation:
$$P_B(0,T) = \gamma_n\big(L_n^{(B)}(1)\big),$$
where $L_n^{(B)}(1)$ is given by the weighted indicator function defined for any path $y_n = (x_0,\dots,x_n)\in F_n$ by
$$L_n^{(B)}(1)(y_n) = L_n^{(B)}(1)(x_0,\dots,x_n) = \mathbf{1}_{\{\min_{u\le T}S(u)\le B\}}\prod_{1\le p<n}G_p^{*}(x_0,\dots,x_p) = \mathbf{1}_{\{\min_{u\le T}S(u)\le B\}}\,e^{-\alpha(V(x_{n-1})-V(x_0))} = \mathbf{1}_{\{\min_{u\le T}S(u)\le B\}}\,e^{-\alpha\log\big(\min_{u\le t_{n-1}}S(u)/S_0\big)}.$$
Also, notice that $\|L_n^{(B)}(1)(y_n)\|\le 1$ since $\log\big(\min_{u\le t_{n-1}}S(u)/S_0\big)\le 0$ and $-\alpha>0$ by assumption.
Next, following the IPS selection-mutation algorithm outlined in Section 4.1, we form the estimator
$$P_M^B(0,T) = \gamma_n^M\big(L_n^{(B)}(1)\big) = \eta_n^M\big(L_n^{(B)}(1)\big)\prod_{1\le p<n}\eta_p^M(G_p). \qquad (14)$$
By Lemma 1, $P_M^B(0,T)$ is an unbiased consistent estimator of $P_B(0,T)$. While many estimators are unbiased, the key to determining the efficiency of our estimator is to look at its variance. As such, we have the following central limit theorem for our estimator.
Theorem 1. The estimator $P_M^B(0,T)$ given in equation (14) is unbiased, and it satisfies the central limit theorem
$$\sqrt{M}\,\big(P_M^B(0,T) - P_B(0,T)\big) \xrightarrow{\ M\to\infty\ } N\big(0,\ \sigma_n^B(\alpha)^2\big),$$
with the asymptotic variance
$$\sigma_n^B(\alpha)^2 = \sum_{p=1}^{n}\Big[E\Big\{e^{\alpha\log(\min_{u\le t_{p-1}}S(u))}\Big\}\times E\Big\{P_{B,p,n}^2\,e^{-\alpha\log(\min_{u\le t_{p-1}}S(u))}\Big\} - P_B(0,T)^2\Big], \qquad (15)$$
where $P_{B,p,n}$ is the collection of functions defined by
$$P_{B,p,n}(x) = E\big[\mathbf{1}_{\min_{t\le T}S(t)\le B}\,\big|\,X_p = x\big],$$
and $P_B(0,T)$ is given by (2).

Proof. The proof follows directly by applying Theorem 2.3 in [6] with the weight function that we have defined in (10).
In the constant volatility single-name case, the asymptotic variance $\sigma_n^B(\alpha)^2$ can be obtained explicitly in terms of double and triple integrals with respect to explicit densities. This will be used in our comparison of variances for IPS and pure MC in Section 5.1. The details of these explicit formulas are given in Appendix A.

As shown numerically in the next section, the variance for IPS is of order $p^2$ with $p = P_B(0,T)$ (small in the regime of interest), in contrast to being of order $p$ for the direct MC simulation. This is indeed a very significant variance reduction in the regime of small $p$, as already observed in [6] in a different context.
5 Numerical Results
In this section we investigate numerically the results of implementing the IPS
procedure for estimating the probability mass function of the loss function for
single names and multinames.
5.1 Single-Name Case
For the single-name case, we compute the probability of default for different values of the barrier using IPS and traditional Monte Carlo. In addition, for each method, we implemented the continuity correction for the barrier level described in [4] to account for the fact that we are using a discrete approximation to the continuous barrier for both IPS and MC. For the different values of the barrier we use, we can calculate the exact probability of default from equation (2). The following are the parameters we used for both IPS and MC:

r = .06, σ = .25, S_0 = 80, Δt = .001, T = 1, n (number of mutations in IPS) = 20, M = 20000.
The number of simulations $M$ is the same for IPS and MC, and from an empirical investigation we chose $\alpha = -18.5$ in the IPS method. The results are shown in Figure 1. Indeed, probabilities of order $10^{-14}$ will be irrelevant in the context of default probabilities, but the user can see that IPS is capturing the rare-event probabilities for the single-name case whereas traditional Monte Carlo is not able to capture these values below $10^{-4}$.
Figure 1: Default probabilities for different barrier levels for IPS and MC

In Figure 2 we show how the variance decreases with the barrier level, and therefore with the default probability, for MC and IPS. In the IPS case the variance is obtained empirically and using the integral formulas derived in the Appendix. We deduce that the variance for IPS decreases as $p^2$ ($p$ is the default probability), as opposed to $p$ in the case of MC simulation.
Each MC and IPS simulation gives an estimate of the probability of default (whose theoretical value does not depend on the method) as well as an estimate of the standard deviation of the estimator (whose theoretical value does depend on the method). Therefore, it is instructive from a practical point of view to compare the two methods by comparing the empirical ratios of their standard deviation to the probability of default for each method. If $p(B)$ is the probability of default for a certain barrier level $B$, then the standard deviation, $p_2(B)$, for traditional MC is given by
$$p_2^{MC}(B) = \sqrt{p(B)}\times\sqrt{1-p(B)},$$
and the theoretical ratio for MC is given by
$$\frac{p_2^{MC}(B)}{p(B)} = \frac{\sqrt{1-p(B)}}{\sqrt{p(B)}},$$
which can be computed using (2).

For IPS, the corresponding ratio is
$$\frac{p_2^{IPS}(B)}{p(B)} = \frac{\sigma_n^B(\alpha)}{p(B)},$$
where $\sigma_n^B(\alpha)$ is given in Theorem 1. It is computed using the formula given in Corollary 1 in the Appendix.

Figure 2: Variances for different barrier levels for IPS and MC

In Figure 3 one sees that there are specific regimes where it is more efficient to use IPS as opposed to traditional MC for certain values of the barrier level (below $.60\times S_0$). This is to be expected since IPS is well suited to rare event probabilities whereas MC is not.

Figure 3: Standard deviation-to-probability ratio for MC and IPS
5.2 Multiname Case
For the multiname case, we tested using 25 firms ($N=25$). In addition, we took all of the firms to be homogeneous, meaning that they have the same parameters, starting value, and default barrier in (3). The following are the parameters that we used:

r = .06, σ_i = .3, S_i(0) = 90, B_i = 36, Δt = .001, T = 1, n = 20, M = 10000.
In addition, we took the correlation between the driving Brownian motions to be $\rho_{ij} = .4$ for $i\ne j$. The above parameters give, in the independent case, a probability of default of .0018, a realistic default probability for highly rated firms. For the IPS simulation we used $\alpha = -18.5/25$ so that it is consistent with the value used in the single-name case.
The following figure illustrates the difference between using MC and IPS to estimate the loss probability mass function. That is, we calculate numerically $P(L(T)=k)$, where $L(T)$ is the number of firms that have defaulted before time $T$, as defined in (5). In Figure 4 we plot the pmf on a log scale more adapted to our range of values. One can see that the IPS method is picking up more of the tail events for the pmf of the loss function and the distinction between IPS and MC becomes clear. In this regime, MC is only good for estimating the pmf for $K = 0,1,2,3,4$, but IPS is good for estimating the pmf for $K = 0,1,\dots,10$. Considering the fact that most contracts are written only on the first 40% of losses and $10 = .4\times 25$, we see that IPS does a good job of describing the rare, but economically (in the sense of contracts) significant, events.
Figure 4: Probability mass function of the Loss shown in log-scale, with $N = 25$

6 Conclusion

In this paper, we adapted an original IPS approach to the computation of rare probabilities [6] to the field of credit risk under the first passage model. We showed that with our choice of weight function, IPS performs better than traditional MC methods for computing the probability of tail events in credit risk under a simple toy model. We also derived an explicit formula (up to double and triple integrals) for the asymptotic variance of the IPS estimator in the single-name case. In addition, we also showed that there are specific regimes where IPS is better suited to credit risk than traditional MC methods. For practical purposes our algorithm can be adapted to more complicated models, for instance, with stochastic volatility [10].
Appendix

A Formulas for the Variance in the Single-Name Constant Volatility Case

The asymptotic variance $\sigma_n^B(\alpha)^2$ of the IPS estimator is given by (15) in Theorem 1. As a corollary, we deduce in this Appendix the explicit formulas used in Section 5.1.
Corollary 1. Given the choice of weight function in (10) with $\alpha<0$, and constant barrier $B$, we have that
$$\sigma_n^B(\alpha)^2 = \sum_{p=1}^{n}\Bigg[f(\alpha,t_{p-1})\,e^{-\frac{1}{2}\theta^2 t_p}\Bigg(\int_{-\infty}^{\frac{1}{\sigma}\ln(B/S_0)}\int_{y}^{0}\int_{y}^{\infty}e^{-\alpha\sigma x+\theta z}\,\Psi_{(t_{p-1},t_p)}(x,y,z)\,dz\,dx\,dy$$
$$\qquad + \int_{-\infty}^{\frac{1}{\sigma}\ln(B/S_0)}\int_{x}^{\infty}e^{-\alpha\sigma x+\theta z}\,\Upsilon_{(t_{p-1},t_p)}(x,z)\,dz\,dx$$
$$\qquad + \int_{\frac{1}{\sigma}\ln(B/S_0)}^{0}\int_{y}^{0}\int_{y}^{\infty}h(t_p,z)^2\,e^{-\alpha\sigma x+\theta z}\,\Psi_{(t_{p-1},t_p)}(x,y,z)\,dz\,dx\,dy$$
$$\qquad + \int_{\frac{1}{\sigma}\ln(B/S_0)}^{0}\int_{x}^{\infty}h(t_p,z)^2\,e^{-\alpha\sigma x+\theta z}\,\Upsilon_{(t_{p-1},t_p)}(x,z)\,dz\,dx\Bigg)$$
$$\qquad - \left(1 - N(d_2^+(0,0)) + \left(\frac{S_0}{B}\right)^{1-\frac{2r}{\sigma^2}}N(d_2^-(0,0))\right)^{2}\Bigg], \qquad (16)$$
where
$$f(\alpha,t_{p-1}) = \frac{\alpha\sigma+\theta}{\alpha\sigma+2\theta}\,e^{\alpha\sigma(\alpha\sigma+2\theta)t_{p-1}/2}\,\mathrm{Erfc}\!\left((\alpha\sigma+\theta)\sqrt{\tfrac{t_{p-1}}{2}}\right) + \frac{\theta}{\alpha\sigma+2\theta}\,\mathrm{Erfc}\!\left(-\theta\sqrt{\tfrac{t_{p-1}}{2}}\right),$$
$$\mathrm{Erfc}(x) = \frac{2}{\sqrt{\pi}}\int_{x}^{\infty}e^{-v^2}\,dv, \qquad \theta = \frac{r-\frac{1}{2}\sigma^2}{\sigma},$$
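The closed form for $f(\alpha,t_{p-1})$ is the moment generating function of the running minimum of a Brownian motion with drift $\theta$, and it is easy to check against a crude simulation; the sketch below (names and parameter values are ours, and the discrete monitoring introduces a small bias) compares the two.

```python
import numpy as np
from math import erfc, exp, sqrt

def f_closed_form(alpha, t, sigma=0.25, r=0.06):
    theta = (r - 0.5 * sigma ** 2) / sigma
    a = alpha * sigma
    return ((a + theta) / (a + 2.0 * theta) * exp(a * (a + 2.0 * theta) * t / 2.0)
            * erfc((a + theta) * sqrt(t / 2.0))
            + theta / (a + 2.0 * theta) * erfc(-theta * sqrt(t / 2.0)))

def f_monte_carlo(alpha, t, sigma=0.25, r=0.06, n_steps=1_000, n_paths=100_000, seed=0):
    """E[exp(alpha * sigma * min_{u<=t} W_hat_u)] with W_hat_u = theta*u + W_u."""
    rng = np.random.default_rng(seed)
    theta = (r - 0.5 * sigma ** 2) / sigma
    dt = t / n_steps
    pos = np.zeros(n_paths)
    running_min = np.zeros(n_paths)          # the minimum over u <= t, starting at W_hat_0 = 0
    for _ in range(n_steps):
        pos = pos + theta * dt + sqrt(dt) * rng.standard_normal(n_paths)
        running_min = np.minimum(running_min, pos)
    return np.exp(alpha * sigma * running_min).mean()

print(f_closed_form(-2.0, 1.0), f_monte_carlo(-2.0, 1.0))
```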
$$\Psi_{(t_{p-1},t_p)}(x,y,z) = \frac{2\sqrt{2}\,e^{-(2x-2y+z)^2/2t_p}}{\sqrt{\pi\,t_p\,t_{p-1}^2\,(t_p-t_{p-1})^2}}\times\Bigg[\frac{\sigma_{(t_{p-1},t_p)}\big(\mu^{(1)}_{(t_{p-1},t_p)}(x,y,z) - x + z - 2y\big)}{\sqrt{2\pi}}\,e^{-\big(x-\mu^{(1)}_{(t_{p-1},t_p)}(x,y,z)\big)^2/2\sigma^2_{(t_{p-1},t_p)}}$$
$$\qquad + \Big(\sigma^2_{(t_{p-1},t_p)} + \big(\mu^{(1)}_{(t_{p-1},t_p)}(x,y,z)\big)^2 + \mu^{(1)}_{(t_{p-1},t_p)}(x,y,z)\,(z-2y-2x) - 2x(z-2y)\Big)\left(1-\Phi\!\left(\frac{x-\mu^{(1)}_{(t_{p-1},t_p)}(x,y,z)}{\sigma_{(t_{p-1},t_p)}}\right)\right)\Bigg],$$

$$\Upsilon_{(t_{p-1},t_p)}(x,z) = \sqrt{\frac{2}{\pi\,t_p\,t_{p-1}^2}}\times\Bigg[e^{-(2x-z)^2/2t_p}\left(\frac{\sigma_{(t_{p-1},t_p)}}{\sqrt{2\pi}}\,e^{-\big(x-\mu^{(2)}_{(t_{p-1},t_p)}(x,z)\big)^2/2\sigma^2_{(t_{p-1},t_p)}} + \big(\mu^{(2)}_{(t_{p-1},t_p)}(x,z) - 2x\big)\left(1-\Phi\!\left(\frac{x-\mu^{(2)}_{(t_{p-1},t_p)}(x,z)}{\sigma_{(t_{p-1},t_p)}}\right)\right)\right)$$
$$\qquad - e^{-z^2/2t_p}\left(\frac{\sigma_{(t_{p-1},t_p)}}{\sqrt{2\pi}}\,e^{-\big(x-\mu^{(3)}_{(t_{p-1},t_p)}(x,z)\big)^2/2\sigma^2_{(t_{p-1},t_p)}} + \big(\mu^{(3)}_{(t_{p-1},t_p)}(x,z) - 2x\big)\left(1-\Phi\!\left(\frac{x-\mu^{(3)}_{(t_{p-1},t_p)}(x,z)}{\sigma_{(t_{p-1},t_p)}}\right)\right)\right)\Bigg],$$
$$\Phi(x) = \int_{-\infty}^{x}\frac{1}{\sqrt{2\pi}}\,e^{-y^2/2}\,dy,$$
$$\mu^{(1)}_{(t_{p-1},t_p)}(x,y,z) = \frac{2x(t_p-t_{p-1}) + (2y-z)\,t_{p-1}}{t_p}, \qquad \mu^{(2)}_{(t_{p-1},t_p)}(x,z) = \frac{2x(t_p-t_{p-1}) + z\,t_{p-1}}{t_p}, \qquad \mu^{(3)}_{(t_{p-1},t_p)}(x,z) = \frac{2x\,t_p - z\,t_{p-1}}{t_p},$$
$$\sigma^2_{(t_{p-1},t_p)} = \frac{t_{p-1}}{t_p}\,(t_p - t_{p-1}),$$
$$h(t_p,z) = \left(1 - N\big(d_2^+(t_p,z)\big) + \left(\frac{S_0 e^{\sigma z}}{B}\right)^{1-\frac{2r}{\sigma^2}}N\big(d_2^-(t_p,z)\big)\right),$$
$$d_2^{\pm}(t_p,z) = \frac{\pm\big(\ln(S_0) + \sigma z - \ln(B)\big) + \big(r-\frac{1}{2}\sigma^2\big)(T-t_p)}{\sigma\sqrt{T-t_p}}.$$
Proof. As can be seen in (15), we need to compute the joint distribution of
$$\Big(\min_{u\le t_{p-1}}S(u),\ \min_{u\le t_p}S(u),\ S(t_p)\Big).$$
First, recall that since $S(u) = S_0 e^{(r-\frac{1}{2}\sigma^2)u + \sigma W_u}$, $\sigma>0$ by assumption, and $\log$ is an increasing function, we have:
$$E\Big\{e^{\alpha\log(\min_{u\le t_{p-1}}S(u))}\Big\} = E\Big\{e^{\alpha\log\big(\min_{u\le t_{p-1}}S_0 e^{(r-\frac{1}{2}\sigma^2)u+\sigma W_u}\big)}\Big\} = S_0^{\alpha}\,E\Big\{e^{\alpha\big(\min_{u\le t_{p-1}}(r-\frac{1}{2}\sigma^2)u+\sigma W_u\big)}\Big\} = S_0^{\alpha}\,E\Big\{e^{\alpha\sigma\big(\min_{u\le t_{p-1}}\widehat{W}_u\big)}\Big\},$$
where $\widehat{W}_u = \theta u + W_u$ is a Brownian motion with deterministic drift $\theta = \frac{r-\frac{1}{2}\sigma^2}{\sigma}$. Therefore, the above computation simplifies to computing the moment generating function of the running minimum of Brownian motion with drift. This formula is well known (see for instance [3]) and so we have
$$E\Big\{e^{\alpha\log(\min_{u\le t_{p-1}}S(u))}\Big\} = S_0^{\alpha}\left[\frac{\alpha\sigma+\theta}{\alpha\sigma+2\theta}\,e^{\alpha\sigma(\alpha\sigma+2\theta)t_{p-1}/2}\,\mathrm{Erfc}\!\left((\alpha\sigma+\theta)\sqrt{\tfrac{t_{p-1}}{2}}\right) + \frac{\theta}{\alpha\sigma+2\theta}\,\mathrm{Erfc}\!\left(-\theta\sqrt{\tfrac{t_{p-1}}{2}}\right)\right] := S_0^{\alpha}\,f(\alpha,t_{p-1}), \qquad (17)$$
where
$$\mathrm{Erfc}(x) = \frac{2}{\sqrt{\pi}}\int_{x}^{\infty}e^{-v^2}\,dv.$$
Now, we compute $E\Big\{P_{B,p,n}^2\,e^{-\alpha\log(\min_{u\le t_{p-1}}S(u))}\Big\}$. The general goal will be to write everything in terms of expectations of functionals of $\widehat{W}_u$. First, we note that
$$P_{B,p,n}(x) = E\big[\mathbf{1}_{\min_{u\le T}S(u)\le B}\,\big|\,X_p = x\big] = E\Big[\mathbf{1}_{\min_{u\le T}S(u)\le B}\,\Big|\,X_p = \Big(\min_{u\le t_p}S(u),\ S(t_p),\ \mathbf{1}_{\tau\le t_p}\Big)\Big]$$
$$= \mathbf{1}_{\min_{u\le t_p}S(u)\le B} + \mathbf{1}_{\min_{u\le t_p}S(u)>B}\,P\Big(\min_{t_p\le u\le T}S(u)\le B\,\Big|\,S(t_p)\Big)$$
$$= \mathbf{1}_{\min_{u\le t_p}S(u)\le B} + \mathbf{1}_{\min_{u\le t_p}S(u)>B}\left(1 - N(d_2^+) + \left(\frac{S(t_p)}{B}\right)^{1-\frac{2r}{\sigma^2}}N(d_2^-)\right),$$
where
$$d_2^{\pm} = \frac{\pm\ln\frac{S(t_p)}{B} + \big(r-\frac{1}{2}\sigma^2\big)(T-t_p)}{\sigma\sqrt{T-t_p}}.$$
In the formula for $d_2^{\pm}$ we will find it useful to substitute the formula $S(t_p) = S_0 e^{\sigma\widehat{W}_{t_p}}$ and write the dependence on $t_p$ and $\widehat{W}_{t_p}$ explicitly as
$$d_2^{\pm}(t_p,\widehat{W}_{t_p}) = \frac{\pm\ln\frac{S_0 e^{\sigma\widehat{W}_{t_p}}}{B} + \big(r-\frac{1}{2}\sigma^2\big)(T-t_p)}{\sigma\sqrt{T-t_p}} = \frac{\pm\big(\ln(S_0) + \sigma\widehat{W}_{t_p} - \ln(B)\big) + \big(r-\frac{1}{2}\sigma^2\big)(T-t_p)}{\sigma\sqrt{T-t_p}}.$$
In addition, we also substitute $S(u) = S_0 e^{\sigma\widehat{W}_u}$ into the expression for $P_{B,p,n}$ and rearrange to get
$$P_{B,p,n}(x) = \mathbf{1}_{\min_{u\le t_p}\widehat{W}_u\le\frac{1}{\sigma}\ln(B/S_0)} + \mathbf{1}_{\min_{u\le t_p}\widehat{W}_u>\frac{1}{\sigma}\ln(B/S_0)}\left(1 - N\big(d_2^+(t_p,\widehat{W}_{t_p})\big) + \left(\frac{S_0 e^{\sigma\widehat{W}_{t_p}}}{B}\right)^{1-\frac{2r}{\sigma^2}}N\big(d_2^-(t_p,\widehat{W}_{t_p})\big)\right).$$
Hence,
$$P_{B,p,n}(x)^2 = \mathbf{1}_{\min_{u\le t_p}\widehat{W}_u\le\frac{1}{\sigma}\ln(B/S_0)} + \mathbf{1}_{\min_{u\le t_p}\widehat{W}_u>\frac{1}{\sigma}\ln(B/S_0)}\,h\big(t_p,\widehat{W}_{t_p}\big)^2,$$
where
$$h\big(t_p,\widehat{W}_{t_p}\big) = \left(1 - N\big(d_2^+(t_p,\widehat{W}_{t_p})\big) + \left(\frac{S_0 e^{\sigma\widehat{W}_{t_p}}}{B}\right)^{1-\frac{2r}{\sigma^2}}N\big(d_2^-(t_p,\widehat{W}_{t_p})\big)\right).$$
Hence, plugging the expression for $P_{B,p,n}^2$ into $E\Big\{P_{B,p,n}^2\,e^{-\alpha\log(\min_{u\le t_{p-1}}S(u))}\Big\}$ we have
$$E\Big\{P_{B,p,n}^2\,e^{-\alpha\log(\min_{u\le t_{p-1}}S(u))}\Big\} = S_0^{-\alpha}\,E\Big\{\mathbf{1}_{\min_{u\le t_p}\widehat{W}_u\le\frac{1}{\sigma}\ln(B/S_0)}\,e^{-\alpha\sigma\min_{u\le t_{p-1}}\widehat{W}_u}\Big\} + S_0^{-\alpha}\,E\Big\{\mathbf{1}_{\min_{u\le t_p}\widehat{W}_u>\frac{1}{\sigma}\ln(B/S_0)}\,h\big(t_p,\widehat{W}_{t_p}\big)^2\,e^{-\alpha\sigma\min_{u\le t_{p-1}}\widehat{W}_u}\Big\}, \qquad (18)$$
where the expectation above is taken with respect to the measure $P$ for which $\widehat{W}_u$ is a Brownian motion with drift. Recall that under $P$,
$$d\widehat{W}_t = \theta\,dt + dW_t,$$
where $W_t$ is a $P$-standard Brownian motion and $\theta = \frac{r-\frac{1}{2}\sigma^2}{\sigma}$. Using Girsanov's theorem (see [13] or [16]), $\widehat{W}_t$ is a standard Brownian motion under $\widehat{P}$ and the Radon-Nikodym density, $Z(t)$, is given by
$$Z(t) = \exp\left(-\int_0^t\theta\,dW_u - \frac{1}{2}\int_0^t\theta^2\,du\right) = \exp\left(-\int_0^t\theta\,\big(d\widehat{W}_u - \theta\,du\big) - \frac{1}{2}\theta^2 t\right) = \exp\left(-\theta\widehat{W}_t + \frac{1}{2}\theta^2 t\right).$$
Therefore, we rewrite (18) as an expectation under $\widehat{P}$ to get
$$E\Big\{P_{B,p,n}^2\,e^{-\alpha\log(\min_{u\le t_{p-1}}S(u))}\Big\} = S_0^{-\alpha}\,\widehat{E}\Big\{\mathbf{1}_{\min_{u\le t_p}\widehat{W}_u\le\frac{1}{\sigma}\ln(B/S_0)}\,e^{-\alpha\sigma\min_{u\le t_{p-1}}\widehat{W}_u}\,Z(t_p)^{-1}\Big\} + S_0^{-\alpha}\,\widehat{E}\Big\{\mathbf{1}_{\min_{u\le t_p}\widehat{W}_u>\frac{1}{\sigma}\ln(B/S_0)}\,h\big(t_p,\widehat{W}_{t_p}\big)^2\,e^{-\alpha\sigma\min_{u\le t_{p-1}}\widehat{W}_u}\,Z(t_p)^{-1}\Big\}$$
$$= S_0^{-\alpha}\,\widehat{E}\Big\{\mathbf{1}_{\min_{u\le t_p}\widehat{W}_u\le\frac{1}{\sigma}\ln(B/S_0)}\,e^{-\alpha\sigma\min_{u\le t_{p-1}}\widehat{W}_u}\,e^{\theta\widehat{W}_{t_p}-\frac{1}{2}\theta^2 t_p}\Big\} + S_0^{-\alpha}\,\widehat{E}\Big\{\mathbf{1}_{\min_{u\le t_p}\widehat{W}_u>\frac{1}{\sigma}\ln(B/S_0)}\,h\big(t_p,\widehat{W}_{t_p}\big)^2\,e^{-\alpha\sigma\min_{u\le t_{p-1}}\widehat{W}_u}\,e^{\theta\widehat{W}_{t_p}-\frac{1}{2}\theta^2 t_p}\Big\}$$
$$= S_0^{-\alpha}\,e^{-\frac{1}{2}\theta^2 t_p}\left(\iiint\mathbf{1}_{y\le\frac{1}{\sigma}\ln(B/S_0)}\,e^{-\alpha\sigma x+\theta z}\,\Gamma_{(t_{p-1},t_p)}(dx,dy,dz) + \iiint\mathbf{1}_{y>\frac{1}{\sigma}\ln(B/S_0)}\,h(t_p,z)^2\,e^{-\alpha\sigma x+\theta z}\,\Gamma_{(t_{p-1},t_p)}(dx,dy,dz)\right), \qquad (19)$$
where
$$\Gamma_{(t_{p-1},t_p)}(dx,dy,dz) = \widehat{P}\Big(\min_{u\le t_{p-1}}\widehat{W}_u\in dx,\ \min_{u\le t_p}\widehat{W}_u\in dy,\ \widehat{W}_{t_p}\in dz\Big), \qquad (20)$$
and, as stated before, $\widehat{W}_u$ is a standard Brownian motion under $\widehat{P}$.

The formula for $\Gamma_{(t_{p-1},t_p)}(dx,dy,dz)$ is given by
$$\Gamma_{(t_{p-1},t_p)}(dx,dy,dz) = \Psi_{(t_{p-1},t_p)}(x,y,z)\,\mathbf{1}_{y<x,\,y\le z,\,x\le 0}\,dz\,dy\,dx + \Upsilon_{(t_{p-1},t_p)}(x,z)\,\mathbf{1}_{x\le z,\,x\le 0}\,\delta_x(dy)\,dz\,dx,$$
where the functions $\Psi_{(t_{p-1},t_p)}$ and $\Upsilon_{(t_{p-1},t_p)}$ are given in Corollary 1.
The derivation of these formulas is obtained by:
1. Introducing the value of $\widehat{W}_u$ at the intermediate time $t_{p-1}$ in (20).
2. Using the Markov property at time $t_{p-1}$.
3. Using the classical joint distribution of a Brownian motion and its running minimum.
4. Re-integrating with respect to $\widehat{W}_{t_{p-1}}$.

The details of this derivation are in [17]. Substituting the formulas for $\Psi_{(t_{p-1},t_p)}$ and $\Upsilon_{(t_{p-1},t_p)}$ into (19) ends the proof of Corollary 1.
References

[1] A. Bassamboo and S. Jain, Efficient importance sampling for reduced form models in credit risk, in WSC '06: Proceedings of the 38th Conference on Winter Simulation, Winter Simulation Conference, 2006, pp. 741-748.
[2] F. Black and J. C. Cox, Valuing corporate securities: Some effects of bond indenture provisions, Journal of Finance, 31 (1976), pp. 351-367.
[3] A. Borodin and P. Salminen, Handbook of Brownian Motion: Facts and Formulae, Birkhäuser, Basel, 2002.
[4] M. Broadie, P. Glasserman, and S. Kou, A continuity correction for discrete barrier options, Mathematical Finance, 7 (1997), pp. 325-349.
[5] P. Del Moral, Feynman-Kac Formulae: Genealogical and Interacting Particle Systems with Applications, Springer-Verlag, 2004.
[6] P. Del Moral and J. Garnier, Genealogical particle analysis of rare events, Annals of Applied Probability, 15 (2005), pp. 2496-2534.
[7] A. Elizalde, Credit risk models IV: Understanding and pricing CDOs, working paper, CEMFI, April 2006.
[8] Y. Eom, J. Helwege, and J. Z. Huang, Structural models of corporate bond pricing: An empirical analysis, Review of Financial Studies, 17 (2004), pp. 499-544.
[9] J.-P. Fouque, R. Sircar, and K. Solna, Stochastic volatility effects on defaultable bonds, Applied Mathematical Finance, 13 (2006), pp. 215-244.
[10] J.-P. Fouque, B. Wignall, and X. Zhou, Modeling correlated defaults: First passage model under stochastic volatility, Journal of Computational Finance, 11 (2008).
[11] P. Glasserman, Monte Carlo Methods in Financial Engineering, Springer, 2003.
[12] P. Glasserman and J. Li, Importance sampling for portfolio credit risk, Management Science, 51 (2005), pp. 1643-1656.
[13] I. Karatzas and S. E. Shreve, Brownian Motion and Stochastic Calculus, Springer, 1991.
[14] R. C. Merton, On the pricing of corporate debt: The risk structure of interest rates, Journal of Finance, 29 (1974), pp. 449-470.
[15] P. J. Schönbucher, Credit Derivative Pricing Models: Models, Pricing and Implementation, Wiley Finance Series, 2003.
[16] S. E. Shreve, Stochastic Calculus for Finance II: Continuous-Time Models, Springer, 2003.
[17] D. Vestal, Interacting particle systems for pricing credit derivatives, PhD dissertation, University of California, Santa Barbara, 2008.
[18] C. Zhou, An analysis of default correlations and multiple defaults, Review of Financial Studies, 14 (2001), pp. 555-576.