ORiON, Volume 28 (1), pp. 1–18, ISSN 0529-191-X, © 2012, http://www.orssa.org.za
Calculation aspects of the European
rebalanced basket option using
Monte Carlo methods: Valuation
CJ van der Merwe* & WJ Conradie**
Received: 11 August 2011; Revised: 4 January 2012; Accepted: 12 April 2012
*Corresponding author: Department of Statistics and Actuarial Science, University of Stellenbosch, Private Bag X1, Matieland, South Africa, 7602, email: carelvdmerwe@gmail.com
**Department of Statistics and Actuarial Science, University of Stellenbosch, Private Bag X1, Matieland, South Africa, 7602.
Abstract
Extra premiums may be charged to a client to guarantee a minimum payout of a contract on
a portfolio that gets rebalanced back to fixed proportions on a regular basis. The valuation
of this premium can be seen as that of the pricing of a European put option with underlying
rebalanced portfolio. This paper finds the most efficient estimators for the value of this
path-dependent multi-asset put option using different Monte Carlo methods. With the help
of a refined method, computational time of the value decreased significantly. Furthermore,
variance reduction techniques and Quasi-Monte Carlo methods delivered more accurate and
faster converging estimates as well.
Key words: Simulation, stochastic programming, asset pricing, finance, insurance.
1 Introduction
A wide variety of products exist in life insurance and pension fund companies. Some of
these products offer the holder of the product a minimum payout guarantee by charging
them an extra premium. This extra guarantee forms a liability to the insurer that needs
to be managed in terms of risks and must be valued daily. Due to the implementation
of Solvency II across the European Union (for more information see Financial Services
Authority, 2011) most nations have started adopting the same principles, with South
Africa adopting the Solvency Assessment and Management (SAM) programme (Financial
Services Board, 2010). According to Pillar I of these programmes, all Solvency Capital
Requirements (SCR) need to be accurately measured and kept as a reserve. A standard formula provided by the regulators may be used to help calculate the SCR, or an internal model may be used to estimate these requirements. Adopting an internal model brings forth the advantage of more accurate valuation and therefore, in most cases, a smaller SCR reserve.
The product considered in this paper is a portfolio, consisting of $a$ assets, that is rebalanced back to fixed proportions, $v_i$, every $\tau$ years. Rebalancing is done by selling the better performing assets and buying the poorer performing assets. A minimum payout guarantee is offered to the client on this product and this forms the main focus of this paper.

Given that the client will receive a payout of $\Pi_T$ at the end of the life of the contract (the value of the portfolio at time $T$), this could be guaranteed to be a minimum of $K$. Therefore at time $T$ the payout of this contract, which would have been $\Pi_T$, becomes $\max\{\Pi_T, K\}$. That is $\Pi_T + \max\{K - \Pi_T, 0\}$, with the second part of this expression exactly the payoff of a European put option. This problem may therefore be seen as that of the valuation of a European put option with underlying $\Pi$.
In this paper the price (value) of this put option is estimated using different Monte Carlo
(MC) methods in order to find the most efficient method. Due to the large and increasing
computational power of corporations’ clusters of servers, simulation becomes a feasible
numerical method for calculating aspects of options where no closed-form solution or
formulae exist — therefore the focus of this paper will only be MC numerical methods.
Although general methods to apply MC simulations to path-dependent and multi-asset
options exist, currently no literature on this specific type of option exists. As such, this
new option will from here on be referred to as the European Rebalanced Basket Call/Put
Option (ERBCO / ERBPO). Only the ERBPO will be considered, but the put-call parity
for the European Rebalanced Basket Option (ERBO) is given in the concluding section
and may be used to calculate the value of the ERBCO once the price of the ERBPO
is found. Only a put option with an underlying portfolio that consists of two assets is considered in the results sections, but the analysis can easily be extended to an $a$-asset ERBO.
The focus of the next section is the valuation of the ERBPO using general MC methods.
First, a simplistic method is used which is then substantially improved by using a new
formula to simulate the value of the underlying portfolio. These are then compared in
terms of computational time.
This is followed by a section which improves the error of the estimates with the help of
Variance Reduction Techniques (VRTs). These methods are compared in terms of the
estimates’ standard error (SE) given a fixed simulation time. The fourth section improves
the convergence of the estimates using Quasi-Monte Carlo (QMC) techniques, and the
different methods are also compared in terms of different performance criteria.
2 General Monte Carlo
When used to value options, MC simulation uses the risk-neutral valuation result — the
premium that needs to be charged for an option can be estimated by sampling paths
for its underlying distribution to obtain the expected payoff in a risk-neutral world, and
then discounting this payoff at the risk-free rate. The literature in this section is from
Glasserman (2004, pp. 39–95) which builds on the work done by Boyle (1977).
A brief overview of the general MC Framework for option valuation will be provided in
the following section. This will be followed by the derivation of the Simplistic MC (SMC)
approach to valuing the ERBPO, which will be refined in the subsection that follows. This
section will be concluded with a comparison of the two different methods in terms of value,
error, and computational times.
2.1 Monte Carlo Framework for option valuation
Let $X$ be a given random variable with $E[X] = \lambda$, where the true value is unknown, and $V(X) = \sigma^2$. In MC simulation, given the distribution of $X$, $n$ independent observations of $X$, i.e. $\{X_i : i = 1, \ldots, n\}$, are generated. The parameter $\lambda$ is estimated by
$$\hat{\lambda}(n) = \frac{1}{n}\sum_{i=1}^{n} X_i,$$
which is the sample mean of $\{X_1, \ldots, X_n\}$. This implies that $E[\hat{\lambda}(n)] = E[X] = \lambda$, hence $\hat{\lambda}(n)$ is an unbiased estimator of $E[X]$. Furthermore, $V(\hat{\lambda}(n)) = \sigma^2/n$. As the number of simulations, $n$, increases, $\hat{\lambda}(n)$ becomes a better estimator of $\lambda$ as a consequence of the Law of Large Numbers (LLN) (Rice 2007, pp. 175).
The payoff at time $T$ for the ERBPO is $\max\{K - \Pi_T, 0\}$. This needs to be discounted back to time $T = 0$ to obtain the value today. That is, in terms of the previous paragraph: $X = e^{-rT}\max\{K - \Pi_T, 0\}$ with $E[X] = \alpha$, the price of the option, and $r$ the zero-coupon risk-free rate. It is important to note that throughout this paper it will be assumed that the term structure of risk-free rates is flat. It is, however, not difficult to incorporate a non-flat term structure of interest rates, as one simply needs to calculate the forward rates,
$$r_{t,t+\Delta} = \frac{(t+\Delta)\,r_{t+\Delta} - t\,r_t}{\Delta}$$
(with $\Delta$ the length of the time step being simulated) when simulating between time steps.
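As a small illustration of this adjustment, the following R sketch computes such a forward rate from a zero-coupon curve; the function and the example curve are illustrative assumptions, not taken from the paper.

# Sketch: forward rate over (t, t + Delta) implied by zero-coupon spot rates,
# r_{t,t+Delta} = ((t + Delta) * r_{t+Delta} - t * r_t) / Delta.
forward_rate <- function(t, Delta, spot) {
  # 'spot' is a function returning the zero-coupon rate for a given maturity
  ((t + Delta) * spot(t + Delta) - t * spot(t)) / Delta
}

# Example with a hypothetical upward-sloping curve
spot_curve <- function(m) 0.02 + 0.001 * m
forward_rate(t = 1, Delta = 0.5, spot = spot_curve)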
Therefore, the problem changes to simulating the variate $\Pi_T$. Once this has been simulated, the payoff can easily be discounted. The simulation of $\Pi_T$ forms the focus of the next two subsections.
2.2 Simplistic Monte Carlo
Before considering the simulation of an a-asset option, the process followed by the un-
derlying stocks needs to be discussed. Each stock has a continuous dividend yield, qj,
with j= 1, . . . , a. For multi-asset options the correlated underlying stocks are assumed to
follow the Geometric Brownian motion process (considering that in risk-neutral valuation
all assets are assumed to have the same return, r) resulting in
dSj
Sj
= (rqj) dt+σjdzj,
with ˆ
E[dzjdzk] = ρjk dt,ˆ
Ethe expected value in the risk-neutral world, and ρjk the
correlation between assets jand k. In sampling the paths of these assets (for j= 1, . . . , a
with correlation matrix Σ), the following well-known result
Sj,t+∆t=Sj,t exp " rqjσ2
j
2!t+σjjt#,for j= 1, . . . , a, with
4CJ van der Merwe & WJ Conradie
1
.
.
.
a
MV Na(0; Σ) and Σ =
1ρ1,2·· · ρ1,a
ρ2,1....
.
.
.
.
.....
.
.
ρa,1·· · · ·· 1
is obtained. This can be simulated with the use of Cholesky factorization. It will be
explained in terms of generating acorrelated normally distributed variables 1,2,. . .,a. A
sequence of auncorrelated normally distributed variables Z1, Z2, . . . , Zacan be generated
and transformed with =MZ, where T= (1, . . . , a) and ZT= (Z1, . . . , Za) are column
vectors. The matrix M:a×amust satisfy MMT= Σ, with Σ : a×athe correlation
matrix. This can be confirmed by taking the expectation of  T=M Z Z TMTas E[ T] =
ME[Z Z T]MT=M M T= Σ.
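A minimal sketch of this transformation in base R (chol() returns an upper-triangular factor, so its transpose plays the role of M); the two-asset correlation value is an arbitrary illustration.

a     <- 2
Sigma <- matrix(c(1, 0.5, 0.5, 1), nrow = a)   # correlation matrix
M     <- t(chol(Sigma))                        # M %*% t(M) = Sigma

U   <- runif(a)            # U ~ U(0,1)^a
Z   <- qnorm(U)            # inverse probability integral transform: Z ~ N(0, I)
eps <- as.vector(M %*% Z)  # eps ~ MVN_a(0, Sigma)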
Therefore, to simulate paths for the underlying stocks, only $a$ uncorrelated $N(0,1)$ variables are needed. These are simulated by generating $U \sim U(0,1)^a$ and applying the Inverse Probability Integral Transform (IPIT) (Rice 2007, pp. 352–358). It is important to note that the whole simulation process originates from $U \sim U(0,1)^a$. Note, however, that the dimensionality increases as more time steps are simulated: say 5 jumps need to be simulated for each asset and there are 4 assets, then the dimensionality of one simulation (that is, the portfolio value at time $T$) depends on 4 independent $U \sim U(0,1)^5$, or simply $U \sim U(0,1)^{5\cdot 4 = 20}$.

The portfolio $\Pi$ gets rebalanced every $\tau$ years, with the option expiring at $T$. The underlying assets need to be simulated at times $\{t = \ell\tau : \ell = 1, \ldots, \lfloor T/\tau\rfloor\} \cup \{T\}$, which implies that the number of jumps that needs to be simulated for each stock is $\lceil T/\tau\rceil$. Thus, the dimension of the problem changes to $a\cdot\lceil T/\tau\rceil$ with $a$ the number of assets underlying the portfolio.
To simulate the value of the portfolio (in order to calculate the discounted payoff) the following needs to be considered first: the value of the portfolio, at any given time $t$, is expressed as $\Pi_t = w_{1,\ell\tau}S_{1,t} + \ldots + w_{a,\ell\tau}S_{a,t}$, with $\ell\tau$ the timestamp of the rebalancing prior to time $t$ and $w_{j,\ell\tau}$ the number of units of asset $j$ held in $\Pi$ at that point in time. Further, note that $\Pi_{\ell\tau-\delta t} = \Pi_{\ell\tau}$ when $\delta t \to 0$ — that is, the value of the portfolio before rebalancing is exactly the same as after rebalancing.

By using the jumps of the underlying stocks $\{S_{j,\ell\tau} : j = 1, \ldots, a;\ \ell = 1, \ldots, \lfloor T/\tau\rfloor, T/\tau\}$ and the fact that $\Pi_{\ell\tau-\delta t} = \Pi_{\ell\tau}$ for $\delta t \to 0$ we may use
$$w_{j,\ell\tau} = \frac{v_j\,\Pi_{\ell\tau-\delta t}}{S_{j,\ell\tau-\delta t}} = \frac{v_j\,\Pi_{\ell\tau}}{S_{j,\ell\tau}}, \tag{1}$$
with $\ell = 0, 1, \ldots, \lfloor T/\tau\rfloor$ and $j = 1, \ldots, a$, and $v_j$ the proportion of the portfolio value invested in asset $j$, to calculate $\{w_{j,\ell\tau} : j = 1, \ldots, a;\ \ell = 1, \ldots, \lfloor T/\tau\rfloor\}$.

Values for $\{w_{j,\lfloor T/\tau\rfloor\tau} : j = 1, \ldots, a\}$ are found by means of equation (1) and a possible value
$$\Pi_T = \sum_{j=1}^{a} w_{j,\lfloor T/\tau\rfloor\tau}\,S_{j,T}$$
can be simulated. This will be used to simulate $n$ independent values of $X$, $\{X_i : i = 1, \ldots, n\}$, such that the estimator for the price of the ERBPO is given by
$$\hat{\alpha}_{SMC} = \frac{1}{n}\sum_{i=1}^{n} X_i = \frac{1}{n}\sum_{i=1}^{n} e^{-rT}\max\{K - \Pi_{i,T}, 0\}.$$
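The simplistic approach can be sketched in R as follows; this is an illustrative transcription under the stated assumptions (GBM assets, flat risk-free rate, rebalancing every tau years), and the function and argument names are not from the paper.

smc_erbpo <- function(n, S0, v, Pi0, tau, T, r, q, sigma, Sigma, K) {
  a  <- length(S0)
  dt <- diff(unique(c(seq(0, T, by = tau), T)))   # step lengths; last one may be T mod tau
  M  <- t(chol(Sigma))
  payoff <- numeric(n)
  for (i in 1:n) {
    S  <- S0
    Pi <- Pi0
    for (l in seq_along(dt)) {
      w   <- v * Pi / S                            # units held after rebalancing, equation (1)
      eps <- as.vector(M %*% qnorm(runif(a)))      # correlated N(0,1) shocks
      S   <- S * exp((r - q - sigma^2 / 2) * dt[l] + sigma * eps * sqrt(dt[l]))
      Pi  <- sum(w * S)                            # portfolio value after the jump
    }
    payoff[i] <- exp(-r * T) * max(K - Pi, 0)      # discounted put payoff
  }
  c(price = mean(payoff), se = sd(payoff) / sqrt(n))
}

set.seed(1)
smc_erbpo(n = 5000, S0 = c(15, 20), v = c(0.5, 0.5), Pi0 = 1000, tau = 1, T = 10,
          r = 0.03, q = c(0, 0), sigma = c(0.3, 0.3),
          Sigma = matrix(c(1, 0.5, 0.5, 1), 2), K = 1000)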
2.3 Refined Monte Carlo
It can be proved by means of mathematical induction that the value of $\Pi_T$ for a portfolio that consists of two non-dividend paying assets, with $\Delta t^T = (\tau, \ldots, \tau, T \bmod \tau)$, can be calculated with
$$\Pi_T = \Pi_0 \times \prod_{\ell=1}^{\lceil T/\tau\rceil}\left(\sum_{j=1}^{2} v_j\exp\!\left(\left(r - \frac{\sigma_j^2}{2}\right)\Delta t_\ell + \sigma_j\epsilon_j\sqrt{\Delta t_\ell}\right)\right). \tag{2}$$
For a portfolio that consists of $a$ dividend paying assets, with dividend yields $q_j$, (2) can be generalised to
$$\Pi_T = \Pi_0 \times \prod_{\ell=1}^{\lceil T/\tau\rceil}\left(\sum_{j=1}^{a} v_j\exp\!\left(\left(r - q_j - \frac{\sigma_j^2}{2}\right)\Delta t_\ell + \sigma_j\epsilon_j\sqrt{\Delta t_\ell}\right)\right).$$
This formula may be seen as the initial portfolio value, $\Pi_0$, growing proportionally to the growth rates of the different underlying correlated assets.

Hence, to simulate the price of this portfolio one only needs an observed value of $U \sim U(0,1)^d$, with $d = a\cdot\lceil T/\tau\rceil$, to obtain $\lceil T/\tau\rceil$ correlated normally distributed vectors $\epsilon \sim MVN_a(0; \Sigma)$. This method delivers exactly the same results, but in considerably less computational time.
The final Refined Monte Carlo (RMC) estimator is given by
$$\hat{\alpha}_{RMC} = \frac{1}{n}\sum_{i=1}^{n} X_i = \frac{1}{n}\sum_{i=1}^{n} e^{-rT}\max\{K - \Pi_{i,T}, 0\},$$
with
$$\Pi_{i,T} = \Pi_0 \times \prod_{\ell=1}^{\lceil T/\tau\rceil}\left(\sum_{j=1}^{a} v_j\exp\!\left(\left(r - q_j - \frac{\sigma_j^2}{2}\right)\Delta t_\ell + \sigma_j\epsilon_{i,j}\sqrt{\Delta t_\ell}\right)\right),$$
for $n\cdot\lceil T/\tau\rceil$ independently identically distributed (i.i.d.) $\epsilon_i \sim MVN_a(0; \Sigma)$. The vectors are obtained by generating $U_i \sim U(0,1)^a$, using the IPIT to obtain $Z_i \sim MVN_a(0, I)$, and finally transforming it to $\epsilon_i = MZ_i$.
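A matching R sketch of the refined method, simulating the portfolio directly from equation (2) so that each path only requires the correlated shocks; again the names are illustrative, not from the paper.

rmc_erbpo <- function(n, v, Pi0, tau, T, r, q, sigma, Sigma, K) {
  a  <- length(v)
  dt <- diff(unique(c(seq(0, T, by = tau), T)))
  M  <- t(chol(Sigma))
  PiT <- numeric(n)
  for (i in 1:n) {
    growth <- 1
    for (l in seq_along(dt)) {
      eps    <- as.vector(M %*% qnorm(runif(a)))
      growth <- growth * sum(v * exp((r - q - sigma^2 / 2) * dt[l] +
                                     sigma * eps * sqrt(dt[l])))
    }
    PiT[i] <- Pi0 * growth                         # portfolio value via equation (2)
  }
  payoff <- exp(-r * T) * pmax(K - PiT, 0)
  c(price = mean(payoff), se = sd(payoff) / sqrt(n))
}

set.seed(1)
rmc_erbpo(n = 5000, v = c(0.5, 0.5), Pi0 = 1000, tau = 1, T = 10, r = 0.03,
          q = c(0, 0), sigma = c(0.3, 0.3), Sigma = matrix(c(1, 0.5, 0.5, 1), 2),
          K = 1000)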
R uses the function Random in its base package (R Development Core Team and contributors worldwide 2011) as the Random Number Generator (RNG). This function allows a choice among several algorithms for generating the random uniformly distributed variables. The seed consists of a vector of integers whose length depends on the method chosen, and it would therefore be cumbersome to include here. Note that, whenever simulations are performed for a specific result, R enables the user to keep that initial seed constant, and this was done throughout all simulations.
2.4 Results for the comparison of the simplistic vs. the refined Monte
Carlo approach
This section provides a comparison of the SMC to the RMC approach. Simulations were performed over the 9 combinations of ρ ∈ {−0.5, 0, 0.5} and Π0 ∈ {500, 1 000, 1 500}, each with n ∈ {500, 50 000, 500 000}, for a two-asset ERBPO. The other parameters were held fixed as follows: τ = 1, v1 = v2 = 0.5, T = 10, S1,0 = 15, S2,0 = 20, r = 0.03, σ1 = σ2 = 0.3, q1 = q2 = 0 and K = 1 000. The initial seed for the RNG was also fixed so that results from different methods could be compared. The price and SE were found to be exactly the same for both approaches, the only difference being the computational time of the two methods.

Note that the combinations of inputs used were chosen arbitrarily — any other parameter could have been chosen as part of the different combinations. The most important parameter incorporated here is the number of simulations, n, since it significantly increases the computational time across the different combinations, while the other inputs simply group the simulations.
Figure 1 shows the different methods in terms of computational time on a logarithmic scale, with distinction made between n ∈ {500, 50 000, 500 000}. Note that these differences will increase/decrease as the dimensionality of the problem increases/decreases.
Figure 1: A graph of the computational times for the simplistic and refined MC methods for three different numbers of simulations.
The computational time is considerably less for the refined method. It is interesting to note that, in nominal terms, the computational times are considerably reduced. For n = 500 000 the simplistic approach took approximately 340 seconds, whereas the refined method only took approximately 8 seconds, a reduction of approximately 5½ minutes. In relative terms, the refined method took only approximately 2% of the computational time of the simplistic method, resulting in approximately a 98% reduction. The decrease in computational time becomes especially important in practice when valuing a large number of contracts at the same time.
3 Variance reduction techniques
In this section three different VRTs are applied to the RMC simulation of the ERBPO.
All the methodology behind each VRT in terms of the ERBPO will briefly be described
in each subsection, after which the estimator of the ERBPO will be given. The section
concludes with a comparison of results, facilitating a choice with regard to the superior
method. A brief discussion of the methodology behind VRTs is supplied first.
In MC simulation, $\lambda = E[X]$ is estimated by generating a sample $\{X_i : i = 1, \ldots, n\}$ and then determining $\hat{\lambda}(n) = \frac{1}{n}\sum_{i=1}^{n} X_i$. Furthermore, the SE of the estimator is $\sigma/\sqrt{n}$ with $\sigma^2$ the variance of $X$. Note that there are two elements that influence the SE, namely $n$ and $\sigma$. The first element can easily be interpreted: the more simulations that are performed, the smaller the SE will become, and the more accurate the estimate will be. The other element is the square root of the variance of the simulated variable $X$. Therefore, to make the SE smaller, the variance of $X$ should be reduced.

The SE is directly related to the width of the confidence intervals (CIs) constructed after simulations are performed. If the SE can be reduced, then narrower CIs can be obtained. In this section methods are thus introduced to reduce the size of $\sigma$. The best method will be chosen on the basis of a fixed computational time and the size of the SE. That is, given a fixed amount of time, which method yields the smallest SE and therefore the narrowest CI? Some VRTs increase the computational time of the simulation, but this was taken into consideration, since simulations across all VRTs were performed over a fixed amount of time. A VRT that increases computational time will thus be simulated a smaller number of times.
3.1 Antithetic variables
Antithetic variables (AVs) attempt to reduce the variability of the simulations by producing one set of simulations with the help of uniformly distributed random variables, and then a second set of simulations driven by uniformly distributed variables that are perfectly negatively correlated with the first set.

Let $X = H(U) = e^{-rT}\max\{K - \Pi_{U,T}, 0\}$ and $Y = H(V) = e^{-rT}\max\{K - \Pi_{V,T}, 0\}$ with $U \sim U(0,1)^{a\cdot\lceil T/\tau\rceil}$ and $V = (1 - U) \sim U(0,1)^{a\cdot\lceil T/\tau\rceil}$. Then $E[X] = E[Y] = \alpha$ and $V(X) = V(Y) = \sigma^2$.

Note that since, theoretically, $\mathrm{Cov}(U, V) = -1/12$, it implies that $\mathrm{Corr}(U, V) = -1$. Thus, using these in a monotonic function $H(\cdot)$ would cause
$$\mathrm{Cov}(H(U), H(V)) < 0.$$
Therefore, let $X_{AV} = \frac{X + Y}{2}$ with
$$E[X_{AV}] = \alpha \quad \text{and} \quad V(X_{AV}) = \frac{\sigma^2}{2}(1 + \rho_{X,Y}).$$
Generate $n/2$ observations to obtain the average, that is
$$\hat{\alpha}_{AV} = \frac{1}{n/2}\sum_{i=1}^{n/2} X_{AV,i} = \frac{1}{n/2}\sum_{i=1}^{n/2}\frac{X_i + Y_i}{2} = \frac{1}{n/2}\sum_{i=1}^{n/2}\frac{H(U_i) + H(V_i)}{2},$$
with $X_i$ and $Y_i$ i.i.d. simulations of $X$ and $Y$ as given above (using negatively correlated underlying uniformly distributed variables). Hence
$$E[\hat{\alpha}_{AV}] = \alpha \quad \text{and} \quad V(\hat{\alpha}_{AV}) = \frac{\sigma^2}{n}(1 + \rho_{X,Y}).$$
It is clear that the SE reduces when $\rho_{X,Y} < 0$.
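A minimal R sketch of the antithetic-variable estimator, pairing each path driven by U with one driven by 1 − U and averaging the pair; the helper and its arguments are illustrative assumptions.

av_erbpo <- function(npairs, v, Pi0, tau, T, r, q, sigma, Sigma, K) {
  a  <- length(v)
  dt <- diff(unique(c(seq(0, T, by = tau), T)))
  M  <- t(chol(Sigma))
  path_payoff <- function(U) {                     # U is an a x length(dt) matrix of uniforms
    growth <- 1
    for (l in seq_along(dt)) {
      eps    <- as.vector(M %*% qnorm(U[, l]))
      growth <- growth * sum(v * exp((r - q - sigma^2 / 2) * dt[l] +
                                     sigma * eps * sqrt(dt[l])))
    }
    exp(-r * T) * max(K - Pi0 * growth, 0)
  }
  xav <- numeric(npairs)
  for (i in 1:npairs) {
    U      <- matrix(runif(a * length(dt)), nrow = a)
    xav[i] <- (path_payoff(U) + path_payoff(1 - U)) / 2   # X_AV = (X + Y) / 2
  }
  c(price = mean(xav), se = sd(xav) / sqrt(npairs))
}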
3.2 Control variates
Control Variates (CVs) incorporate a variate, of which the true value is known, into the simulation process. This subsection starts by explaining mathematically why this could reduce the variability of the simulated variables. A more detailed summary of CVs may be found in Chan and Wong (2006, pp. 104–109); the technique is applied here to the pricing of the ERBPO.

When $\alpha = E[X]$ is estimated with MC simulation, a CV, $Y$, may be introduced. This variable has a known mean $\mu_Y = E[Y]$. For any given constant $c$, the quantity $X_{CV} = X + c(Y - \mu_Y)$ can be used to construct an unbiased estimator of $\alpha$, i.e. $E[X_{CV}] = \alpha$. By taking the derivative of $V(X_{CV}) = \sigma_X^2 + c^2\sigma_Y^2 + 2c\sigma_{X,Y}$ with respect to $c$ and setting it equal to 0, the optimal $c$, called $c^*$, is found as $c^* = -\sigma_{X,Y}/\sigma_Y^2$. This $c^*$ is estimated as $\hat{c} = -\hat{\sigma}_{X,Y}/\hat{\sigma}_Y^2$, with $\hat{\sigma}_{X,Y}$ the sample covariance and $\hat{\sigma}_Y^2$ the sample variance.
The CV, $Y$, will be chosen as a basket of $a$ options, each being a plain vanilla European put option (this changes to call options when working with the ERBCO) on asset $j$, with $j = 1, \ldots, a$. That is, $Y_{CV} = \sum_{j=1}^{a} Y_j$ with $Y_j$ the discounted payoff from each of these put options. For simplicity, all assets are assumed to have an initial value of $\Pi_0$ such that
$$Y_j = e^{-rT}\max\{K - S_{j,T}, 0\},$$
with $S_{j,T}$ the value of the single dividend paying stock at time $T$.

The true value of each of these $a$ options can be found with the help of the Black-Scholes (BS) option pricing formulae (Black and Scholes 1973), since the volatility surfaces of all the underlying assets are known. The true value is denoted by $\mu_{Y_j} = E[Y_j]$ such that $\mu_{CV} = \sum_{j=1}^{a}\mu_{Y_j}$. The observations of the CVs, $Y_j$, are computed from the already simulated observations of $\Pi_T$ by splitting the calculation of $\Pi_T$. If
$$\Pi_T = \Pi_0 \times \prod_{\ell=1}^{\lceil T/\tau\rceil}\left(\sum_{j=1}^{a} v_j\exp\!\left(\left(r - q_j - \frac{\sigma_j^2}{2}\right)\Delta t_\ell + \sigma_j\epsilon_j\sqrt{\Delta t_\ell}\right)\right)$$
then let
$$\Theta_{j,\ell} = \exp\!\left(\left(r - q_j - \frac{\sigma_j^2}{2}\right)\Delta t_\ell + \sigma_j\epsilon_j\sqrt{\Delta t_\ell}\right),$$
with $j = 1, \ldots, a$ and $\ell = 1, \ldots, \lceil T/\tau\rceil$, such that
$$\Pi_T = \Pi_0 \times \prod_{\ell=1}^{\lceil T/\tau\rceil}\left(\sum_{j=1}^{a} v_j\Theta_{j,\ell}\right).$$
Using $\Theta_{j,\ell}$, the prices of dividend paying assets with initial prices $\Pi_0$ are easily computed with
$$S_{j,T} = \Pi_0 \times \prod_{\ell=1}^{\lceil T/\tau\rceil}\Theta_{j,\ell}, \quad j = 1, \ldots, a,$$
such that $S_{j,T}$, with $j = 1, \ldots, a$, may be used in the simulation of the prices of the vanilla puts $Y_j$. Hence the true values are calculated with $\mu_{Y_j} = Ke^{-rT}\Phi(-d_{2,j}) - \Pi_0 e^{-q_j T}\Phi(-d_{1,j})$, where
$$d_{1,j} = \frac{\ln\frac{\Pi_0}{K} + \left(r - q_j + \frac{\sigma_j^2}{2}\right)T}{\sigma_j\sqrt{T}}, \qquad d_{2,j} = d_{1,j} - \sigma_j\sqrt{T},$$
and $\Phi(\cdot)$ is the cumulative standard normal distribution function (Black and Scholes 1973).
The final estimator of the price of the ERBPO using CVs is given by
$$\hat{\alpha}_{CV} = \frac{1}{n}\sum_{i=1}^{n}\left(X_i + \hat{c}\,(Y_i - \mu_{CV})\right),$$
with $X_i$ the i.i.d. observations of the discounted simulated payoffs of the ERBPO, $Y_i$ the i.i.d. observations of $Y_{CV} = \sum_{j=1}^{a} e^{-rT}\max\{K - S_{j,T}, 0\}$, $\mu_{CV}$ the true value of the sum of the $a$ put options found using the BS pricing formulae, and
$$\hat{c} = -\frac{\frac{1}{n-1}\sum_{k=1}^{n}\left(X_k - \bar{X}\right)\left(Y_k - \bar{Y}\right)}{\frac{1}{n-1}\sum_{k=1}^{n}\left(Y_k - \bar{Y}\right)^2},$$
with $\bar{X} = \frac{1}{n}\sum_{i=1}^{n}X_i$ and $\bar{Y} = \frac{1}{n}\sum_{i=1}^{n}Y_i$ the sample averages of the simulated observations.
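A sketch of the control-variate estimator in R, reusing the same Theta factors for the portfolio and for the hypothetical single assets with initial value Pi0; bs_put implements the dividend-adjusted BS put used for the true control values, and all names are illustrative.

bs_put <- function(S0, K, T, r, q, sigma) {
  d1 <- (log(S0 / K) + (r - q + sigma^2 / 2) * T) / (sigma * sqrt(T))
  d2 <- d1 - sigma * sqrt(T)
  K * exp(-r * T) * pnorm(-d2) - S0 * exp(-q * T) * pnorm(-d1)
}

cv_erbpo <- function(n, v, Pi0, tau, T, r, q, sigma, Sigma, K) {
  a  <- length(v)
  dt <- diff(unique(c(seq(0, T, by = tau), T)))
  M  <- t(chol(Sigma))
  X <- Y <- numeric(n)
  for (i in 1:n) {
    growth <- 1
    Sj     <- rep(Pi0, a)
    for (l in seq_along(dt)) {
      eps    <- as.vector(M %*% qnorm(runif(a)))
      Theta  <- exp((r - q - sigma^2 / 2) * dt[l] + sigma * eps * sqrt(dt[l]))
      growth <- growth * sum(v * Theta)              # portfolio growth, equation (2)
      Sj     <- Sj * Theta                           # single-asset paths for the control
    }
    X[i] <- exp(-r * T) * max(K - Pi0 * growth, 0)   # ERBPO payoff
    Y[i] <- exp(-r * T) * sum(pmax(K - Sj, 0))       # control-variate payoff
  }
  muCV  <- sum(bs_put(Pi0, K, T, r, q, sigma))       # true value of the control
  c_hat <- -cov(X, Y) / var(Y)                       # estimate of c*
  XCV   <- X + c_hat * (Y - muCV)
  c(price = mean(XCV), se = sd(XCV) / sqrt(n))
}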
3.3 Latin hypercube sampling
Latin Hypercube Sampling (LHS) is a method of systematic sampling in higher dimensions. It was first introduced by McKay, Conover, and Beckman (1979) and further analysed by Stein (1987). LHS treats all coordinates equally and avoids the exponential growth in sample size that results from full stratification by stratifying only the one-dimensional marginals of a multi-dimensional joint distribution. The method helps with the reduction of variance by sampling systematically (evenly spread out) throughout the unit hypercube. The package lhs (Carnell 2009) for the statistical program R (R Development Core Team 2009) has a built-in function for generating an LHS of size $B$ for dimension $d$. This was used to generate the underlying $U \sim U(0,1)^d$ for the simulations.

If $n$ samples are generated from the unit hypercube $(0,1)^d$ using LHS, they might as well have been a realisation of $n$ samples from the unit hypercube $(0,1)^d$ using ordinary pseudo-random numbers, the only difference being that the LHS samples are spread evenly through the unit hypercube. Therefore, the SE of this method's estimate, computed in the usual way, will be almost exactly the same as for the normal refined MC method, and no variance reduction will be visible.
Glasserman (2004, p. 242) suggests generating i.i.d. estimators $\hat{\alpha}_1(B), \ldots, \hat{\alpha}_m(B)$, each based on an LHS of size $B$, such that the estimator, with $n = m\cdot B$, becomes
$$\hat{\alpha}_{LHS} = \frac{1}{m}\sum_{i=1}^{m}\hat{\alpha}_i(B) = \frac{1}{m}\sum_{i=1}^{m}\left(\frac{1}{B}\sum_{k=1}^{B} Y_{i,k}\right),$$
with $\{Y_{i,k}\}$ generated from $B$ uniformly distributed variables on $(0,1)^d$, generated using $m$ different LHSs. Note that $E[\hat{\alpha}_{LHS}] = \alpha$, and that the SE of this estimator is the standard deviation of $\hat{\alpha}_1(B), \ldots, \hat{\alpha}_m(B)$ divided by $\sqrt{m}$.
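A sketch of this batched LHS estimator in R, assuming the randomLHS() function of the lhs package referred to above; function and argument names are otherwise illustrative.

library(lhs)

lhs_erbpo <- function(m, B, v, Pi0, tau, T, r, q, sigma, Sigma, K) {
  a  <- length(v)
  dt <- diff(unique(c(seq(0, T, by = tau), T)))
  M  <- t(chol(Sigma))
  d  <- a * length(dt)
  batch_mean <- numeric(m)
  for (b in 1:m) {
    U   <- randomLHS(B, d)                 # B points spread evenly over (0,1)^d
    pay <- numeric(B)
    for (k in 1:B) {
      Umat   <- matrix(U[k, ], nrow = a)   # reshape one point into a x number-of-steps
      growth <- 1
      for (l in seq_along(dt)) {
        eps    <- as.vector(M %*% qnorm(Umat[, l]))
        growth <- growth * sum(v * exp((r - q - sigma^2 / 2) * dt[l] +
                                       sigma * eps * sqrt(dt[l])))
      }
      pay[k] <- exp(-r * T) * max(K - Pi0 * growth, 0)
    }
    batch_mean[b] <- mean(pay)             # one batch estimate of the price
  }
  c(price = mean(batch_mean), se = sd(batch_mean) / sqrt(m))
}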
3.4 Results for measuring the efficiency of different VRTs
To determine the efficiency of VRTs, the various methods were compared with each other. In total, 1 620 two-asset ERBPO problem instances were constructed through all combinations of the following parameters: the correlation ρ ∈ {−1, −0.5, 0, 0.5, 1}; the initial value of the portfolio Π0 ∈ {500, 1 000, 1 500}; the time between rebalancing τ ∈ {0.25, 0.5, 1}; the proportion invested in the first asset v1 ∈ {0.1, 0.5}; the time till maturity T ∈ {2, 10, 15}; the risk-free rate r ∈ {0.01, 0.03, 0.08}; the standard deviation of the first asset σ1 ∈ {0.05, 0.3}; the standard deviation of the second asset σ2 = 0.3; and the strike price K = 1 000. Each of the inputs used in the combinations affects the price of the ERBO. As an example, if the option were far out-of-the-money (Π0 ≫ K) most of the simulations would return a result of 0, and therefore would not return a meaningful SE; furthermore, no variance reduction effect would be visible.
The ERBPO was priced using the RMC (Normal) method as well as the RMC method with AV, CV and LHS as VRTs, using all possible combinations of the above parameters. The number of repetitions for each of the 1 620 combinations was adjusted so that the computational time lasted only approximately one second. That is, given one second, which method yields, relative to the other methods, the smallest SE and therefore the narrowest CI? Due to the extent of the results, only a summary is provided here.
Figure 2 gives the percentage of times a certain VRT outperformed others, i.e. produced
the smallest SE in one second. The second column (Smallest) counts the percentage of
times a VRT outperformed other methods. The NA (Not Available) row indicates the
situation where there was no clear winner. The next five columns indicate the percentage
of times a certain VRT outperformed the VRT that performed second best relative to a
certain threshold. These thresholds are 0.05, 0.1, 0.2, 0.5 and 0.75. For example, in 29%
of the 1 620 instances the CV method produced an SE that was more than 0.05 smaller
than the second best method.
Figure 2: Graphical representation of the comparison of VRTs over 1 620 problem instances (percentage of times each VRT performed the best, and the difference between the best and second best method's standard error at thresholds of 0.05, 0.1, 0.2, 0.5 and 0.75; methods: Normal, AV, CV, LHS, NA).
Initially, AV appears to outperform all of the other VRTs. However, when considering by how much AV reduces the variance relative to the other methods, it soon loses its initial advantage: the number of times AV performs best decreases much faster than that of CV as the margin by which it must outperform the second most efficient VRT increases.
CVs will be chosen as the better VRT. Therefore, when applying this VRT, the SE of the
estimate should mostly be smaller than for other methods.
In the next section, QMC techniques will be used to price the options.
4 Quasi-Monte Carlo techniques
In this section the application of QMC techniques to the normal RMC method is considered. QMC techniques attempt to accelerate the convergence of the method by using low discrepancy sequences (LDSs), which are deterministically chosen sequences. The methodology behind the implementation of QMC for the ERBPO problem is first discussed, followed by two ways of evaluating the performance of these methods — one with fixed dimensionality and the other over increasing dimensionality.
4.1 LDS and RLDS
Several QMC techniques were implemented in the pricing of the ERBO. These may be split into two classes of sequences, namely normal LDSs and randomised LDSs (RLDSs). Randomising QMC points opens the possibility of measuring error through a CI while preserving much of the accuracy of pure QMC. Randomised QMC (RQMC), which uses RLDSs, thus seeks to combine the best features of ordinary MC and QMC. Randomisation sometimes improves accuracy as well.
The three LDSs that were implemented were the Halton (HS), Faure (FS) and Sobol' (SS) sequences (Halton 1960, Hammersley 1960, Faure 1982, Sobol' 1967). The algorithm used for pricing the ERBPO was exactly the same as for the RMC method, except that LDSs were used as the $U \sim U(0,1)^d$, with $d = a\cdot\lceil T/\tau\rceil$, instead of generating them with an RNG.
Four RLDSs were also implemented, all based on the SS, since this is the best performing sequence of the three normal LDSs discussed. In the first RLDS, $U \sim U(0,1)^d$ is generated with the RNG, Random (R Development Core Team and contributors worldwide 2011), and then added to each point of the SS modulo 1 (Sobol' R) (Glasserman 2004, pp. 320–321). The next three RLDSs are built into R, and include: Owen type scrambling (Sobol' R1); Faure-Tezuka type scrambling (Sobol' R2); and both Owen and Faure-Tezuka type scrambling (Sobol' R3) (Owen 1998, Tezuka and Faure 2003). These can all be found in the package fOptions (Wuertz 2010) in R (R Development Core Team 2009).
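A short R sketch of generating such a scrambled sequence, assuming the runif.sobol() interface of the fOptions package cited above (scrambling = 3 requests the combined Owen and Faure-Tezuka scrambling, i.e. Sobol' R3):

library(fOptions)

# 1 024 points of a scrambled Sobol' sequence in (0,1)^d, here d = 20 as for a
# two-asset option with 10 rebalancing steps; each row replaces one pseudo-random
# U ~ U(0,1)^d in the RMC algorithm.
U <- runif.sobol(n = 1024, dimension = 20, scrambling = 3, seed = 4711)
dim(U)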
4.2 Measuring efficiency and results for QMC techniques
The results section will be divided into two parts. The first will compare the different
LDSs and RLDSs over different inputs for the ERBPO with the dimension kept constant
at 4 and the number of points used to estimate the price as the factor over which they
will be compared. The second will compare the different LDSs and RLDSs over different
Calculation sspects of the ERBO using MC Methods: Valuation 13
inputs for the ERBPO with the number of points used to estimate the price kept constant
and the dimension being the factor over which the sequences will be compared.
The first method uses the Root Mean Squared Error (RMSE) to compare the different
values for the number of points used. After this the Relative RMSE (RRMSE) will be
used to compare the different values for the dimension.
4.2.1 Constant dimensionality
The first results will be obtained with the equation
$$RMSE(n) = \sqrt{\frac{1}{m}\sum_{i=1}^{m}\left(\hat{\alpha}_i(n) - \alpha_i\right)^2},$$
with $m$ ERBPO problems with true values $\alpha_1, \ldots, \alpha_m$ and $n$-point approximations denoted by $\hat{\alpha}_1(n), \ldots, \hat{\alpha}_m(n)$. Unfortunately, the true values are not known and have to be estimated with MC. That is, let the true values be denoted by $\alpha_i$ and be estimated with $\hat{\alpha}_{MC,i}(N)$ with $N \to \infty$. The LLN implies that this will be arbitrarily close to the true value (given $N$ is sufficiently large). The RMSE equation now changes to
$$RMSE(n) = \sqrt{\frac{1}{m}\sum_{i=1}^{m}\left(\hat{\alpha}_i(n) - \hat{\alpha}_{MC,i}(N)\right)^2} = \sqrt{\frac{1}{m}\sum_{i=1}^{m}\left(\hat{\alpha}_i(n) - \alpha_i + \alpha_i - \hat{\alpha}_{MC,i}(N)\right)^2} = \sqrt{\underbrace{A}_{\to 0} + \underbrace{B}_{\to d_{m,N}} + \underbrace{\text{crossproduct}}_{\to 0}},$$
where
$$A = \frac{1}{m}\sum_{i=1}^{m}\left(\hat{\alpha}_i(n) - \alpha_i\right)^2 \quad \text{and} \quad B = \frac{1}{m}\sum_{i=1}^{m}\left(\alpha_i - \hat{\alpha}_{MC,i}(N)\right)^2.$$
Thus, $RMSE(n) \to d_{m,N}$ as $n \to \infty$, and $d_{m,N} \to 0$ as $N \to \infty$. In reality, the RMSE will always converge to a number $d_{m,N}$, due to the fact that $N \to \infty$ is computationally impossible. Nevertheless, the different methods may still be compared on how fast they converge to this value. Figure 3 provides this comparison for the different LDSs together with some RLDSs. Note that $\alpha_i$ was estimated with $\hat{\alpha}_{MC,i}(N)$ using the normal refined MC algorithm with $N = 10^7$. The other parameters were chosen as follows: ρ ∈ {−1, −0.5, 0, 0.5, 1}, Π0 ∈ {500, 1 000, 1 500}, v1 ∈ {0.1, 0.25, 0.5}, r ∈ {0.01, 0.03, 0.08}, σ1 ∈ {0.05, 0.1, 0.3, 0.5}, K = 1 000, T = 10, τ = 2 and σ2 = 0.3. The values for n were chosen carefully to optimally fill the unit hypercube; they are n ∈ {4^4, 4^5, 4^6, 4^7, 4^8, 4^9}.
Figure 3: RMSE for the different LDSs and RLDSs over increasing values of n (RMSE against the number of simulations in 1 000's, both on logarithmic scales; series: Normal MC, Halton, Faure, Sobol', Sobol' R, Sobol' R1, Sobol' R2, Sobol' R3).
Here, the most important input to the combinations is once again the number of simulations, n; the other parameters were simply chosen to group the simulations.
Figure 3 displays the graph for the first part of the results for this section. From the graph it is clear that the best performing sequences are the randomised SSs (R1, R2 or R3), as they converge much faster to the value $d_{m,N}$. This gives two advantages: faster convergence and, because these are randomised sequences, the possibility of constructing a CI.

From this result it is interesting to note that, to obtain the same RMSE of 0.9, the normal MC simulation has to use approximately 100 000 simulations, compared to the Sobol' R3 method that only needs approximately 1 000. That is approximately a 99% reduction in the number of simulations.
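Both error measures used in this subsection and the next reduce to one-line helpers; a minimal R sketch, where alpha_hat and alpha_ref are vectors of the n-point estimates and the high-precision MC reference values over the m problem instances:

rmse  <- function(alpha_hat, alpha_ref) sqrt(mean((alpha_hat - alpha_ref)^2))
rrmse <- function(alpha_hat, alpha_ref) sqrt(mean(((alpha_hat - alpha_ref) / alpha_ref)^2))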
4.2.2 Increasing dimensionality
The second part of this results section states how the LDSs and RLDSs performed over different values of the dimension. Glasserman (2004, p. 327) suggests using the RRMSE to compare the different sequences when considering increasing dimensionality. The formula is given by
$$RRMSE(n) = \sqrt{\frac{1}{m}\sum_{i=1}^{m}\left(\frac{\hat{\alpha}_i(n) - \alpha_i}{\alpha_i}\right)^2},$$
with $m$ ERBPO problems with true values $\alpha_1, \ldots, \alpha_m$ and $n$-point approximations denoted by $\hat{\alpha}_1(n), \ldots, \hat{\alpha}_m(n)$. Unfortunately, the true values are not known, and have to be estimated with MC. That is, let the true values be denoted by $\alpha_i$ and be estimated by $\hat{\alpha}_{MC,i}(N)$ with $N \to \infty$.
In the results given in Figure 4, n was chosen as 5 120 and N as 900 000. The time between rebalancing was carefully chosen such that the dimension of the problems increases from 10 through 20 and 80 up to 200.
Figure 4: Relative RMSE for the different LDSs and RLDSs over increasing sizes of the dimension (relative RMSE against the dimension of the problem, both on logarithmic scales; series: Normal MC, Halton, Faure, Sobol', Sobol' R, Sobol' R1, Sobol' R2, Sobol' R3).
The other parameters were chosen as follows: ρ ∈ {−1, −0.5, 0, 0.5, 1}, Π0 ∈ {500, 1 000, 1 500}, v1 ∈ {0.1, 0.25, 0.5}, r ∈ {0.01, 0.03, 0.08}, σ1 ∈ {0.05, 0.1, 0.3, 0.5}, K = 1 000, T = 10 and σ2 = 0.3. The values for τ were chosen to obtain the desired dimension of the problems; they are τ ∈ {2, 1, 0.25, 0.1}. Here, the most important input to the combinations is the time between rebalancing (τ); the other parameters were simply chosen to group the simulations.
The FS and HS do not perform well at all due to the nature of these sequences (n has to be chosen carefully to obtain better results). Comparing the remaining sequences, it is clear that they all produce smaller RRMSEs than the normal MC method. However, their efficiency decreases as the dimension increases. For example, when d = 10 the Sobol' R1 method produced an RRMSE that was significantly smaller than that of the normal MC method, but when d = 200 the two produced almost the same RRMSE.
5 Conclusion
Monte Carlo techniques may be used to price a variety of different exotic options. This article aimed to find the best Monte Carlo technique to price the ERBO. A simplistic approach was refined using a mathematical proof, after which different VRTs were applied to help reduce the size of the error. Convergence was then improved with the help of different Quasi-Monte Carlo and Randomised Quasi-Monte Carlo techniques. The final combined algorithm for optimal pricing of the ERBO is given in the Appendix. This can be programmed in any mathematical/statistical package. It is advisable to program it in R (R Development Core Team 2009), as the Randomised Quasi-Monte Carlo techniques are readily available there. Thus, by using the Refined Monte Carlo method with option prices as Control Variates, together with Owen and Faure-Tezuka type randomised Sobol' sequences as a Quasi-Monte Carlo method, more efficient methods to price this option are obtained.
Only the pricing of the ERBPO was discussed in this paper; the algorithm in the Appendix was, however, adapted to price the ERBCO as well. Note, moreover, that the price of the ERBCO can be found with the help of the put-call parity, which can easily be derived using the usual arguments for the plain vanilla put-call parity. That is,
$$c_t + Ke^{-r(T-t)} = p_t + \Pi_t - \Pi_t\sum_{j=1}^{a} v_j\left(1 - e^{-q_j(T-t)}\right),$$
with $c_t$ and $p_t$ the prices at time $t$ of the ERBCO and ERBPO respectively.
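A direct transcription of this parity in R, as reconstructed above, for converting an estimated ERBPO price into the corresponding ERBCO price; the function name and inputs are illustrative.

erbco_from_erbpo <- function(put, Pi_t, K, r, q, v, T, t = 0) {
  put + Pi_t - Pi_t * sum(v * (1 - exp(-q * (T - t)))) - K * exp(-r * (T - t))
}

# Example with the two-asset inputs used earlier and a hypothetical put price
erbco_from_erbpo(put = 150, Pi_t = 1000, K = 1000, r = 0.03,
                 q = c(0, 0), v = c(0.5, 0.5), T = 10)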
Although the initial research question was answered, there still exist some open questions which can serve as future research topics. Further research could be performed on combining different VRTs to possibly find an improvement on the classical VRTs. Other possibilities include (a) determining whether different input parameters should be considered, and (b) finding a method to predict which VRT would reduce the SE the most. This could be done with Linear Discriminant Analysis, Classification and Regression Trees or other data-mining techniques applied to previous simulations.
Finally, this article showed that the Refined Monte Carlo method decreased the computational time of the value of the ERBO by approximately 98% (compared to the Simplistic Monte Carlo method); that, using different Variance Reduction Techniques, the error of the estimates was smaller than for normal Monte Carlo approximately 95% of the time; and that, by applying Quasi-Monte Carlo methods, the number of simulations needed to obtain the same accuracy as normal Monte Carlo decreased by approximately 99%. Hence, advanced simulation procedures are worthwhile to implement when pricing exotic-type derivatives using Monte Carlo simulation.
References
[1] Black F & Scholes M, 1973, The pricing of options and corporate liabilities, Journal of Political
Economy, 81(3), pp. 637–654.
[2] Boyle P, 1977, Options: A Monte Carlo approach, Journal of Financial Economics, 4, pp. 323–338.
[3] Carnell R, 2009, LHS 0.5 Edition, [Online], [Cited April 25th 2012], Available from: http://cran.
r-project.org/web/packages/lhs/index.html.
[4] Chan NH & Wong HY, 2006, Simulation Techniques in Financial Risk Management, John Wiley
& Sons, Inc., New Jersey (NJ).
[5] Faure H, 1982, Discrépance des suites associées à un système de numération, Acta Arithmetica, 41, pp. 337–351.
[6] Financial Services Authority, 2011, Solvency II, [Online], [Cited April 25th 2012], Available from:
http://www.fsa.gov.uk/pages/About/What/International/solvency/index.shtml.
[7] Financial Services Board, 2010, Solvency Assessment and Management (SAM) Roadmap,
[Online], [Cited April 25th 2012], Available from: ftp://ftp.fsb.co.za/public/media/
SAMROADMAP03112010.pdf.
[8] Glasserman P, 2004, Monte Carlo Methods in Financial Engineering, Springer Science and Business
Media, New York (NY).
[9] Halton JH, 1960, On the efficiency of certain quasi-random sequences of points in evaluating multi-
dimensional integrals, Numerische Mathematic, 2, pp. 84–90.
[10] Hammersley JM, 1960, Monte Carlo methods for solving multivariable problems, Annals of the New
York Academy of Sciences, 86, pp. 844–874.
[11] McKay MD, Conover WJ & Beckman RJ, 1979, A comparison of three methods for selecting
input variables in the analysis of output from a computer code, Technometrics, 21, pp. 239–245.
[12] Owen AB, 1998, Scrambling Sobol’ and Niederreiter-Xing points, Journal of Complexity, 14(4), pp.
466–489.
[13] R Development Core Team, 2009, R: A language and environment for statistical computing, [On-
line], [Cited April 25th 2012], Available from: http://www.R-project.org.
[14] R Development Core Team, 2011, The R base package, 2.12.1 Edition, [Online], [Cited April 25th
2012], Available from: http://www.R-project.org.
[15] Rice JA, 2007, Mathematical statistics and data analysis, Thomson Higher Education, Belmont (CA).
[16] Sobol’ IM, 1967, On the distribution of points in a cube and the approximate evaluation of integrals,
USSR Journal of Computational Mathematics and Mathematical Physics, 7, pp. 784–802.
[17] Stein M, 1987,
Large sample properties of simulations using Latin hypercube sampling, Technomet-
rics, 29, pp. 143–151.
[18] Tezuka S & Faure H, 2003, I-binomial scrambling of digital nets and sequences, Journal of Com-
plexity, 19, pp. 744–757.
[19] Wuertz D, 2010, fOptions, [Online], [Cited April 25th 2012], Available from: http://cran.
r-project.org/web/packages/fOptions/index.html.
Appendix: Programmable algorithm to value the ERBO

The pricing algorithm for the ERBO, which may be implemented in any capable statistical software program, becomes:
Algorithm 1: Price of the ERBO
Input: n, Π0, τ, v, T, r, K, σ, Σ, type, q.
Output: ŌCV – the estimated price of the ERBO; SE – the SE of the estimate; CI – 95% CI.
    run MAIN(n, Π0, τ, v, T, r, K, σ, Σ, type, q);
    ŌCV ← (1/n2) Σ_{i=1..n2} PCV[i];
    SP ← sqrt( Σ_{i=1..n2} (PCV[i] − ŌCV)² / (n2 − 1) );
    SE ← SP / √n2;
    CI ← [ŌCV − 1.96·SE, ŌCV + 1.96·SE].
Algorithm 2: MAIN
Input: n, Π0, τ, v, T, r, K, σ, Σ, type, q.
Output: all scalars, vectors and matrices generated in this algorithm can be used in future computations.
    if type = call then I ← −1 else if type = put then I ← 1;
    n1 ← ⌊0.05·n⌋; n2 ← n − n1; P ← 0 [vector of length n1]; CV ← 0 [vector of length n1];
    if T mod τ = 0 then Δt ← [τ, τ, ..., τ] (length T/τ) else Δt ← [τ, τ, ..., τ, T mod τ] (length ⌈T/τ⌉);
    generate A ← a·⌈T/τ⌉ × n1 matrix containing the SS;
    Zj, εj ← 0 [⌈T/τ⌉ × n1 matrices, for j = 1, ..., a];
    for i = 1 to n1 do
        Π ← Π0; CVj ← Π0 for j = 1, ..., a;
        for ℓ = 1 to ⌈T/τ⌉ do
            calculate Zj[ℓ, i] using the IPIT on the ith column of A and transform with the Cholesky decomposition to obtain εj[ℓ, i], for j = 1, ..., a;
            Θj ← exp( (r − qj − σj²/2)·Δt[ℓ] + σj·εj[ℓ, i]·√Δt[ℓ] ) for j = 1, ..., a;
            Π ← Π · Σ_{j=1..a} vj·Θj; CVj ← CVj·Θj for j = 1, ..., a;
        end
        P[i] ← max{I·(K − Π), 0}·e^(−rT); CV[i] ← Σ_{j=1..a} max{I·(K − CVj), 0}·e^(−rT);
    end
    P̄ ← Σ_{i=1..n1} P[i]/n1;
    C̄V ← Σ_{j=1..a} BSj(S0 = Π0, T = T, r = r, σ = σj, K = K, q = qj, type = type);
    σP,CV ← Σ_{i=1..n1} (P[i] − P̄)(CV[i] − C̄V)/n1; S²CV ← Σ_{i=1..n1} (CV[i] − C̄V)²/n1; c ← −σP,CV / S²CV;
    P ← 0 [vector of length n2]; PCV ← 0 [vector of length n2]; CV ← 0 [vector of length n2];
    generate A ← a·⌈T/τ⌉ × n2 matrix containing the SS; Zj, εj ← 0 [⌈T/τ⌉ × n2 matrices, for j = 1, ..., a];
    for i = 1 to n2 do
        Π ← Π0; CVj ← Π0 for j = 1, ..., a;
        for ℓ = 1 to ⌈T/τ⌉ do
            calculate Zj[ℓ, i] using the IPIT on the ith column of A and transform with the Cholesky decomposition to obtain εj[ℓ, i], for j = 1, ..., a;
            Θj ← exp( (r − qj − σj²/2)·Δt[ℓ] + σj·εj[ℓ, i]·√Δt[ℓ] ) for j = 1, ..., a;
            Π ← Π · Σ_{j=1..a} vj·Θj; CVj ← CVj·Θj for j = 1, ..., a;
        end
        ΠT[i] ← Π; P[i] ← max{I·(K − Π), 0}·e^(−rT); CV[i] ← Σ_{j=1..a} max{I·(K − CVj), 0}·e^(−rT); PCV[i] ← P[i] + c·(CV[i] − C̄V);
    end
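For reference, a compact R sketch of Algorithms 1 and 2 under the stated assumptions: the refined simulation driven by a scrambled Sobol' sequence (via the runif.sobol() interface of the fOptions package, treated here as an assumption about that interface), the basket of BS options as control variate, and a 5% pilot run to estimate c. All function and variable names are illustrative, not prescribed by the paper.

library(fOptions)

bs_price <- function(S0, K, T, r, q, sigma, type = "put") {
  d1 <- (log(S0 / K) + (r - q + sigma^2 / 2) * T) / (sigma * sqrt(T))
  d2 <- d1 - sigma * sqrt(T)
  if (type == "put") K * exp(-r * T) * pnorm(-d2) - S0 * exp(-q * T) * pnorm(-d1)
  else               S0 * exp(-q * T) * pnorm(d1) - K * exp(-r * T) * pnorm(d2)
}

price_erbo <- function(n, Pi0, tau, v, T, r, K, sigma, Sigma, type = "put",
                       q = rep(0, length(v))) {
  I  <- if (type == "put") 1 else -1              # sign inside max{I * (K - x), 0}
  a  <- length(v)
  dt <- diff(unique(c(seq(0, T, by = tau), T)))   # rebalancing intervals
  d  <- a * length(dt)                            # dimension of each Sobol' point
  M  <- t(chol(Sigma))

  simulate <- function(U) {                       # one point in (0,1)^d -> payoff and control
    Umat   <- matrix(U, nrow = a)
    growth <- 1
    Sj     <- rep(Pi0, a)
    for (l in seq_along(dt)) {
      eps    <- as.vector(M %*% qnorm(Umat[, l]))
      Theta  <- exp((r - q - sigma^2 / 2) * dt[l] + sigma * eps * sqrt(dt[l]))
      growth <- growth * sum(v * Theta)           # portfolio growth factor, equation (2)
      Sj     <- Sj * Theta                        # single-asset paths for the control
    }
    c(P  = exp(-r * T) * max(I * (K - Pi0 * growth), 0),
      CV = exp(-r * T) * sum(pmax(I * (K - Sj), 0)))
  }

  U   <- runif.sobol(n = n, dimension = d, scrambling = 3, seed = 4711)
  out <- t(apply(U, 1, simulate))                 # n x 2 matrix with columns P and CV

  n1    <- floor(0.05 * n)                        # pilot run used only to estimate c
  muCV  <- sum(bs_price(Pi0, K, T, r, q, sigma, type))
  c_hat <- -cov(out[1:n1, "P"], out[1:n1, "CV"]) / var(out[1:n1, "CV"])

  PCV <- out[-(1:n1), "P"] + c_hat * (out[-(1:n1), "CV"] - muCV)
  est <- mean(PCV)
  se  <- sd(PCV) / sqrt(length(PCV))
  list(price = est, se = se, ci = c(est - 1.96 * se, est + 1.96 * se))
}

price_erbo(n = 4096, Pi0 = 1000, tau = 1, v = c(0.5, 0.5), T = 10, r = 0.03,
           K = 1000, sigma = c(0.3, 0.3), Sigma = matrix(c(1, 0.5, 0.5, 1), 2))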
ResearchGate has not been able to resolve any citations for this publication.
Article
Full-text available
Hybrids of equidistribution and Monte Carlo methods of integration can achieve the superior accuracy of the former while allowing the simple error estimation methods of the latter. In particular, randomized (0, m, s)-nets in basebproduce unbiased estimates of the integral, have a variance that tends to zero faster than 1/nfor any square integrable integrand and have a variance that for finitenis never more thane≐2.718 times as large as the Monte Carlo variance. Lower bounds thaneare known for special cases. Some very important (t, m, s)-nets havet>0. The widely used Sobol' sequences are of this form, as are some recent and very promising nets due to Niederreiter and Xing. Much less is known about randomized versions of these nets, especially ins>1 dimensions. This paper shows that scrambled (t, m, s)-nets enjoy the same properties as scrambled (0, m, s)-nets, except the sampling variance is guaranteed only to be belowbt[(b+1)/(b−1)]stimes the Monte Carlo variance for a least-favorable integrand and finiten.
Article
Full-text available
Two types of sampling plans are examined as alternatives to simple random sampling in Monte Carlo studies. These plans are shown to be improvements over simple random sampling with respect to variance for a class of estimators that includes the sample mean and the empirical distribution function. 6 figures.
Article
The computational complexity of the integration problem in terms of the expected error has recently been an important topic in Information-Based Complexity. In this setting, we assume some sample space of integration rules from which we randomly choose one. The most popular sample space is based on Owen's random scrambling scheme whose theoretical advantage is the fast convergence rate for certain smooth functions.This paper considers a reduction of randomness required for Owen's random scrambling by using the notion of i-binomial property. We first establish a set of necessary and sufficient conditions for digital (0,s)-sequences to have the i-binomial property. Then based on these conditions, the left and right i-binomial scramblings are defined. We show that Owen's key lemma (Lemma 4, SIAM J. Numer. Anal. 34 (1997) 1884) remains valid with the left i-binomial scrambling, and thereby conclude that all the results on the expected errors of the integration problem so far obtained with Owen's scrambling also hold with the left i-binomial scrambling.