TRANSFER AND GENERALISATION OF FINANCIAL RISK METRICS
TO DISCRETE EVENT SIMULATION
Arne Koors (a), Bernd Page (b)
(a) Department of Informatics, University of Hamburg, Germany
(b) Department of Informatics, University of Hamburg, Germany
(a) koors@informatik.uni-hamburg.de, (b) page@informatik.uni-hamburg.de
ABSTRACT
Quantitative Finance is one of the numerous application fields of discrete event simulation. Because of special requirements in this area, domain-specific simulation tools are typically applied instead of general purpose simulators. It appears fruitful and beneficial to provide some of the risk metrics common in quantitative finance for discrete event simulation in general, in order to make use of them in generalised versions in further domains. In this paper we describe the transfer and generalisation of risk metrics from quantitative finance to general purpose simulators with regard to Semi-Variance, Value at Risk, Expected Shortfall and Drawdown.
Keywords: risk metrics, discrete event simulation
1. INTRODUCTION
The field of Quantitative Finance (also called Computational Finance or Financial Engineering) deals with the computer-supported analysis of price histories of assets and with the support of investment decisions in financial markets. Next to Monte Carlo simulation (a mathematical method that solves complex problems from probability theory numerically, based on repeated random experiments exploiting the Law of Large Numbers; see Metropolis and Ulam (1949)) and related methods, discrete event simulation is mainly applied in two areas:
* On the one hand, simulating the performance of financial markets on the micro level, i.e. down to the level of single market participants (Arthur, Holland, LeBaron, Palmer 1997; Lux and Marchesi 2000; Levy, Levy, and Solomon 2000; Hommes 2006; LeBaron 2006).
* On the other hand, evaluating particular financial market trading strategies by simulating, assessing and optimising them in different historical market environments (Chande 1997; Kocur 1999; van Tharp 2007).
In the context of this paper, we focus on the second
application area.
For the evaluation of trading strategies, special
purpose simulators are applied, so-called back testers.
Back testers differ from general purpose simulators in
the following aspects:
1. Instead of common random number generators,
historical time series are used as data sources
for security prices.
2. Entities in the sense of classical simulation
objects are not required, as only the behaviour
of defined trading strategies in the context of
inflowing market data is analysed. From a
conceptual point of view, these strategies do
not necessarily have to be represented as
entities.
3. Waiting queues and higher-level modelling components such as processing stations or transport stations are not explicitly required for modelling, due to the immaterial nature of financial strategies and their market orders. Likewise, synchronisation mechanisms for different entities are usually not needed.
4. However, there are extensive requirements on
the characterisation of trading strategies, in
particular related to profitability and the risk
taken. Here, computation of manifold special
key figures developed in quantitative finance is
required for an extended reporting. To our
knowledge, most of these key figures and their
underlying concepts have not been regarded in
general purpose simulation so far.
Figure 1: Commonalities and differences of general purpose discrete event simulators and back testers. Shared components: event modelling, activity modelling, process modelling, simulation clock, scheduler, standard statistics, reporting, experiments, optimization. Specific to discrete event simulators: entities, queues, stochastic distributions. Specific to back testers: extended reporting, historical time series.
Commonalities and differences between general
purpose discrete event simulators and back testers are
shown in the figure above.
In spite of the differences mentioned, back testers
and general purpose simulators are widely comparable
in structural terms. Further, the modelling and
simulation cycle as well as experiments are processed
equivalently. Back testers can be understood as a
special case of general purpose discrete event
simulators and therefore implemented by these, see e.g.
Golombek (2010) or Koors and Page (2011).
From a historical point of view, back testers have developed in parallel with general purpose simulators since the 1990s, with rather limited exchange of ideas in either direction.
Risk metrics are an advanced aspect of back testers, serving both to quantify the risk of a particular trading strategy and to compare different trading strategies with each other. To us, risk metrics appear potentially useful for other application domains as well.
In this paper, we aim at the transfer and
generalisation of established risk metrics from
quantitative finance into the world of general purpose
discrete event simulators.
This paper is structured as follows: In section 2, we
deal with the character of risk, seen from the
quantitative finance point of view. Parallels to
application fields of simulation are shown. We advance
to the concept of downside risk and motivate that the
idea of transferring financial risk metrics to general
application domains of simulation could be beneficial.
In section 3, we first describe four central risk metrics of quantitative finance in their original context, and then illustrate them using simulation queues. Advancing to observation variables, we generalise the concepts and transfer them into the field of general purpose discrete event simulation. We outline the modifications and enhancements we have carried out and discuss certain implementation aspects. Section 4 summarises and concludes the paper.
2. THE CONCEPT OF RISK IN QUANTITATIVE FINANCE
2.1. Expected Value as Characteristic and Variance as Risk
Yield and risk are the central concepts when evaluating financial trading strategies by means of back testers. Here, yield is understood as the expected value of the return of a trading strategy during a defined time span. The trading strategy may carry out a number of investment decisions during the simulated time frame, so-called trades. The compounded return of all single trades is the overall return of the strategy at the end of the simulation; its expected value is the yield of the strategy.
The second central characteristic of trading strategies is risk. Risk is defined as volatility, i.e. the variation of return around the expected average return, following the fundamental paradigm of the Mean-Variance Framework introduced into finance by Markowitz (1952). Volatility mathematically corresponds to the standard deviation of return.
General purpose simulation of discrete event
systems operates with mean and empirical standard
deviation as well, e.g. regarding queue length or
concerning the state space of observation variables in
general.
Attention should be paid to a shift in connotation
of the aforementioned concepts in finance: While the
expected value of return is considered as given and
characteristic for a strategy, variance always has a
negative connotation, in the sense of risk.
From this point of view, an expected queue length x of a standard M/M/1 queuing system would be considered merely a characteristic of the system. With increasing variance of the queue length (at a constant expected value), the model would be considered increasingly risky, in the sense of higher uncertainty and precariousness.
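As a brief reminder from standard queuing theory (not part of the original paper): for the M/M/1 system with utilisation ρ = λ/μ < 1, the number N of entities in the system is geometrically distributed, so that

\[ E[N] = \frac{\rho}{1-\rho}, \qquad \mathrm{Var}(N) = \frac{\rho}{(1-\rho)^{2}}. \]

Both moments are driven by ρ alone; a rising variance at a constant mean therefore presupposes a departure from the pure M/M/1 setting, e.g. batch arrivals or non-exponential service times.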
In this sense, risk can be understood as a metric for the potential of a strategy or a model to leave a stable equilibrium state in an undesired direction.
In many typical application fields of simulation, the departure from an equilibrium state or from an interval of tolerable states is also seen as critical, e.g. in
* Queuing systems and production systems, if queues run empty and machine utilisation sinks towards 0, or conversely, if the available waiting room capacity is exceeded and client orders are therefore lost;
* Ecological systems, if necessary population sizes or quantities of substance fall below or exceed critical levels, and the system collapses;
* Physical systems, if material strains are too high, resulting in damage. Physiological systems may suffer from underutilisation as well, thus becoming inoperative in consequence of non-use.
2.2. Downside Risk as Asymmetric Risk Conception
Deviations from the mean may be uncritical in one direction, while undesired in the other. In finance, only below-average returns (resp. above-average losses) pose a risk for an investment, while excess returns are welcome and may be ignored in terms of risk assessment. Quantitative finance has elaborated an asymmetrical category of risk metrics called downside risk, where only one-sided variations of return in the sense of underperformance are considered as risk.
Asymmetric risk perceptions can also be found in
the application fields of simulation, with regard to
desired resp. undesired deviations from means or
system equilibrium states. Thus longer queues in
production, higher pollutant concentrations in
ecological systems or stronger physical strains will
generally be considered as more risky and less
desirable, while this is usually not true for the opposite
cases. Against this background of comparable asymmetric evaluation preferences, downside risk metrics from quantitative finance should be more suitable for risk assessment in simulation application fields than conventional symmetric standard statistics.
2.3. Practical Implementation
We would like to provide modellers of general discrete event systems with additional tools that allow them to assess inherent “risks” of models more adequately,
following the concepts of quantitative finance. This can
help in understanding model dynamics more
appropriately and can deliver new fruitful approaches
and deeper insight concerning analysis and adaptation
of undesired model behaviour.
For this purpose, the four risk metrics from
quantitative finance discussed below are transferred into
our general purpose discrete event simulation
framework Desmo-J (www.desmo-j.de, Page and
Kreutzer 2005) as statistical extensions. This work is
currently carried out in the context of a bachelor thesis
in our working group Modelling and Simulation (MBS)
in the Department of Informatics at University of
Hamburg.
3. RISK METRICS
A risk metric is a concept to assess risk. By contrast, a risk measure is the implementation of a computational process employed to calculate a concrete risk measurement. As we focus on the conceptual side of risk, the term risk metric is used in this paper.
In this section we first describe four central risk metrics of quantitative finance in their original context and then illustrate them using simulation queues as examples. Advancing to observation variables, we generalise the concepts and transfer them into the field of general purpose discrete event simulation. We outline the modifications and enhancements we have carried out and discuss certain implementation aspects.
Formal definitions of the mentioned risk metrics in
their original financial context can be found in e.g.
Yang, Yu, and Zhang (2009); Lohre, Neumann, and
Winterfeldt (2009) or Giorgi (2002).
3.1. Semi-Variance
As stated above, only those trades yielding a below-
average return actually contribute to the downside risk
of a trading strategy. By contrast, trades with above-
average returns are welcome and do not increase
downside risk. Accordingly, only those undesired return deviations below the expected return are accounted for in the Semi-Variance concept. The computation is carried out as for the standard variance, but observations above the mean are skipped.
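As a sketch of this definition (conventions differ on the normalising constant; some authors divide by the number of below-mean observations instead of by n), for observations x_1, …, x_n with mean \bar{x}:

\[ SV^{-} = \frac{1}{n} \sum_{i:\, x_i < \bar{x}} (x_i - \bar{x})^{2}, \qquad SV^{+} = \frac{1}{n} \sum_{i:\, x_i > \bar{x}} (x_i - \bar{x})^{2}, \]

where SV^{+} denotes the positive counterpart referred to below.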
In the characterisation of a queue, we can likewise assume that only one of the two possible deviation directions from the mean queue length is preferable, depending on the context. This means that in computing Semi-Variance only those time spans are considered in which the average queue length is exceeded or undershot, respectively, depending on the preferred point of view. (Commonly, there will be a preference for shorter queue lengths.)
By this metric, we can gain a first impression and a
basis for comparison, with regard to the size of
undesired variations of queue length.
For the implementation of the further risk metrics described below, all single observations must be stored as a time series until the simulation has ended. Thus the implementation of Semi-Variance accesses the total sample collected at the end of a simulation run, in contrast to the stepwise online computation of standard statistics normally applied in Desmo-J (Page, Lechler, and Claassen 2000).
In order to provide general applicability
concerning the direction of deviation perceived as risk,
we compute negative as well as positive Semi-Variance
and provide both of them separately on simulation
reports.
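The following sketch outlines such a post-run computation; class and method names are our own assumptions, not the DESMO-J API. It weights each recorded state by its holding time, matching the time-span view taken above, and reports negative and positive Semi-Variance separately:

```java
import java.util.List;

/** Minimal sketch (hypothetical names, not the DESMO-J API): time-weighted
 *  negative and positive Semi-Variance of a piecewise-constant series. */
final class SemiVariance {

    /** One recorded state, valid for the given holding time. */
    record Observation(double value, double holdingTime) {}

    /** Returns { negative Semi-Variance, positive Semi-Variance }. */
    static double[] semiVariances(List<Observation> series) {
        double totalTime = 0.0;
        double weightedSum = 0.0;
        for (Observation o : series) {
            totalTime += o.holdingTime();
            weightedSum += o.value() * o.holdingTime();
        }
        double mean = weightedSum / totalTime;       // time-weighted mean

        double below = 0.0;
        double above = 0.0;
        for (Observation o : series) {
            double dev = o.value() - mean;
            double contribution = dev * dev * o.holdingTime();
            if (dev < 0) below += contribution;      // undesired downside part
            else if (dev > 0) above += contribution; // upside counterpart
        }
        // Normalising by total observation time here; dividing by the time
        // spent below resp. above the mean is an equally common convention.
        return new double[] { below / totalTime, above / totalTime };
    }
}
```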
3.2. Value at Risk
The Value at Risk (VaR) of an open trade quantifies the maximum loss (in absolute currency units) that will not be exceeded at a given confidence level of 1 − α, at the end of a set period. In other words, VaR is equivalent to the α-quantile of the probability distribution of the returns expected in the set period.
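In formal terms (one common formalisation among several sign conventions), with F_R denoting the distribution function of the returns R expected in the set period:

\[ \mathrm{VaR}_{\alpha} = -\inf\{\, x \in \mathbb{R} : F_R(x) \ge \alpha \,\}, \]

i.e. the negated α-quantile of the return distribution, so that losses are reported with a positive sign.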
Figure 2: Illustration of the Value at Risk metric as the α-quantile of the probability distribution of returns expected in the set period
The underlying return distribution can be determined by Historical Simulation, Monte Carlo simulation or the Variance-Covariance method (Linsmeier and Pearson 2000).
Value at Risk is an important key figure in banking: Under the Basel II accord, banks are legally obligated to compute market risk in terms of the Value at Risk metric on a daily basis, in order to ensure that pre-set maximum losses will not be exceeded within certain time horizons.
Transferred to queues in general purpose discrete event simulators, VaR indicates the minimum or maximum queue length expected after a given simulation time interval and at a set confidence level, starting from the current queue length. This becomes an important measurand if certain queue lengths must not be exceeded or fallen below, e.g. because of cost restraints or capacity limits of the waiting room. Thus, VaR gives a formative indication of how to dimension a waiting room at a given initial state, a set confidence level and a designated time horizon, in order to meet specific restrictions.
In simulation practice, referring to the current queue length or to the present value of an observation variable should be avoided, as these values change permanently during simulation runs and are therefore not eligible as fixed reference states for the VaR measure. Instead, we implement the VaR concept by only considering the relative change of observation variables compared to their previous states, and call this Delta at Risk.
In case of bounded state spaces, there is a risk of distortion at boundary and extreme states: for example, the length of a queue cannot fall below zero, so that from this state no further decrease of the queue length can be observed next. In contrast, if the state of an observation variable is far away from boundary and extreme states, their impact on the next observations will be much smaller.
Without differentiation, this could lead to overestimating the risk of queue growth for lengths > 0, as more length-increment observations originating from length = 0 would enter the sample than is appropriate for the normal case. Conversely, observations made at queue lengths > 0 would distort the representative basis of future states for length = 0, as unrealistic length-decrement observations would be included in the sample, even though for length = 0 a decrease of the queue length below zero is conceptually impossible.
The stated danger of reduced significance due to insufficient consideration of marginal or extreme contexts also exists in the practical use of Value at Risk in financial institutions. Often the present state of financial markets is abstracted away, and the risk of loss is calculated without consideration of the current context. For example, the risk of high losses is intuitively lower at the end of a financial market decline than at its beginning, as most fearful investors have already left the market at an earlier stage, so that selling pressure eases. Nevertheless, the current market environment is normally not considered when calculating VaR.
As long as state change probabilities are determined regardless of the context of boundary and extreme states, the VaR metric consequently runs the risk of diminished significance.
We address this problem by calculating four different Delta at Risk values in simulation reports, according to four contexts: On the one hand we determine the Delta at Risk related to the most frequent and to the median state of all states observed during the simulation run. On the other hand we compute two more Delta at Risk measures, corresponding to the minimum and maximum states observed. Using the example of queues, output is generated for the expected alteration of queue length considering empty, most frequent, median and maximal length queues.
The choice of the median state as a representative average state is motivated by the aim of analysing a state as far away from boundary and extreme states as possible, in order to provide a largely unaffected Delta at Risk representing intermediate states.
In case of non-symmetrical state distributions, the
most frequent state is situated closer to boundary states
or extreme states than the median state. Even though it
might be under (partial) influence of boundary states
and extreme states, the most frequent state may be
regarded as a better basis for significant conclusions in
certain contexts, as statements concerning this state may
have higher empirical correspondence.
The description above deals with the original and
probably most frequent application of VaR as a risk
metric for one-dimensional discrete state spaces (here:
currency units). In principle, the Delta at Risk concept
is canonically extendable to multi-dimensional or
continuous state spaces as well. In order to keep
simulation reports manageable, the mapping of sets of
multi-dimensional states or intervals of states to a one-
dimensional discrete state space should be considered,
though.
With this in mind, we continue to describe Delta at
Risk in terms of one-dimensional discrete state spaces,
as we expect this to be the most common use case.
A naive implementation of Delta at Risk of an observation variable could, at every change of the variable, compute and store the quotient of the state delta and the simulation time passed since the last state change. By sorting these rates of change in ascending order, accumulating their frequencies and normalising them, the distribution function F(x) could be constructed, describing the distribution of rates of change per reference time unit.
However, rates of change computed in this way would be based on variable-length time intervals containing only one actual change event, scaled to a reference time unit afterwards. Actually observed consecutive changes within real reference time intervals would be ignored. Thus, extrapolation from short time intervals could lead to excessive distortions when dealing with longer time intervals.
Instead, we determine and store the size of state
change at every modification of an observation variable,
as compared to the state the variable had a fixed time
interval earlier. For this purpose the state history for (at
least) the time interval under consideration has to be
stored within a time series during the simulation run.
Subsequently, the recordings of these actual relative state changes within the set time span are sorted in ascending order, accumulated and normalised in frequency, yielding a more realistic distribution function F(x). This procedure takes into account that subsequent state changes may neutralise each other partially or entirely over longer time frames, as is often observed in practice.
For flexibility, n time spans of interest may be
passed as input parameters, leading to a risk analysis for
each of the time frames given, regarding the cumulative
outcome of all multiple state changes actually observed
within that time frame, recorded at every simulation
event concerning the observation variable.
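A minimal sketch of this procedure follows; names are our own assumptions, not the DESMO-J API. Deltas over a fixed lookback interval are collected at every state change of the recorded history, and an empirical α-quantile is read off the sorted sample; the four reference-state contexts discussed above are omitted for brevity:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

/** Minimal sketch (hypothetical names, not the DESMO-J API): Delta at Risk
 *  as the empirical alpha-quantile of actual state changes over a fixed
 *  lookback interval. */
final class DeltaAtRisk {

    /** One entry of the recorded state history of an observation variable. */
    record Sample(double time, double value) {}

    /** Collects the state change over the lookback interval at every event. */
    static List<Double> deltas(List<Sample> history, double interval) {
        List<Double> result = new ArrayList<>();
        double start = history.get(0).time();
        for (Sample s : history) {
            double refTime = s.time() - interval;
            if (refTime < start) continue;       // not enough history yet
            result.add(s.value() - valueAt(history, refTime));
        }
        return result;
    }

    /** Piecewise-constant lookup of the state valid at time t. */
    static double valueAt(List<Sample> history, double t) {
        double value = history.get(0).value();
        for (Sample s : history) {
            if (s.time() > t) break;             // history is ordered by time
            value = s.value();
        }
        return value;
    }

    /** Empirical alpha-quantile of the collected state changes. */
    static double quantile(List<Double> deltas, double alpha) {
        List<Double> sorted = new ArrayList<>(deltas);
        Collections.sort(sorted);
        int index = (int) Math.ceil(alpha * sorted.size()) - 1;
        return sorted.get(Math.max(index, 0));
    }
}
```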
Beyond the original application in quantitative finance, we extend the initially asymmetrical concept of downside risk to both ends of the state space, as it cannot be assumed that risk is always located at the left end of the state space. Thus, we appraise the potential risk at the right end of the distribution likewise. As a consequence, the α-quantiles for α = 1%, 2.5%, 5% and 10% as well as for 90%, 95%, 97.5% and 99% are determined from F(x) and output on the simulation report.
In sum, the Delta at Risk metric derived from Value at Risk quantifies the maximum size of change expected (i.e. risk, in terms of quantitative finance) with regard to an observation variable, at a given confidence level of 1 − α, after a set period, and according to four well-defined reference states.
Typical conclusions based on the simulation report would be: “At a confidence level of 97.5% and starting from the observed median m, the queue length will increase by at most x entities and decrease by at most y entities after 10 minutes of simulated wall clock time” or “Starting with an empty queue and given a confidence level of 99%, the queue length will not exceed z entities after 1 hour of simulated wall clock time”.
3.3. Expected Shortfall
Value at Risk quantifies the maximum loss at a given confidence level of 1 − α; nevertheless, a loss exceeding VaR is not impossible as long as α > 0. The shortcoming of the VaR concept is that it does not make a statement about the amount of loss to be expected if the Value at Risk limit is exceeded in critical cases.
This gap is filled by the metric Expected Shortfall (also referred to as Conditional Value at Risk or Expected Tail Loss; Rockafellar and Uryasev 2000). It expresses the expected amount of loss for the fraction α of cases in which VaR is exceeded. Hence, Expected Shortfall is a metric to assess the potential extent of damage for unlikely but possible cases of extreme events (in terms of the choice of α). Expected Shortfall is an important key figure used to describe the state space beyond VaR when structuring financial products of an insurance-like nature.
Figure 3: Illustration of the Expected Shortfall metric as the expected value of the tail of the return distribution beyond the Value at Risk
Here too, we generalise the quantitative finance metric with regard to three aspects, for the purpose of transferring the concept to general simulation application domains: Firstly, we move from expected absolute loss to expected relative state change of an observation variable. Secondly, we consider minimum, median, most frequent and maximum states as references. Thirdly, both ends of the probability distribution are regarded likewise, to remain flexible with respect to where risk is attributed, depending on the specific application area.
In order to avoid confusion, the modified risk metric is called Conditional Delta at Risk.
Conditional Delta at Risk is based on the same data as the Delta at Risk introduced above. In the course of calculating Delta at Risk, the Conditional Delta at Risk can simply be computed as the expected value of the empirical probability density below (resp. above) the α-quantile of all observations.
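Empirically, with the sorted state changes Δ_(1) ≤ … ≤ Δ_(n) collected for Delta at Risk and k = ⌈αn⌉ (a standard estimator, transcribed here to our state-change setting):

\[ \mathrm{CDaR}_{\alpha} = E\big[\Delta \mid \Delta \le \mathrm{DaR}_{\alpha}\big] \approx \frac{1}{k} \sum_{i=1}^{k} \Delta_{(i)}, \]

with the mirror-image computation over the k largest deltas for the right tail.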
Referring to queues, Conditional Delta at Risk indicates the expected growth or contraction of queue length for the remaining fraction of cases beyond the confidence level. If a waiting room has been dimensioned taking account of the Delta at Risk metric, its expected overload in the remaining fraction of cases can now be appraised.
A typical conclusion based on the simulation
report would be “If, starting with an empty queue and
given a confidence level of 99%, the queue length
exceeds the Delta at Risk of z entities after 1 hour of
simulated wall clock time, then an average queue length
of z + c entities can be expected”.
3.4. Drawdown Phases
The term Drawdown of a trading strategy relates to an interim loss of asset value after a new peak of asset value has been reached beforehand. Drawdown may be given in absolute currency units or as a percentage of the preceding peak asset value. A Drawdown Phase often extends over several consecutive (mis-)trades and thus cumulates their effects.
Drawdown Recovery starts at the point of
maximum interim loss. It lasts until the previous peak
asset value is reached again or exceeded.
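A minimal sketch of how completed Drawdown Phases can be identified ex post from a recorded series is given below; class and method names are hypothetical, not the DESMO-J API. The algorithm tracks the running peak and the deepest trough since that peak, and closes a phase once the previous peak is reached again:

```java
import java.util.ArrayList;
import java.util.List;

/** Minimal sketch (hypothetical names, not the DESMO-J API): ex-post
 *  detection of completed Drawdown Phases in a recorded value series. */
final class DrawdownPhases {

    record Sample(double time, double value) {}

    /** One completed Drawdown Phase with its three key figures. */
    record Phase(double drawdown, double drawdownTime, double recoveryTime) {}

    static List<Phase> detect(List<Sample> series) {
        List<Phase> phases = new ArrayList<>();
        Sample peak = series.get(0);    // last peak reached so far
        Sample trough = peak;           // deepest point since that peak
        for (Sample s : series.subList(1, series.size())) {
            if (s.value() >= peak.value()) {
                if (trough.value() < peak.value()) {     // phase completed
                    phases.add(new Phase(
                        peak.value() - trough.value(),   // absolute Drawdown
                        trough.time() - peak.time(),     // Drawdown Time
                        s.time() - trough.time()));      // Recovery Time
                }
                peak = s;               // new peak starts the next phase
                trough = s;
            } else if (s.value() < trough.value()) {
                trough = s;             // the Drawdown deepens
            }
        }
        return phases;  // a still-open final Drawdown is not yet quantifiable
    }
}
```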
Figure 4: Drawdown Phase of the L’Oréal share from July 2011 to April 2012, showing Drawdown (as a percentage of the preceding peak), Drawdown Time and Drawdown Recovery Time
Drawdown and Drawdown Time give an impression of the extent and speed at which the state of observation variables may move into an undesirable direction. Therefore, these key figures allow an assessment of undesirable system dynamics in terms of vulnerability or susceptibility to disturbances. By contrast, the Drawdown Recovery Time provides an indication of the regenerative capacity of the analysed system.
A financial trading strategy may experience a multitude of Drawdown Phases over time. Especially the Maximum Drawdown ever undergone is of particular interest with regard to trading futures contracts in financial markets, as this key figure determines the required minimum margin of a trading account to withstand the highest Drawdown encountered so far in the strategy’s history.
Drawdowns and their recoveries can only be quantified ex post, when a Drawdown Phase is completed and a new peak in asset value has been reached. Moreover, a trading strategy is almost always in a Drawdown, except at new peaks in asset value. Hence it is of vital interest to analyse the structure of Drawdowns to gain insight into the dynamics of undesirable behaviour.
In the context of queues, a queue length of 0 may be set as the base level, corresponding to the peak asset value in the financial context. Then Drawdown, Drawdown Time and Recovery Time characterise the dynamics of formation and reduction of queues, regarding the queue length as observation variable.
Since a multitude of Drawdown Phases per observation variable is to be expected in simulation runs, we extend the quantitative finance Drawdown concept, which originally focuses only on the extreme case of Maximum Drawdown resp. the Average Drawdown.
To provide a quick overview of the total dynamics
of the system modelled, we classify all Drawdowns
according to their absolute extent and display their
distribution in a histogram. The number of histogram
bins is determined according to the rule of Freedman
and Diaconis (1981), after the simulation has ended.
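For reference, the Freedman-Diaconis rule derives the bin width h from the interquartile range and the sample size n as

\[ h = 2\,\frac{\mathrm{IQR}(x)}{\sqrt[3]{n}}, \]

so that the number of bins follows from the observed value range divided by h.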
Two additional histograms visualise the
distribution of Drawdown Times and Drawdown
Recovery Times in a similar manner.
For further orientation, we introduce a Drawdown scatter plot, encoding Drawdown Time as x-coordinate, Drawdown Recovery Time as y-coordinate and the Drawdown extent as the colour of a data point. In this way, the character and distribution of all Drawdown Phases during the simulation run can be seen at a single glance.
Moreover, all Drawdown Pathways per
observation variable are superimposed in a joint
coordinate system. Thus, a good overview of the typical
and most severe Drawdown Phases is given, including
the Recovery sub-phases.
A second diagram visualises the superimposed
time series only of the Recovery sub-phases per
observation variable, providing a quick overview of the
regenerative properties of the system modelled.
Beyond the specified extended analysis of the
Drawdown concept itself, we generalise this risk metric
in three ways, in order to support its flexible and
unrestricted utilisation in simulation application
domains:
* We consider both the setbacks and recoveries on the way towards peak states (“classical” Drawdowns) and, complementarily, the ascent and descent phases on the way towards bottom states. By this means, we again take into account that the interpretation of a certain direction of state development as risky or preferable cannot be predetermined for the manifold application areas of simulation.
* Furthermore, after determination of the median state at the end of a simulation run, the time series of observed variable states is divided into phases below and above the median. These phases are treated separately, as Drawdowns and Recoveries concerning states below the median, resp. as ascents and descents concerning states above the median. This supports the alternative point of view of striving for a central state of equilibrium and considering deviations from this balanced state as risk. Since phases below and above the median are treated separately, it remains open whether risk is attributed to one or to both directions of deviation.
* For non-symmetrical empirical distributions, the same handling as above is applied, but this time with reference to the state with the highest frequency instead of the median state.
Accordingly, all of the advanced statistical and graphical analyses mentioned (3 histograms, 1 scatter plot, 2 time series diagrams) are provided for all six use cases of the generalised Drawdown concept described above.
4. SUMMARY
In quantitative finance, specialised discrete event simulators called back testers are utilised in order to evaluate financial market trading strategies. Here,
strategies are simulated in different historical market
environments and evaluated, compared and optimised
by means of a wide range of assessment criteria. A
significant assessment category is related to the risk
taken in following a particular trading strategy. In this
context, risk in terms of volatility is understood as a
metric for the potential to deviate from a characteristic
average rate of return. Additionally, quantitative finance
has elaborated the concept of downside risk in the form
of asymmetrical risk metrics, where only negative
deviations in the sense of underperformance are
regarded.
We propose to introduce the four most accepted
financial risk metrics of back testers into general
purpose discrete event simulators. We think that these
metrics open up new and fruitful views on model
dynamics in general and may specifically support the
evaluation and possibly optimisation of undesired
model behaviour. In particular, dimensioning of waiting
rooms as well as planning of processing capacities
should benefit from the generalised risk key figures.
In order to support as wide a field of application domains in discrete event simulation as possible, we extend the transferred metrics Value at Risk and Expected Shortfall in three aspects: Firstly, we advance from expected absolute loss of currency units to expected relative changes of observation variables, to allow deriving general statements independently of particular current states. Secondly, we consider the minimum, median, most frequent and maximum state of observation variables in order to handle boundary and extreme states separately. In this sense, the median of an observation variable represents a state as far as possible from extreme situations. For non-symmetrical empirical distributions, the most frequent state is regarded as well, as a possibly better basis for significant conclusions. Thirdly, we account for both ends of state distributions, since depending on the application domain, risk may be regarded as deviation into different directions, possibly also into both directions.
The third aforementioned generalisation is also
applied when transferring Semi-Variance to discrete
event simulation.
The second and third extension mentioned above
concern Drawdown Phases, too.
We aim at providing a concrete tool for the
modeller of general discrete event models, in order to
convey an impression of the value of transferring
quantitative finance risk metrics into other domains. For
this reason, our general purpose simulation framework
Desmo-J is extended by these concepts in a Bachelor
thesis at the working group of Modelling and
Simulation in the Department of Informatics at
University of Hamburg. We expect to provide a more
sophisticated risk estimation in the various application
domains of discrete event simulation as compared to
conventional standard statistics.
REFERENCES
Arthur, W.B., Holland, J.H., LeBaron, B., Palmer, R.,
Tayler, P., 1997. Asset pricing under endogenous
expectations in an artificial stock market. In:
Arthur, W.B., Durlauf, S., Lane, D. (Eds.). The
Economy as an Evolving Complex System II.
Reading, Mass.: Addison-Wesley, 15–44.
Chande, T.S., 1997. Beyond technical analysis. How to
develop and implement a winning trading system.
New York: Wiley.
Freedman, D., Diaconis, P., 1981. On the histogram as a density estimator: L2 theory. Zeitschrift für Wahrscheinlichkeitstheorie und Verwandte Gebiete 57 (4), 453–476.
Giorgi, E.G. de, 2002. A Note on Portfolio Selection
under Various Risk Measures. 19th August 2002.
Available from: http://ssrn.com/abstract=762104
[accessed 25th July 2012].
Golombek, O., 2010. Entwurf und Implementation eines
simulationsbasierten Frameworks zur Analyse von
Finanzmarkt-Handelsstrategien. Diploma Thesis.
University of Hamburg.
Hommes, C.H., 2006. Heterogeneous agent models in
economics and finance. In: Tesfatsion, L., Judd,
K.L. (Eds.). Handbook of Computational
Economics. Volume 2. Agent-Based
Computational Economics. Amsterdam: Elsevier
North-Holland, 1109–1186.
Kocur, M., 1999. System-Konzeptionen. Systeme von
Handelsstrategien an Futures-Märkten.
Konzeption und Performance. Rosenheim: TM-
Börsenverlag.
Koors, A., Page, B., 2011. A Hierarchical Simulation
Based Software Architecture for Back-Testing and
Automated Trading. Proceedings of 25th
European Conference on Modelling and
Simulation, 275–282. 7th–10th June 2011, Krakow
(Poland).
LeBaron, B., 2006. Agent-based computational finance.
In: Tesfatsion, L., Judd, K.L. (Eds.). Handbook of
Computational Economics. Volume 2. Agent-
Based Computational Economics. Amsterdam:
Elsevier North-Holland, 1187–1233.
Levy, M., Levy, H., Solomon, S., 2000. The
microscopic simulation of financial markets. From
investor behavior to market phenomena. San
Diego: Academic Press.
Linsmeier, T.J., Pearson, N.D., 2000. Value at Risk.
Financial Analysts Journal 56 (2), 47–67.
Lohre, H., Neumann, T., Winterfeldt, T., 2009.
Portfolio Construction with Downside Risk.
Working Paper. 18th March 2009. Available from:
http://ssrn.com/abstract=1112982 [accessed 25th
July 2012].
Lux, T., Marchesi, M., 2000. Volatility clustering in
financial markets. A microsimulation of
interacting agents. International Journal of
Theoretical and Applied Finance 3 (4), 675–702.
Markowitz, H., 1952. Portfolio selection. The Journal of Finance 7 (1), 77–91.
Metropolis, N., Ulam, S., 1949. The Monte Carlo
Method. Journal of the American Statistical
Association 44 (247), 335–341.
Page, B., Kreutzer, W., 2005. The Java simulation
handbook. Simulating discrete event systems with
UML and Java. Aachen: Shaker.
Page, B., Lechler, T., Claassen, S., 2000.
Objektorientierte Simulation in Java mit dem
Framework DESMO-J. Hamburg: Libri Books on
Demand.
Rockafellar, R.T., Uryasev, S., 2000. Optimization of
Conditional Value-at-Risk. Journal of Risk 2, 21–41.
van Tharp, K., 2007. Trade your way to financial
freedom. New York: McGraw-Hill.
Yang, D., Yu, M., Zhang, Q., 2009. Downside and
drawdown risk characteristics of optimal portfolios
in continuous time. In: Ciarlet, P.G. (Ed.).
Mathematical Modelling and Numerical Methods
in Finance. Amsterdam: Elsevier North-Holland,
189–226.
AUTHOR BIOGRAPHIES
Bernd Page holds degrees in Applied Computer Science from the Technical University of Berlin, Germany, and from Stanford University, USA. As professor for Applied Computer Science at the University of Hamburg, he researches and teaches in the field of Discrete Event Simulation as well as in Environmental Informatics.
Arne Koors obtained his master’s degree in Computer Science from the University of Hamburg, Germany. Since then he has been working as a software developer and management consultant in the manufacturing industry, primarily in the field of forecasting and demand planning. He now works as a research associate on his PhD thesis in the field of financial simulations in the simulation group led by Prof. Page.