Article

Copula based hierarchical risk aggregation through sample reordering

Authors:
  • Philipp Arbenz
  • Christoph Hummel
  • Georg Mainik

Abstract

For high-dimensional risk aggregation purposes, most popular copula classes are too restrictive in terms of attainable dependence structures. These limitations become more severe with increasing dimension. We study a hierarchical risk aggregation method which is flexible in high dimensions. With this method it suffices to specify a low-dimensional copula for each aggregation step in the hierarchy. Copulas and margins of arbitrary kind can be combined. We give an algorithm for numerical approximation which introduces dependence between originally independent marginal samples through reordering.
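As a rough illustration of the reordering idea in the abstract, the sketch below induces dependence between two independently simulated marginal samples at a single aggregation step by matching their ranks to a sample from a chosen bivariate copula. The Gaussian copula, the lognormal margins and all parameter values are illustrative assumptions, not choices made in the paper.

```python
# Minimal sketch of one aggregation step via sample reordering.
# Illustrative assumptions (not from the paper): a Gaussian copula with
# correlation rho and lognormal margins.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, rho = 10_000, 0.6

# Independent marginal samples of the two child nodes.
x = rng.lognormal(mean=0.0, sigma=1.0, size=n)
y = rng.lognormal(mean=0.5, sigma=0.8, size=n)

# Sample from the bivariate copula chosen for this aggregation step
# (here a Gaussian copula, obtained from a bivariate normal).
z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)
u, v = stats.norm.cdf(z[:, 0]), stats.norm.cdf(z[:, 1])

# Reorder the marginal samples so their componentwise ranks follow the
# ranks of the copula sample; the margins stay unchanged.
x_dep = np.sort(x)[stats.rankdata(u, method="ordinal").astype(int) - 1]
y_dep = np.sort(y)[stats.rankdata(v, method="ordinal").astype(int) - 1]

# The node's aggregate is the componentwise sum of the reordered samples.
s = x_dep + y_dep
print("estimated 99% quantile of the aggregate:", np.quantile(s, 0.99))
```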


... The hierarchical risk aggregation approach recently adopted by [1] was developed by [2]. The hierarchical aggregation procedure is based on rooted trees that include branching and leaf nodes, and uses the elliptical copula family for each aggregation step. ...
... Hierarchical copula models draw on results from graph theory on rooted trees [18]. Following the notation used in [2], a rooted tree τ is composed of leaf nodes and branching nodes, where one of the branching nodes is the root. The subset of branching nodes is denoted by B(τ), the subset of leaf nodes is denoted by L(τ), and the root node by ∅. ...
... Each leaf node corresponds to a business line and each branching node corresponds to the sum of the variables associated with its children leaf nodes. As in [2], we assume that each branching node has two children for simplicity, although the results on rooted trees used in this paper are valid for branching nodes with any number of children (see [2]). By assuming that each branching node has only two children, we can simplify the construction and estimation of the model as only bivariate copulas are involved. ...
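The excerpts above describe the aggregation tree in graph-theoretical terms. The following sketch shows one possible in-memory representation of such a rooted tree with leaf and branching nodes; the class and field names are our own illustration and are not taken from the cited papers.

```python
# Illustrative representation of a rooted aggregation tree: leaf nodes carry
# marginal samples (one per business line), branching nodes carry a bivariate
# copula sampler and aggregate their children. Names are ours, not from [2].
from dataclasses import dataclass, field
from typing import Callable, List, Optional

import numpy as np


@dataclass
class Node:
    name: str
    children: List["Node"] = field(default_factory=list)      # empty list => leaf
    sample: Optional[np.ndarray] = None                        # marginal sample at a leaf
    copula_sampler: Optional[Callable[[int], np.ndarray]] = None  # n -> (n, #children) uniforms

    @property
    def is_leaf(self) -> bool:
        return not self.children


def leaves(node: Node) -> List[Node]:
    """Collect the leaf nodes (business lines) of the subtree rooted at `node`."""
    if node.is_leaf:
        return [node]
    found: List[Node] = []
    for child in node.children:
        found.extend(leaves(child))
    return found


# Example: a root with two branching children, each aggregating two business lines.
rng = np.random.default_rng(0)
tree = Node("root", children=[
    Node("property", children=[Node("fire", sample=rng.lognormal(0, 1, 1000)),
                               Node("storm", sample=rng.lognormal(0, 1, 1000))]),
    Node("liability", children=[Node("motor", sample=rng.gamma(2, 1, 1000)),
                                Node("general", sample=rng.gamma(3, 1, 1000))]),
])
print([leaf.name for leaf in leaves(tree)])
```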
Preprint
Full-text available
Insurance companies need to calculate solvency capital requirements in order to ensure that they can meet their future obligations to policyholders and beneficiaries. The solvency capital requirement is a risk management tool that is essential when extreme catastrophic events occur, resulting in a high number of possibly interdependent claims. This paper studies the problem of aggregating the risks coming from several insurance business lines and analyses the effect of reinsurance on the level of risk. Our starting point is to use a Hierarchical Risk Aggregation method, which was initially based on 2-dimensional elliptical copulas. We then propose the use of copulas from the Archimedean family and a mixture of different copulas. Our results show that a mixture of copulas can provide a better fit to the data than an individual copula and consequently avoid over- or underestimation of the capital requirement of an insurance company. We also investigate the significance of reinsurance in reducing the insurance company's business risk and its effect on diversification. The results show that reinsurance does not always reduce the level of risk, but can also reduce the effect of diversification for insurance companies with multiple business lines.
... This topic has attracted increasing interest since the 2000s, and several distributions have been studied, yielding models for the sums of dependent and independent random variables. Some recent references include [2,5,9,16,21,25,26,34,36,41,43], among others. Recently, some results have been obtained for the limiting behavior of the hazard rate functions of these sums in Block et al. [7,8]. ...
... In the case of aggregation of risks assuming dependence, we have available some results using different copula structures (see, e.g., [2,15,24,45] ). For Farlie-Gumbel-Morgenstern (FGM) copulas and mixed Erlang marginal distributions, Cossette et al. [16] have obtained closed expressions for the distribution of the aggregated risk and for capital allocation problems. ...
... If we set θ = 0 in (4.5), we obtain the usual exponential convolution. • AMH copula with Pareto marginals P(1, 1) with pdf given in (2.2), t ≥ 0. ...
Article
The study of the distributions of sums of dependent risks is a key topic in actuarial sciences, risk management, reliability and in many branches of applied and theoretical probability. However, there are few results where the distribution of the sum of dependent random variables is available in a closed form. In this paper, we obtain several analytical expressions for the distribution of the aggregated risks under dependence in terms of copulas. We provide several representations based on the underlying copula and the marginal distribution functions under general hypotheses and in any dimension. Then, we study stochastic comparisons between sums of dependent risks. Finally, we illustrate our theoretical results by studying some specific models obtained from Clayton, Ali-Mikhail-Haq and Farlie-Gumbel-Morgenstern copulas. Extensions to more general copulas are also included. Bounds and the limiting behavior of the hazard rate function for the aggregated distribution of some copulas are studied as well.
... A rank-based hierarchical copula method dealing with multiple property and casualty insurance lines with different parametric copula families is used and compared to a nested Archimedean copula in Côté et al. [9]. To perform simulations from the hierarchical copula model, Arbenz et al. [3] establish rigorous mathematical foundations and adapt the Iman-Conover reordering algorithm used in the current paper. ...
... This aggregation approach is consistent for instance with the work of Arbenz et al. [3]. For a complete specification of the dependence model, their work includes a conditional independence assumption, meaning that given the aggregate scaled innovation at a given node, children of this node are independent from any node that is not a child of that given node. ...
... for all accident semester and development lag combinations (i, j) that are yet unobserved, representing future observations. The dependence structure in the HCM is achieved through the Iman-Conover reordering algorithm proposed in Iman and Conover [17] and adapted by Arbenz et al. [3]. Subsequently, the covariance structure of residuals across development lags is included in simulated innovations by applying a linear transformation, which provides scaled innovations. ...
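The last excerpt mentions imposing the covariance structure of residuals across development lags through a linear transformation of simulated innovations. A generic way to do this is to multiply i.i.d. innovations by a Cholesky factor of the target covariance, as sketched below; the cited paper's exact transformation may differ, and the covariance matrix here is a placeholder.

```python
# Sketch of imposing a covariance structure on independent simulated innovations
# through a linear (Cholesky) transformation. The AR(1)-type covariance matrix
# and dimensions are placeholders; the cited paper's transformation may differ.
import numpy as np

rng = np.random.default_rng(1)
n_sim, n_lags = 5_000, 4

# Target covariance across development lags (illustrative AR(1)-type structure).
lags = np.arange(n_lags)
Sigma = 0.5 ** np.abs(np.subtract.outer(lags, lags))

eps = rng.standard_normal((n_sim, n_lags))   # independent innovations
L = np.linalg.cholesky(Sigma)                # Sigma = L @ L.T
scaled = eps @ L.T                           # rows now have covariance approx. Sigma

print(np.round(np.cov(scaled, rowvar=False), 2))
```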
Preprint
Full-text available
We propose a stochastic model allowing property and casualty insurers with multiple business lines to measure their liabilities for incurred claims risk and calculate associated capital requirements. Our model includes many desirable features which enable reproducing empirical properties of loss ratio dynamics. For instance, our model integrates a double generalized linear model relying on accident semester and development lag effects to represent both the mean and dispersion of loss ratio distributions, an autocorrelation structure between loss ratios of the various development lags, and a hierarchical copula model driving the dependence across the various business lines. The model allows for a joint simulation of loss triangles and the quantification of the overall portfolio risk through risk measures. Consequently, a diversification benefit associated with the economic capital requirements can be measured, in accordance with IFRS 17 standards which allow for the recognition of such benefit. The allocation of capital across business lines based on the Euler allocation principle is then illustrated. The implementation of our model is performed by estimating its parameters based on car insurance data obtained from the General Insurance Statistical Agency (GISA), and by conducting numerical simulations whose results are then presented.
... First, the proposed algorithm operates on irregular positive supports with sizes of up to 300 points and treats atoms (point probability masses at minimum and maximum loss) separately. Second, positive correlations between pairs of risks are modeled by a mixture of the Split-Atom convolution in Wojcik et al. (2016) and the comonotonic distribution (Dhaene et al. 2002) of the sum of risks, using several risk aggregation schemes based on copula trees in Arbenz et al. (2012); Côté and Genest (2015). The high computing speed of our procedure stems from the fact that, by design, we aim at reproducing only the second-order moments of the aggregate risk. ...
... If a unique description of the joint distribution of individual risks is not crucial and the focus is solely on obtaining an easily interpretable model for the total risk, the individual risks can be aggregated in a hierarchical way. Such a process involves the specification of partial dependencies between the groups of risks in different aggregation steps (Arbenz et al. 2012). For pairwise accumulation, we first select the two risks X_i, X_j and construct a copula model for that pair. ...
... To answer the research question posed in Section 2.1, this non-uniqueness is not critical. Conversely, in situations where, e.g., capital allocation is of interest (see Côté and Genest 2015), an extra conditional independence assumption in Arbenz et al. (2012) is needed. For instance, the aggregation scheme in the middle panel of Figure 1 would require: ...
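The excerpts above describe approximating the distribution of a positively dependent pair sum by a weighted average of the independent (convolution) and comonotonic sums, with the weight chosen to reproduce second-order moments. The sketch below illustrates that moment-matching idea with simulated samples; the marginals, the target correlation and the weight formula are our illustrative assumptions rather than the cited Split-Atom algorithm.

```python
# Sketch of the "weighted average of independent and comonotonic sums" idea for a
# positively dependent pair of risks: the mixing weight is chosen so that the
# variance of the aggregate matches a target implied by a given correlation.
# Marginals, target correlation and the weight formula are illustrative, not the
# cited Split-Atom algorithm.
import numpy as np

rng = np.random.default_rng(2)
n, rho_target = 100_000, 0.4

x = rng.gamma(shape=2.0, scale=1.5, size=n)   # placeholder marginals
y = rng.gamma(shape=3.0, scale=1.0, size=n)

s_indep = x + rng.permutation(y)              # independent coupling (convolution analogue)
s_como = np.sort(x) + np.sort(y)              # comonotonic coupling (quantile addition)

# Both couplings have the same mean, so the mixture variance is linear in w.
var_target = x.var() + y.var() + 2.0 * rho_target * x.std() * y.std()
w = np.clip((var_target - s_indep.var()) / (s_como.var() - s_indep.var()), 0.0, 1.0)

# Draw the aggregate from the two-component mixture.
pick_como = rng.random(n) < w
s = np.where(pick_como, rng.choice(s_como, n), rng.choice(s_indep, n))
print(f"mixing weight w = {w:.3f}, aggregate variance = {s.var():.3f} (target {var_target:.3f})")
```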
Article
Full-text available
We present several fast algorithms for computing the distribution of a sum of spatially dependent, discrete random variables to aggregate catastrophe risk. The algorithms are based on direct and hierarchical copula trees. Computing speed comes from the fact that loss aggregation at branching nodes is based on a combination of a fast approximation to brute-force convolution, arithmetization (regridding) and the linear complexity of the method for computing the distribution of a comonotonic sum of risks. We discuss the impact of tree topology on the second-order moments and tail statistics of the resulting distribution of the total risk. We test the performance of the presented models by accumulating ground-up loss for 29,000 risks affected by hurricane peril.
... 3, a nested Archimedean copula model is fitted, along the same lines as [1]. As this model imposes many constraints on the dependence structure and the choice of copulas, a more flexible approach considered in [4,11] is implemented in Sect. 4. Risk capital calculations and allocations for the two models are compared in Sect. 5, and Sect. ...
... As this model imposes many constraints on the dependence structure and the choice of copulas, a more flexible approach considered in [4,11] is implemented in Sect. 4. Risk capital calculations and allocations for the two models are compared in Sect. ...
... In this section, a hierarchical approach to loss triangle modeling is considered. It appears to have been originally proposed by Swiss reinsurance practitioners [9,35] but was formalized in [4]. Estimation and validation procedures for this class of models are described in [10,11], where rank-based clustering techniques are also proposed for selecting an appropriate structure. ...
Article
Full-text available
In order to determine the risk capital for their aggregate portfolio, property and casualty insurance companies must fit a multivariate model to the loss triangle data relating to each of their lines of business. As an inadequate choice of dependence structure may have an undesirable effect on reserve estimation, a two-stage inference strategy is proposed in this paper to assist with model selection and validation. Generalized linear models are first fitted to the margins. Standardized residuals from these models are then linked through a copula selected and validated using rank-based methods. The approach is illustrated with data from six lines of business of a large Canadian insurance company for which two hierarchical dependence models are considered, i.e., a fully nested Archimedean copula structure and a copula-based risk aggregation model.
... Thus rank correlations of H are "injected" into the synthetic sample. This procedure is equivalent to plugging empirical margins (obtained from asynchronous observations) into the rank-based empirical copula of a sample of H (Arbenz et al., 2012). Moreover, it turned out that the Iman-Conover method allows one to introduce not only the rank correlations of H into the synthetic samples, but the entire copula of H (cf. Arbenz et al., 2012; Mildenhall, 2005). ...

... This procedure is equivalent to plugging empirical margins (obtained from asynchronous observations) into the rank-based empirical copula of a sample of H (Arbenz et al., 2012). Moreover, it turned out that the Iman-Conover method allows one to introduce not only the rank correlations of H into the synthetic samples, but the entire copula of H (cf. Arbenz et al., 2012; Mildenhall, 2005). In a somewhat weaker sense, these results are related to the approximation of stochastic dependence by deterministic functions and to the pioneering result by Kimeldorf and Sampson (1978). ...
... It is implemented in various software packages, and it serves as a standard tool in dependence modelling and uncertainty analysis. The reordering algorithm even allows one to construct synthetic samples with hierarchical dependence structures that meet the needs of risk aggregation in insurance and reinsurance companies (Arbenz et al., 2012). The distribution of the aggregated risk is estimated by the empirical distribution of the component sums X_1^(k) + ... + X_d^(k) for k = 1, . . . ...
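The excerpts above describe injecting the rank correlations (indeed the whole empirical copula) of an observed sample H into independently simulated margins and then estimating the aggregate from the component sums. The sketch below illustrates this Iman-Conover-type construction; the template sample, margins and dimensions are placeholders.

```python
# Sketch of the Iman-Conover-type construction described above: independent
# synthetic margins are reordered so their componentwise ranks match those of a
# template sample H, injecting the empirical copula of H; the aggregate risk is
# then estimated from the component sums. All inputs below are placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n, d = 5_000, 3

# Template sample H whose dependence is to be reproduced (here an
# equicorrelated Gaussian vector stands in for real observations).
cov = 0.5 * np.ones((d, d)) + 0.5 * np.eye(d)
H = rng.multivariate_normal(np.zeros(d), cov, size=n)

# Independent synthetic samples from the desired margins.
margins = [rng.lognormal(0.0, 1.0, n), rng.gamma(2.0, 1.0, n), rng.exponential(2.0, n)]

# Reorder each margin so that its ranks follow the corresponding column of H.
reordered = np.empty((n, d))
for j in range(d):
    ranks = stats.rankdata(H[:, j], method="ordinal").astype(int) - 1
    reordered[:, j] = np.sort(margins[j])[ranks]

# Empirical distribution of the component sums estimates the aggregated risk.
sums = reordered.sum(axis=1)
print("estimated 99% VaR of the aggregate:", np.quantile(sums, 0.99))
```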
Article
This paper studies convergence properties of multivariate distributions constructed by endowing empirical margins with a copula. This setting includes Latin Hypercube Sampling with dependence, also known as the Iman-Conover method. The primary question addressed here is the convergence of the component sum, which is relevant to risk aggregation in insurance and finance. This paper shows that a CLT for the aggregated risk distribution is not available, so that the underlying mathematical problem goes beyond classic functional CLTs for empirical copulas. This issue is relevant to Monte-Carlo based risk aggregation in all multivariate models generated by plugging empirical margins into a copula. Instead of a functional CLT, this paper establishes strong uniform consistency of the estimated sum distribution function and provides a sufficient criterion for the convergence rate O(n^{-1/2}) in probability. These convergence results hold for all copulas with bounded densities. Examples with unbounded densities include bivariate Clayton and Gauss copulas. The convergence results are not specific to the component sum and hold also for any other componentwise non-decreasing aggregation function. On the other hand, convergence of estimates for the joint distribution is much easier to prove, including CLTs. Beyond Iman-Conover estimates, the results of this paper apply to multivariate distributions obtained by plugging empirical margins into an exact copula or by plugging exact margins into an empirical copula.
... The ability to adequately model risks is crucial for insurance companies. The method of "Copula-based hierarchical risk aggregation" (Arbenz et al. [1]) offers a flexible way of doing so and has attracted much attention recently. We briefly introduce the aggregation tree model as well as the sampling algorithm proposed by the authors. ...
... In recent years, a number of papers have been published on this subject. Arbenz et al. [1] were the first to provide a sound mathematical foundation for the copula-based hierarchical risk aggregation approach. They described the structure of the model in graph-theoretical terms and identified a condition under which it leads to a unique multivariate distribution. ...
... The subject of the first question is the sampling algorithm proposed by Arbenz et al. [1]. As mentioned before, the samples can be used to approximate the distribution of the total aggregate X ∅ of the individual risks. ...
Article
Full-text available
The ability to adequately model risks is crucial for insurance companies. The method of "Copula-based hierarchical risk aggregation" by Arbenz et al. offers a flexible way of doing so and has attracted much attention recently. We briefly introduce the aggregation tree model as well as the sampling algorithm proposed by the authors. An important characteristic of the model is that the joint distribution of all risks is not fully specified unless an additional assumption (known as the "conditional independence assumption") is added. We show that there is numerical evidence that the sampling algorithm yields an approximation of the distribution uniquely specified by the conditional independence assumption. We propose a modified algorithm and provide a proof that under certain conditions the said distribution is indeed approximated by our algorithm. We further determine the space of feasible distributions for a given aggregation tree model in case we drop the conditional independence assumption. We study the impact of the input parameters and the tree structure, which allows conclusions about how the aggregation tree should be designed.
... There has been much work on the distribution of sums of random variables especially in the context of insurance and risk. We mention Dolati et al. (2017), Arbenz, Hummel, and Mainik (2012), Gijbels and Herrmann (2014), Sarabia et al. (2016), Bolviken and Guillen (2017), Gijbels and Herrmann (2018), Sarabia et al. (2018) and Navarro and Sarabia (2020). ...
... Copulas have received applications in many areas. Some applications related to the theme of this paper are: Dolati et al. (2017) used copulas to assess dependence of the distribution of functions of random variables; Arbenz, Hummel, and Mainik (2012) used copulas for hierarchical risk aggregation through sample reordering; Domma and Giordano (2012) used copulas to measure household financial fragility; Domma and Giordano (2013) used copulas to account for dependence in stress-strength models; Gijbels and Herrmann (2014) used copulas to induce dependence on the distribution of sums of random variables; Gijbels and Herrmann (2018) used copulas to induce dependence in optimal expected-shortfall portfolio selection; Navarro and Sarabia (2020) derived copula representations for aggregation of dependent risks. Value at risk (VaR) and expected shortfall (ES) are the two most popular financial risk measures. ...
Article
There has been much work on the distribution of independent or dependent random variables. But we are not aware of any work giving exact results for the distribution of the sum of randomly weighted random variables. In this paper, we derive exact results for the randomly weighted sum of two dependent random variables. The derived expressions are for the cumulative distribution function, conditional expectation, moment generating function, value at risk, expected shortfall and the limiting tail behavior of the randomly weighted sum of two dependent random variables. Two numerical illustrations are given.
... Another technical complication is that for some combinations of parametric marginals and parametric copulas, estimation of the pdf of S entails a multivariate convolution integral, which is computationally cumbersome for large portfolios. An avant-garde workaround referred to as copula-based hierarchical risk aggregation has been proposed recently (Arbenz et al. 2012; Bruneton 2011; Côté and Genest 2015; Derendinger 2015; Joe and Sang 2016) to alleviate the aforementioned bottlenecks. This approach eliminates the need to parameterize one copula for all the risks as it defines the joint, usually bivariate, dependence between partial sums of risks in the subsequent loss accumulation steps. ...
... Such a hierarchical pathway of adding dependent risks is represented as a bivariate aggregation tree with copulas describing the summation nodes. The original implementation in Arbenz et al. (2012) uses Monte Carlo sampling. A significantly faster convolution-based implementation has been discussed in our previous contribution on ground-up loss estimation (Wójcik et al. 2019). ...
Article
Full-text available
We propose several numerical algorithms to compute the distribution of gross loss in a positively dependent catastrophe insurance portfolio. Hierarchical risk aggregation is performed using bivariate copula trees. Six common parametric copula families are studied. At every branching node, the distribution of a sum of risks is obtained by discrete copula convolution. This approach is compared to approximation by a weighted average of independent and comonotonic distributions. The weight is a measure of positive dependence through variance of the aggregate risk. During gross loss accumulation, the marginals are distorted by application of insurance financial terms, and the value of the mixing weight is impacted. To accelerate computations, we capture this effect using the ratio of standard deviations of pre-term and post-term risks, followed by covariance scaling. We test the performance of our algorithms using three examples of complex insurance portfolios subject to hurricane and earthquake catastrophes.
... The key to the hierarchical copula method is to determine the aggregation "tree", that is, how to carry out the hierarchical aggregation. Bürgi et al. (2008), Bruneton (2011) and Arbenz et al. (2012) gave different methods to determine the aggregation tree. In terms of application, Abadi (2015) gives the specific steps of how to use the hierarchical copula method proposed by Arbenz et al. (2012). ...
... Bürgi et al. (2008), Bruneton (2011) and Arbenz et al. (2012) gave different methods to determine the aggregation tree. In terms of application, Abadi (2015) gives the specific steps of how to use the hierarchical copula method proposed by Arbenz et al. (2012). Gaisser et al. (2011) integrated the risk of German banks with the hierarchical copula. ...
Chapter
We divide the process of bank risk aggregation into four key aspects, and systematically review the research on bank risk aggregation under correlation from three levels: the correlation of bank risk aggregation, the typical characteristics of correlation, the method of bank risk aggregation and the risk data in bank risk aggregation. The correlation relationships of bank risk aggregation are complicated. There are correlations among banks, among different types of risks within banks, and between levels and elements within risks. After determining the correlation relationship between different bank risks, the characteristics of the correlation structure between risks need to be considered. There are many complex characteristics in the correlation of bank risk, such as nonlinearity, tail correlation, structural asymmetry and so on. Finally, the risk aggregation method is selected to integrate the bank risk. Different risk aggregation methods have different abilities to describe various complex characteristics of risk correlation. Only by fully capturing these typical characteristics can the risk aggregation method accurately describe the correlation between risks and obtain accurate results of bank risk aggregation.
... Patchwork copulas in the context of risk management have been treated in detail in [1,5,15,24-26,30], among others. In several of the cited papers the question of an unfavourable, i.e. superadditive, VaR estimate for a portfolio of aggregated risks was particularly emphasized, see also [27]. ...
... The situation is simpler in the two-dimensional case with identical margins (see [10, Section 2]). A numerical approach to a constructive solution of the general problem is given, e.g., by the rearrangement algorithm (see, e.g., [1,10,20]). From a practical point of view, simpler and yet explicit constructions for unfavourable but not necessarily worst VaR estimates by appropriate copula constructions seem to be a useful alternative. ...
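The rearrangement algorithm mentioned above is typically run on a discretized quantile grid of the upper tail. The sketch below shows the standard scheme of Embrechts, Puccetti and Rüschendorf for approximating a worst-case (upper) VaR bound; the margins, confidence level and grid size are illustrative assumptions.

```python
# Compact sketch of the rearrangement algorithm (Embrechts-Puccetti-Rüschendorf)
# for approximating the worst-case (upper) VaR of a sum with given margins.
# Margins, confidence level, grid size and tolerance are illustrative choices.
import numpy as np
from scipy import stats

alpha, N = 0.99, 1_000
# Quantile grid of the upper tail [alpha, 1) for three heavy-tailed margins.
p = alpha + (1.0 - alpha) * (np.arange(N) + 0.5) / N
X = np.column_stack([stats.lomax.ppf(p, c=2.5),
                     stats.lomax.ppf(p, c=3.0),
                     stats.lognorm.ppf(p, s=1.0)])


def rearrange(X, tol=1e-9, max_iter=100):
    """Iteratively make each column antimonotonic to the sum of the other columns."""
    X = X.copy()
    prev = -np.inf
    for _ in range(max_iter):
        for j in range(X.shape[1]):
            others = X.sum(axis=1) - X[:, j]
            # Oppositely order column j with respect to the sum of the others.
            X[np.argsort(others), j] = np.sort(X[:, j])[::-1]
        current = X.sum(axis=1).min()
        if abs(current - prev) < tol:
            break
        prev = current
    return X


X_ra = rearrange(X)
print(f"approximate worst-case VaR at level {alpha}: {X_ra.sum(axis=1).min():.2f}")
```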
Article
Full-text available
The central idea of the paper is to present a general simple patchwork construction principle for multivariate copulas that create unfavourable VaR (i.e. Value at Risk) scenarios while maintaining given marginal distributions. This is of particular interest for the construction of Internal Models in the insurance industry under Solvency II in the European Union. Besides this, the Delegated Regulation by the European Commission requires all insurance companies under supervision to consider different risk scenarios in their risk management system for the company’s own risk assessment. Since it is unreasonable to assume that the potential worst case scenario will materialize in the company, we think that a modelling of various unfavourable scenarios as described in this paper is likewise appropriate. Our explicit copula approach can be considered as a special case of ordinal sums, which in two dimensions even leads to the technically worst VaR scenario.
... Taking the structural changes of dependence into account, Ji et al. (2019) measured the upside and downside CoVaRs between WTI crude oil and the exchange rates of the United States and China by using six time-varying copula models. Arbenz et al. (2012) specified a multivariate dependence structure by sample reordering for high-dimensional risk aggregation, which deployed different bivariate copulas. Accordingly, Zhou et al. (2016) proposed a copula-based grouped model to characterise the intragroup and intergroup dependence of the financial system constituted by two industries. ...
... Thus, the flexibility of describing the dependence of variables within and among groups is greatly improved. Third, based on the simulation algorithm of Arbenz et al. (2012), the vine copula grouped model introduces dependence into originally independent marginal samples through reordering. Therefore, this model successfully combines intra-dependence and interdependence. ...
Article
Full-text available
This paper investigates systemic risk in Chinese financial industries by constructing a vine copula grouped CoVaR model, which accounts for the fact that various sub-industries are comprised of multiple financial institutions. The backtesting results indicate that the vine copula grouped model performs better in measuring the systemic risk in comparison to the vine copula model, which in turn validates the accuracy and effectiveness of the former. Moreover, the results indicate that banking is a major systemic risk contributor, even though it has a strong ability to resist risk. Additionally, the potential loss faced by the securities industry is big, but its systemic risk contribution is small. These results are of significance to investment decision and risk management.
... This limitation becomes more severe with increasing dimension (Yang et al., 2015; Okhrin and Tetereva, 2017). So the copula-based hierarchical risk aggregation method proposed by Arbenz et al. (2012) is introduced to aggregate the remaining n − m risks (Risk_{m+1}, Risk_{m+2}, ..., Risk_n) for its flexibility in high dimensions. ...
... Then the correlation relationship between the risks is measured by a commonly used correlation coefficient, Kendall's tau τ (Arbenz et al., 2012). The most correlated X and Y are first aggregated to obtain the subtotal risk S. Then the subtotal risk S is further aggregated with the remaining risk Z to obtain the total risk T. As shown in Eq. (7) and Eq. ...
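The excerpt above measures pairwise dependence with Kendall's tau and aggregates the most correlated pair first. A minimal sketch of that selection step follows; the simulated risks are placeholders.

```python
# Sketch of the pair-selection step described above: compute Kendall's tau for
# every pair of remaining risks and aggregate the most correlated pair first.
# The simulated risks are placeholders.
import itertools

import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
risks = {"X": rng.gamma(2.0, 1.0, 2_000),
         "Y": rng.lognormal(0.0, 1.0, 2_000),
         "Z": rng.exponential(1.0, 2_000)}
risks["Y"] = risks["Y"] + 0.5 * risks["X"]    # induce some dependence for the example

best_pair, best_tau = None, -np.inf
for a, b in itertools.combinations(risks, 2):
    tau, _ = stats.kendalltau(risks[a], risks[b])
    if tau > best_tau:
        best_pair, best_tau = (a, b), tau

print("aggregate first:", best_pair, "with Kendall's tau =", round(best_tau, 3))
```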
Article
There are many types of bank risks and their basic characteristics vary greatly; therefore, how to effectively aggregate three or more risks is still a great challenge. This paper proposes a novel two-stage bank risk aggregation approach to solve this problem. The risks are first divided based on whether they have common factors or not, and then the top-down approach and bottom-up approach are reasonably combined to aggregate them. It is applied to aggregate the credit, market and operational risks of the Chinese banking industry. The comparison with other popular approaches shows that this approach leads to much larger diversification benefits.
... Such hierarchical constructions have been studied in the literature using certain collapsing functions. Examples include the hierarchical Kendall copula in [8] and the hierarchical aggregation models in [3] and [10]. ...
... Another approach to constructing rank-based versions of canonical correlation is given in [1]. In addition to the canonical correlation approach, hierarchical aggregation modeling techniques such as in [3] utilize the sum collapsing function. ...
Article
A framework for quantifying dependence between random vectors is introduced. Using the notion of a collapsing function, random vectors are summarized by single random variables, referred to as collapsed random variables. Measures of association computed from the collapsed random variables are then used to measure the dependence between random vectors. To this end, suitable collapsing functions are presented. Furthermore, the notion of a collapsed distribution function and collapsed copula are introduced and investigated for certain collapsing functions. This investigation yields a multivariate extension of the Kendall distribution and its corresponding Kendall copula for which some properties and examples are provided. In addition, non-parametric estimators for the collapsed measures of association are provided along with their corresponding asymptotic properties. Finally, data applications to bioinformatics and finance are presented along with a general graphical assessment of independence between groups of random variables.
... Hierarchical risk aggregation models fit the partial independent substructures that we considered under Assumption (I p ). As an illustration we consider the hierarchical aggregation model described in Section 5 of Arbenz et al. (2012). This model is a subset of the hierarchical aggregation structure given in Section SCR.1.1. ...
... In this framework we deal with a risk portfolio consisting of d = 33 random variables which we assume to be homogeneous with common marginal distribution LogN(1,1). The assumption of identical LogNormal distributions is ... [Figure 3: Reduction of the aggregation structure used in the example in Section 5 of Arbenz et al. (2012) to a risk portfolio with partial independent substructures. Non-proportional reinsurance is denoted with NonProp Re and MAT is an abbreviation of Marine, aviation and transport.] ...
Article
Full-text available
We derive lower and upper bounds for the Value-at-Risk of a portfolio of losses when the marginal distributions are known and independence among (some) subgroups of the marginal components is assumed. We provide several actuarial examples showing that the newly proposed bounds strongly improve those available in the literature that are based on the sole knowledge of the marginal distributions. When the variance of the joint portfolio loss is small enough, further improvements can be obtained.
... Hierarchical risk aggregation models fit the partial independent substructures that we considered under Assumption (I p ). As an illustration we consider the hierarchical aggregation model described in Section 5 of Arbenz et al. (2012). This model is a subset of the hierarchical aggregation structure given in Section SCR.1.1. ...
... While the lower bound VaR_α is left essentially unchanged, the partial independence assumption (I_p) allows for a reduction of VaR_α of 22.6%. [Figure 3: Reduction of the aggregation structure used in the example in Section 5 of Arbenz et al. (2012) to a risk portfolio with partial independent substructures. Non-proportional reinsurance is denoted with NonProp Re and MAT is an abbreviation of Marine, aviation and transport.] ...
Article
Full-text available
We derive lower and upper bounds for the Value-at-Risk of a portfolio of losses when the marginal distributions are known and independence among (some) subgroups of the marginal components is assumed. We provide several actuarial examples showing that the newly proposed bounds strongly improve those available in the literature that are based on the sole knowledge of the marginal distributions. When the variance of the joint portfolio loss is small enough, further improvements can be obtained.
... There is a typical case that is of particular interest to us: internal modeling in non-life (re)insurance. See, e.g., [2,24] for details on this case and on why a convolutional structure makes sense. However, such datasets are usually private. ...
Preprint
Full-text available
Multivariate generalized Gamma convolutions are distributions defined by a convolutional semi-parametric structure. Their flexible dependence structures, the marginal possibilities and their useful convolutional expression make them appealing to the practitioner. However, fitting such distributions when the dimension gets high is a challenge. We propose stochastic estimation procedures based on the approximation of a Laguerre integrated square error via (shifted) cumulants approximation, evaluated on random projections of the dataset. Through the analysis of our loss via tools from Grassmannian cubatures, sparse optimization on measures and Wasserstein gradient flows, we show the convergence of the stochastic gradient descent to a proper estimator of the high dimensional distribution. We propose several examples on both low and high-dimensional settings.
... The complexity of quantitative risk models arises from the potential highdimension and stochastic dependence of risk factors (e.g. Denuit et al., 2005;Arbenz et al., 2012), as well as the non-linearity of the aggregation function (e.g. Hong, 2009;Tsanakas and Millossovich, 2016), which may itself be numerically demanding in its evaluation at particular simulated scenarios (Risk and Ludkovski, 2016;Floryszczak et al., 2016). ...
Article
We introduce an approach to sensitivity analysis of quantitative risk models, for the purpose of identifying the most influential inputs. The proposed approach relies on a change of measure derived by minimising the χ2-divergence, subject to a constraint (‘stress’) on the expectation of a chosen random variable. We obtain an explicit solution of this optimisation problem in a finite space, consistent with the use of simulation models in risk management. Subsequently, we introduce metrics that allow for a coherent assessment of reverse (i.e. stressing the output and monitoring inputs) and forward (i.e. stressing the inputs and monitoring the output) sensitivities. The proposed approach is easily applicable in practice, as it only requires a single set of simulated input/output scenarios. This is demonstrated by application on a simple insurance portfolio. Furthermore, via a simulation study, we compare the sampling performance of sensitivity metrics based on the χ2- and the Kullback-Leibler divergence, indicating that the former can be evaluated with lower sampling error.
... Consider the situation of a model user or reviewer, who has only partial access to the model specifications. It is typical in risk management applications for models to be high dimensional, with calculation of the model's output distribution proceeding by Monte Carlo simulation (Arbenz et al., 2012;Choe et al., 2018;Risk and Ludkovski, 2018). A model user will often be supplied with a set of simulated scenarios from variables of interest (model inputs and outputs), without easy access to either (a) the distributional assumptions of inputs (which may themselves be outputs from sub-models) or (b) the model function mapping inputs to outputs (which may be highly non-linear and computationally expensive to evaluate). ...
Article
Full-text available
In risk analysis, sensitivity measures quantify the extent to which the probability distribution of a model output is affected by changes (stresses) in individual random input factors. For input factors that are statistically dependent, we argue that a stress on one input should also precipitate stresses in other input factors. We introduce a novel sensitivity measure, termed cascade sensitivity, defined as a derivative of a risk measure applied on the output, in the direction of an input factor. The derivative is taken after suitably transforming the random vector of inputs, thus explicitly capturing the direct impact of the stressed input factor, as well as indirect effects via other inputs. Furthermore, alternative representations of the cascade sensitivity measure are derived, allowing us to address practical issues, such as incomplete specification of the model and high computational costs. The applicability of the methodology is illustrated through the analysis of a commercially used insurance risk model.
... To reflect the diversification benefit, aggregate capital is often set to be less than the sum of standalone capitals. The most well-known example is the variance-covariance approach, which has been further studied in the literature; see, for example, Dhaene et al. (2005), Pfeifer and Strassburger (2008), Filipović (2009), and Arbenz et al. (2012). There are several other approaches to risk aggregation in the literature, such as modular approaches in Perli and Nayda (2004) and Bølviken and Guillen (2017), scenario aggregation approaches, model uncertainty approaches in Embrechts et al. (2013), Bernard et al. (2014), Sarabia et al. (2016), and Di Lascio et al. (2018), as well as moment-based approaches in Cossette et al. (2016), Miles et al. (2020), and Furman, Hackmann, and Kuznetsov (2020). ...
Article
Full-text available
Risk aggregation and capital allocation are of paramount importance in business, as they play critical roles in pricing, risk management, project financing, performance management, regulatory supervision, etc. The state-of-the-art practice often includes two steps: (i) determine standalone capital requirements for individual business lines and aggregate them at a corporate level; and (ii) allocate the total capital back to individual lines of business or at more granular levels. There are three pitfalls with such a practice, namely, lack of consistency, negligence of cost of capital, and disentanglement of allocated capitals from standalone capitals. In this paper, we introduce a holistic approach that aims to strike a balance of optimality by taking into account competing interests of various stakeholders and conflicting priorities in a corporate hierarchy. While unconventional in its objective, the new approach results in an allocation of diversification benefit, which conforms to the diversification strategy of many risk management frameworks including regulatory capital and economic capital. The holistic capital setting and allocation principle provides a remedy to aforementioned problems with the existing two-step industry practice.
... To reflect the diversification benefit, aggregate capital is often set to be less than the sum of standalone capitals. The most well-known example is the variance-covariance approach, which has been further studied in the literature; see, for example, Dhaene et al. (2005), Pfeifer & Strassburger (2008), Filipović (2009), and Arbenz et al. (2012). There are several other approaches to risk aggregation in the literature, such as modular approaches in Perli & Nayda (2004) and Bølviken & Guillen (2017), scenario aggregation approaches, model uncertainty approaches in Embrechts et al. (2013), Bernard et al. (2014), Sarabia et al. (2016), and Di Lascio et al. (2018), as well as moment-based approaches in Cossette et al. (2016), Miles et al. (2020), and Furman, Hackmann, & Kuznetsov (2020). ...
Preprint
Full-text available
Risk aggregation and capital allocation are of paramount importance in business, as they play critical roles in pricing, risk management, project financing, performance management, regulatory supervision, etc. The state-of-the-art practice often includes two steps: (i) determine standalone capital requirements for individual business lines and aggregate them at a corporate level; and (ii) allocate the total capital back to individual lines of business or at more granular levels. There are three pitfalls with such a practice, namely, lack of consistency, negligence of cost of capital, and disentanglement of allocated capitals from standalone capitals. In this paper, we introduce a holistic approach that aims to strike a balance between the competing interests of various stakeholders and conflicting priorities in a corporate hierarchy. In spite of the unconventional strategy, the new approach leads to the allocation of diversification benefits, which is common in many risk capital frameworks including regulatory capital and economic capital. The resulting "all-in-one" capital setting and allocation principle provides a remedy to many problems with the existing two-step practice in the financial industry.
... In particular, Ben Taieb, Taylor, and Hyndman (2017) construct a coherent probabilistic forecast in a bottom-up fashion where the dependency between nodes at each level is modelled by reordering quantile forecasts as suggested by Arbenz, Hummel, and Mainik (2012). The method we propose is distinct from Ben Taieb, Taylor, and Hyndman (2017) in two ways. ...
Article
Full-text available
New methods are proposed for adjusting probabilistic forecasts to ensure coherence with the aggregation constraints inherent in temporal hierarchies. The different approaches nested within this framework include methods that exploit information at all levels of the hierarchy as well as a novel method based on cross-validation. The methods are evaluated using real data from two wind farms in Crete and electric load in Boston. For these applications, optimal decisions related to grid operations and bidding strategies are based on coherent probabilistic forecasts of energy power. Empirical evidence is also presented showing that probabilistic forecast reconciliation improves the accuracy of the probabilistic forecasts.
... A copula is a flexible way to express joint probability distributions and the consequent dependence structure between random variables. For non-normal distributions, the copula approach is of great use (Aas et al. 2009; Arbenz, Hummel, and Mainik 2012; Cote and Genest 2015; Lee 2018), particularly to quantify tail dependence. For instance, actuaries in the insurance sector apply copulas to model the joint distribution between losses because accidents and events in life are rarely independent. ...
Article
This study explores the post-Cold War era by investigating geopolitical risks (GPRs) from the Middle East to the Korean Peninsula. Geopolitics is a fleeting reality and is a matter of a few top decision makers while ordinary people catch a glimpse of it by the press. Due to the relative inaccessibility of key information, geopolitics is hard to study even if it is a crucial element to shape our era. To fill the gap, we adopt a copula approach to surmise a joint probability distribution between the GPR in the world and several countries. This method could capture tail dependence. The highest upper tail dependence with the world’s GPR has been that of Israel; as one moves from the Cold War to the post-Cold War period, the increasing cases of upper tail dependence are China, Korea, Russia, and Ukraine while decreasing cases are Israel, Saudi Arabia, and Turkey. It implies that the world’s flashpoints might have been shifting from the Middle East to Asia as our eras have gone through the Cold War and the post-Cold War periods. Seemingly self-centered Make America Great Again could be Make the World Great Again. The best is yet to come.
... The top-level PDF is a joint probability distribution of two bottom-level PDFs. Now, given forecasted PDFs for the bottom level, the top-level PDF can be generated using copulas [19]-[21]. A copula is a multivariate probability distribution function which is used to describe the dependence between two variables and to form the joint probability function. ...
... The first approach to tackling hierarchical forecasting in the probabilistic setting is the paper of Ben Taieb, Taylor, and Hyndman (2017). After carrying out reconciliation on the mean, they construct a coherent probabilistic forecast in a bottom-up fashion where the dependency between nodes at each level is modelled by reordering quantile forecasts as suggested by Arbenz, Hummel, and Mainik (2012). The method we propose is distinct from Ben Taieb, Taylor, and Hyndman (2017) in two ways. ...
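The excerpts above obtain a coherent top-level forecast by joining bottom-level forecast distributions through a copula, or equivalently by reordering forecast samples as in Arbenz, Hummel, and Mainik (2012). The sketch below pairs bottom-level quantile forecasts according to the ranks of past joint observations and sums them; the historical dependence and the forecast distributions are placeholders.

```python
# Sketch of a bottom-up coherent probabilistic forecast: bottom-level quantile
# forecasts are paired according to the ranks of past joint observations (a
# reordering in the spirit of Arbenz, Hummel, and Mainik, 2012) and summed to a
# top-level sample. Historical data and forecast distributions are placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
m = 500                                        # number of forecast draws / quantiles

# Historical joint observations of the two bottom-level series.
hist = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.7], [0.7, 1.0]], size=m)

# Independent probabilistic forecasts for each bottom-level node,
# stored as sorted samples (equally spaced quantiles).
f1 = np.sort(rng.gamma(5.0, 2.0, m))
f2 = np.sort(rng.gamma(3.0, 2.5, m))

# Pair the forecast quantiles according to the historical ranks and sum.
r1 = stats.rankdata(hist[:, 0], method="ordinal").astype(int) - 1
r2 = stats.rankdata(hist[:, 1], method="ordinal").astype(int) - 1
top = f1[r1] + f2[r2]                          # coherent top-level forecast sample

print("top-level 90% prediction interval:",
      np.quantile(top, 0.05), "to", np.quantile(top, 0.95))
```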
Preprint
Full-text available
New methods are proposed for adjusting probabilistic forecasts to ensure coherence with the aggregation constraints inherent in temporal hierarchies. The different approaches nested within this framework include methods that exploit information at all levels of the hierarchy as well as a novel method based on cross-validation. The methods are evaluated using real data from two wind farms in Crete, an application where it is imperative for optimal decisions related to grid operations and bidding strategies to be based on coherent probabilistic forecasts of wind power. Empirical evidence is also presented showing that probabilistic forecast reconciliation improves the accuracy of both point forecasts and probabilistic forecasts.
... risk measures like Value-at-Risk (VaR) have recently been comprehensively investigated by Embrechts, Puccetti and Rüschendorf (2013) (for further references, see e.g. McNeil et al. (2015), Chapter 8.4, Arbenz et al. (2012) or Mainik (2015)). In practical applications, such extremely unfavourable situations are typically achieved by the so-called Rearrangement Algorithm on Monte Carlo simulation output data. ...
Article
Full-text available
In this paper we discuss a natural extension of infinite discrete partition-of-unity copulas to continuous partition-of-unity copulas, which were recently introduced in the literature, with possible applications in risk management and other fields. We present a general simple algorithm to generate such copulas on the basis of the empirical copula from high-dimensional data sets. In particular, our constructions also allow for positive tail dependence which sometimes is a desirable property of data-driven copula modelling, in particular for internal models under Solvency II.
... Under the Solvency II project, Arbenz et al. (2012) provide a rigorous mathematical foundation for the risk aggregation method used to estimate solvency capital requirements. They suggest a hierarchical risk aggregation approach, which is a flexible method in high dimensions. ...
... Although it can be easily obtained for independent risks, this assumption is in most cases too restrictive, and it is thus crucial to specify more general models that allow for dependence between different risks. In the recent statistical and actuarial literature, several results about risk aggregation under dependence have been obtained, which deploy different copula structures (see, e.g., Arbenz et al. (2012), Coqueret (2014), Gijbels and Herrmann (2014)). Cossette et al. (2013) consider risk aggregation and capital allocation problems for a portfolio of dependent risks, modeling the multivariate distribution with the Farlie-Gumbel-Morgenstern (FGM) copula and mixed Erlang distribution marginals. ...
Article
The distribution of the sum of dependent risks is a crucial aspect in actuarial sciences, risk management and in many branches of applied probability. In this paper, we obtain analytic expressions for the probability density function (pdf) and the cumulative distribution function (cdf) of aggregated risks, modeled according to a mixture of exponential distributions. We first review the properties of the multivariate mixture of exponential distributions, to then obtain the analytical formulation for the pdf and the cdf of the aggregated distribution. We study in detail some specific families with Pareto (Sarabia et al., 2016), Gamma, Weibull and inverse Gaussian mixture of exponentials (Whitmore and Lee, 1991) claims. We also discuss briefly the computation of risk measures, formulas for the ruin probability (Albrecher et al., 2011) and the collective risk model. An extension of the basic model based on mixtures of gamma distributions is proposed, which is one of the suggested directions for future research.
Article
Full-text available
This study explores the evolving probability distribution of two closely monitored macroeconomic phenomena, i.e., the employment growth rate and the inflation rate, in the U.S. economy from 1971 to 2023 by constructing copulas. The employment growth rate rather than the unemployment rate was chosen to overcome the gray area of disappointed unemployment. We find it hard to perceive the economy where we live in an objective way, and we often try to figure it out or to forecast like narrow-sighted little men touching a colossal elephant. Copulas for the U.S. employment growth rate and inflation rate for each decade and each crisis help us to visualize the macroeconomic shapes, although a copula does not show a one-to-one correspondence with theoretical relationships. A copula shows a history and allows us to compare different eras in a succinct way by maximum likelihood based on realized data. In the end, final data points contain countless stories and circumstances, and we sometimes need to see them at one glance to capture essential features. This simplification allows us to see concordant (discordant) relationships and tail dependence revealing the intensity of a tendency. During the tapering period before the Covid-19 pandemic crisis (Nov. 2014 ~ Feb. 2020) in the U.S. economy, we find a high likelihood of a high employment growth rate and a low inflation rate. It gives us a promising sign that dismal macroeconomic states are not necessarily inevitable.
Chapter
This chapter proposes a novel two-stage bank risk aggregation approach to aggregate three main bank risks (credit, market and operational risks). Compared with previous risk aggregation approaches, the proposed approach can aggregate multiple bank risks more accurately. Specifically, credit and market risks that have common risk factors are aggregated in the first stage by collecting risk data from financial statements. The aggregate risk obtained in the first stage and the operational risk with no common risk factors are aggregated in the second stage to arrive at the total risk. The data of credit and market risks are collected from financial statements. The data of operational risk are collected from an external loss database. Thus, the proposed two-stage bank risk aggregation approach is based on data from financial statements and the external loss database. The proposed approach is empirically compared with three commonly used risk aggregation approaches by applying them to the Chinese banking system to aggregate credit, market and operational risks.
Article
We propose a stochastic model allowing property and casualty insurers with multiple business lines to measure their liabilities for incurred claims risk and calculate associated capital requirements. Our model includes many desirable features which enable reproducing empirical properties of loss ratio dynamics. For instance, our model integrates a double generalized linear model relying on accident semester and development lag effects to represent both the mean and dispersion of loss ratio distributions, an autocorrelation structure between loss ratios of the various development lags, and a copula-based risk aggregation model driving the dependence across the various business lines. Our work is the first in the literature to combine all such advantageous features within a loss triangle model. The model allows for a joint simulation of loss triangles and the quantification of the overall portfolio risk through risk measures. Consequently, a diversification benefit associated with the economic capital requirements can be measured, in accordance with IFRS 17 standards which allow for the recognition of such benefit. The allocation of capital across business lines based on the Euler allocation principle is then illustrated. The implementation of our model is performed by estimating its parameters based on car insurance data obtained from the General Insurance Statistical Agency (GISA), and by conducting numerical simulations whose results are then presented.
Article
To achieve ambitious international climate goals, an increase of energy efficiency investments is necessary and, thus, a growing market potential arises. Concomitantly, the relevance of managing the risk of financing and insuring energy efficiency measures increases continuously. Energy Efficiency Insurances encourage investors by guaranteeing a predefined energy efficiency performance. However, literature on quantitative analysis of pricing and diversification effects of such novel insurance solutions is scarce. This paper provides a first approach for the analysis of diversification potential on three levels: collective risk diversification, cross product line diversification, and financial hedging. Based on an extensive real-world data set for German residential buildings, the analysis reveals that underwriting different Energy Efficiency Insurance types and constructing Markowitz Minimum Variance Portfolios halves overall risk in terms of standard deviation. We evince that Energy Efficiency Insurances can diversify property insurance portfolios and reduce regulatory capital for insurers under Solvency II constraints. Moreover, we show that Energy Efficiency Insurances potentially supersede financial market instruments such as weather derivatives in diversifying property insurance portfolios. In summary, these three levels of diversification effects constitute an additional benefit for the introduction of Energy Efficiency Insurances and may positively impact their market development.
Article
Decisions regarding the supply of electricity across a power grid must take into consideration the inherent uncertainty in demand. Optimal decision-making requires probabilistic forecasts for demand in a hierarchy with various levels of aggregation, such as substations, cities and regions. The forecasts should be coherent in the sense that the forecast of the aggregated series should equal the sum of the forecasts of the corresponding disaggregated series. Coherency is essential, since the allocation of electricity at one level of the hierarchy relies on the appropriate amount being provided from the previous level. We introduce a new probabilistic forecasting method for a large hierarchy based on UK residential smart meter data. We find our method provides coherent and accurate probabilistic forecasts, as a result of an effective forecast combination. Furthermore, by avoiding distributional assumptions, we find that our method captures the variety of distributions in the smart meter hierarchy. Finally, the results confirm that, to ensure coherency in our large-scale hierarchy, it is sufficient to model a set of lower-dimension dependencies, rather than modeling the entire joint distribution of all series in the hierarchy. In achieving coherent and accurate hierarchical probabilistic forecasts, this work contributes to improved decision-making for smart grids.
Article
Straightforward methods to evaluate risks arising from several sources are especially difficult to apply when the risk components are dependent, and even more so if that dependence is strong in the tails. We give an explicit analytical expression for the probability distribution of the sum of non-negative losses that are tail-dependent. Our model allows dependence in the extremes of the marginal beta distributions. The proposed model is flexible in the choice of the parameters in the marginal distribution. Estimation using the method of moments is possible, and the calculation of risk measures is easily done with a Monte Carlo approach. An illustration on data for insurance losses is presented.
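The abstract notes that risk measures for the aggregated loss are easily computed with a Monte Carlo approach. A minimal sketch for Value-at-Risk and expected shortfall from simulated aggregate losses follows; the simulated losses are a placeholder rather than the paper's tail-dependent beta model.

```python
# Minimal sketch of Monte Carlo risk measures for an aggregate loss: empirical
# Value-at-Risk and expected shortfall. The simulated aggregate is a placeholder,
# not the tail-dependent beta model of the paper.
import numpy as np

rng = np.random.default_rng(6)
losses = rng.beta(2.0, 5.0, 100_000) + rng.beta(2.0, 8.0, 100_000)  # placeholder aggregate

alpha = 0.99
var = np.quantile(losses, alpha)                # Value-at-Risk at level alpha
es = losses[losses > var].mean()                # expected shortfall (mean of tail losses)
print(f"VaR_{alpha:.2f} = {var:.4f}, ES_{alpha:.2f} = {es:.4f}")
```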
Conference Paper
Full-text available
Purpose – the Solvency II framework regulates how much capital European Union insurance companies must hold. The amount of necessary capital can be calculated using a standard formula or an internal model. On the basis of a review of other authors' empirical research, the present paper aims at identifying factors that influence the necessary capital and at proposing areas of improvement for the methodology of an internal capital model. Research methodology – the authors have used an extended literature review. Analytical and comparative methods have been used for the analysis of the Baltic non-life insurance market. Findings – the Baltic market does not use an internal model even for a major risk – premium and reserve risk. A review of the current literature shows that the main weakness of the standard formula is risk aggregation. Research limitations – the identified factors apply to non-life insurance companies under the Solvency II framework, with a focus on reserve risk. Practical implications – factors are identified that should be implemented in the internal model methodology. The paper will help avoid using internal models only as a modern risk management tool and improve risk profile accuracy. Originality/Value – improvements to the internal model methodology are proposed based on a literature review. The authors have identified the main directions, issues and improvement possibilities for reaching modern risk management.
Article
The problem of establishing reliable estimates or bounds for the (T)VaR of a joint risk portfolio is a relevant subject in connection with the computation of total economic capital in the Basel regulatory framework for the finance sector as well as with the Solvency regulations for the insurance sector. In the computation of total economic capital, a financial institution faces a considerable amount of model uncertainty related to the estimation of the interdependence amongst the marginal risks. In this paper, we propose to apply a clustering procedure in order to partition a risk portfolio into independent subgroups of positively dependent risks. Based on available data, the portfolio partition so obtained can be statistically validated and allows for a reduction of capital and the corresponding model uncertainty. We illustrate the proposed methodology in a simulation study and two case studies considering an Operational and a Market Risk portfolio. A rule of thumb stems from the various examples proposed: in a mathematical model where the risk portfolio is split into independent subsets with comonotonic dependence within, the smallest VaR-based capital estimate (at the high regulatory probability levels typically used) is produced by assuming that the infinite-mean risks are comonotonic and the finite-mean risks are independent. The largest VaR estimate is instead generated by obtaining the maximum number of independent infinite-mean sums.
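To make the partition idea concrete, here is a small Python sketch that computes an aggregate VaR when the portfolio is split into independent subgroups with comonotonic dependence within each group. All distributions and parameters are illustrative, not the paper's case studies.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n = 200_000

# Hypothetical partition of six lognormal risks into two independent groups;
# within each group the risks are comonotonic (driven by one common uniform).
groups = [
    [dict(mu=0.0, sigma=1.0), dict(mu=0.2, sigma=0.8), dict(mu=-0.3, sigma=1.2)],
    [dict(mu=0.5, sigma=0.6), dict(mu=0.1, sigma=0.9), dict(mu=0.0, sigma=0.7)],
]

total = np.zeros(n)
for group in groups:
    u = rng.uniform(size=n)          # one uniform per group -> comonotonicity within
    for p in group:
        total += stats.lognorm.ppf(u, s=p["sigma"], scale=np.exp(p["mu"]))

print("VaR_99.5% of the aggregate:", round(np.quantile(total, 0.995), 2))
```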
Article
The purpose of this paper is to provide an extension to recent contributions in the field of quantitative risk management by modeling non-life insurance risks in a multivariate framework. This contribution examines the impact of explicit dependence modeling among non-life insurance losses on the capital requirement. First, we focus on modeling the dependence structure using copulas when the losses from the different business lines are dependent in some sense. Second, we concentrate on Value-at-Risk and Tail-Value-at-Risk as popular risk measures, combined with a D-Vine copula model for the total risk capital estimates. For copula calibration, we use claims data from four lines of business of a Tunisian insurance company. Finally, we conduct a comparative study of different methods under the two hypotheses of dependence and independence. Using Monte Carlo simulation, our results reveal the advantages of the D-Vine copula in modeling inhomogeneous dependence structures, due to its flexibility of use in a simulation context.
Article
This paper deals with the problem of risk measurement under mixed operation. For this purpose, we divide the basic risks into several groups based on the actual situation. First, we calculate the bounds for the subsum of every group of basic risks; then we obtain the bounds for the total sum of all the basic risks. Different copulas are used to describe the dependency relationships between the basic risks within every group and between all of the subsums. The bounds for the aggregated risk under mixed operation and the algorithm for numerical simulation are given in this paper. In addition, the convergence of the algorithm is proved and some numerical simulations are presented.
Article
A general multivariate distributional approach, with conditional independence given aggregation variables, is presented to combine group-based submodels when variables are naturally divided into several non-overlapping groups. When the distributions are all multivariate Gaussian, the dependence among different groups is parsimonious based on conditional independence given linear combinations of variables in each group. For the case of multivariate t distributions in each group, a grouped t distribution is obtained. The approach can be extended so that the copula for each group is based on a skew-t distribution, and an application of this is given to financial returns of stocks in several different sectors. Another example of the modeling approach is given with variables separated into groups based on their units of measurements.
Article
This paper proposes a method for ranking a set of alternatives evaluated using multiple and conflicting criteria that are organised in a hierarchical structure. The hierarchy permits the decision maker to identify different intermediate sub-problems of interest. In that way, the analysis of the criteria is done according to the subsets defined in the hierarchy, following the precedence relations in a bottom-up approach. To deal with this type of hierarchical structure, an extension of the ELECTRE-III method, called ELECTRE-III-H, is presented. As with all methods of the ELECTRE family, this one also relies on building a binary outranking relation on the set of alternatives on the basis of concordance and discordance tests. The exploitation of this outranking relation generates a partial pre-order, establishing an indifference, preference or incomparability relation for each pair of alternatives. A bottom-up application of the classical ELECTRE-III method to sub-problems involving subsets of criteria at the intermediate levels of the hierarchy is infeasible because the evaluations of alternatives by criteria aggregating some sub-criteria have the form of partial pre-orders, not complete pre-orders. Thus, we propose a new procedure for building outranking relations from a set of partial pre-orders, as well as a mechanism for propagating these pre-orders upwards in the hierarchy. With this method, the decision maker is able to analyse the problem in a decomposed way and gain information from the outputs obtained at intermediate levels. In addition, ELECTRE-III-H gives the decision maker the possibility to define a local preference model at each node of the hierarchy, according to his objectives and the sub-problem characteristics. We show an application of this method to rank websites of tourist destination brands evaluated using a hierarchy with 4 levels.
Article
This paper applies distinct copula model specifications with time-invariant and time-varying dependence structures to investigate whether American depository receipts (ADRs) co-move more with the industry indexes of the home country or of the U.S. The evidence shows that ADR returns are more significantly linked with the industry returns of the parent country than with those of the U.S., supporting the hypothesis that ADR industry co-movement is regionalized. Next, using the co-movement measures as dependent variables, we explore whether ADR factors influence the industry co-movement. ADR fundamental and financial factors are the key variables that influence ADR industry co-movements with the U.S. and the home country. As for ADR ownership factors, only mutual fund and investment adviser holdings influence ADR industry co-movement with the home country.
Article
We propose a new model for the aggregation of risks that is very flexible and useful in high-dimensional problems. We propose a copula-based model that is both hierarchical and hybrid (HYC for short), because: (i) the dependence structure is modeled as a hierarchical copula, (ii) it unifies the idea of the clusterized homogeneous copula-based approach (CHC for short) and its limiting version (LHC for short) proposed in Bernardi and Romagnoli (2012, 2013). Based on this, we compute the loss function of a world-wide sovereign debt portfolio which accounts for a systemic dependence of all countries, in line with a global valuation of financial risks. Our approach enables us to take into account the non-exchangeable behavior of a sovereign debt portfolio clustered into several classes with homogeneous risk and to recover a possible risk hierarchy. A comparison between the HYC loss surface and those computed through a pure limiting approach, which is commonly used in high-dimensional problems, is presented, and the impact of the concentration and granularity errors is assessed. Finally, the impact of an enlargement of the dependence structure is discussed in the context of a geographical-area sub-portfolio analysis, which is relevant to determine the risk contributions of subgroups under the presence of a wider dependence structure. This argument is presented in relation to the evaluation of the insurance premium and the collateral related to the designed project of a euro-insurance-bond.
Article
A flexible approach for risk aggregation is considered. The model consists of a tree structure, bivariate copulas, and marginal distributions. The construction relies on a conditional independence assumption whose implications are studied. A procedure for selecting the tree structure is developed using hierarchical clustering techniques, along with a distance metric based on Kendall's tau. Estimation, simulation, and model validation are also discussed. The approach is illustrated using data from a Canadian property and casualty insurance company. The Canadian Journal of Statistics 43: 1–22; 2015 © 2014 Statistical Society of Canada
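The tree-selection step can be prototyped with standard hierarchical clustering on a Kendall's-tau-based distance. The Python sketch below uses toy data and the distance 1 - |tau|; it is only meant to illustrate the idea, and the paper's actual procedure may differ in its linkage and validation details.

```python
import numpy as np
from scipy.stats import kendalltau
from scipy.cluster.hierarchy import linkage
from scipy.spatial.distance import squareform

rng = np.random.default_rng(3)
# Toy data: four loss variables forming two strongly dependent pairs.
z = rng.standard_normal((500, 2))
x = np.column_stack([z[:, 0], z[:, 0] + 0.3 * rng.standard_normal(500),
                     z[:, 1], z[:, 1] + 0.3 * rng.standard_normal(500)])

d = x.shape[1]
dist = np.zeros((d, d))
for i in range(d):
    for j in range(i + 1, d):
        tau, _ = kendalltau(x[:, i], x[:, j])
        dist[i, j] = dist[j, i] = 1.0 - abs(tau)   # strong dependence -> small distance

# Agglomerative clustering on the tau-based distance; the merge order suggests
# which variables to aggregate first in the tree.
tree = linkage(squareform(dist), method="average")
print(tree)   # each row: merged clusters, merge distance, cluster size
```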
Article
Full-text available
This paper studies convergence properties of multivariate distributions generated from an empirical copula and empirical margins. Such problems arise from Latin Hypercube Sampling with dependence, also known as the Iman–Conover method. The question addressed here is the convergence of the component sum, which is relevant to risk aggregation in insurance and finance. The central result is a sufficient criterion guaranteeing that the estimated sum distribution function is strongly uniformly consistent with convergence rate O(n^{-1/2}) in probability. The underlying mathematical problem involves convergence of empirical processes on set classes that are not Vapnik–Červonenkis, which goes beyond available results on empirical copulas. The convergence results hold for all copulas with bounded densities. Examples with unbounded densities include bivariate Clayton and Gauss copulas. The results of this paper are not specific to the component sum and hold also for any other componentwise non-decreasing aggregation function.
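The construction studied here, an empirical copula combined with empirical margins, amounts to reordering independent marginal samples according to the ranks of a dependence sample and then summing. A minimal Python sketch with made-up margins (a Gaussian sample plays the role of the observed dependence data):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 10_000

# Dependent "driver" sample whose ranks define the empirical copula
# (here Gaussian dependence; in practice this would be observed data).
z = rng.multivariate_normal([0, 0], [[1, 0.7], [0.7, 1]], size=n)
ranks = z.argsort(axis=0).argsort(axis=0)          # componentwise ranks 0..n-1

# Independent marginal samples of arbitrary kind (illustrative choices).
m1 = np.sort(rng.lognormal(mean=0.0, sigma=1.0, size=n))
m2 = np.sort((1 - rng.uniform(size=n)) ** (-1 / 3) - 1)   # Pareto(3)-type losses

# Reordering couples the margins through the empirical copula of z.
s = m1[ranks[:, 0]] + m2[ranks[:, 1]]

print("estimated quantiles of the sum (50/90/99%):",
      np.round(np.quantile(s, [0.5, 0.9, 0.99]), 2))
```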
Article
Full-text available
A method for inducing a desired rank correlation matrix on a multivariate input random variable for use in a simulation study is introduced in this paper. This method is simple to use, is distribution free, preserves the exact form of the marginal distributions on the input variables, and may be used with any type of sampling scheme for which correlation of input variables is a meaningful concept. A Monte Carlo study provides an estimate of the bias and variability associated with the method. Input variables used in a model for study of geologic disposal of radioactive waste provide an example of the usefulness of this procedure. A textbook example shows how the output may be affected by the method presented in this paper.
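The reordering idea behind the method can be sketched as follows: generate reference scores with the target correlation, then rearrange each marginal sample so that its ranks match the scores' ranks. The Python sketch below is a simplification; it uses random normal scores instead of the paper's fixed van der Waerden score design and omits the correction of the scores' own sample correlation.

```python
import numpy as np
from scipy import stats

def induce_rank_correlation(samples, target_corr, rng):
    """Reorder the columns of `samples` (n x d, arbitrary margins) so that their
    rank correlation is close to `target_corr`; margins are preserved exactly."""
    n, d = samples.shape
    # Reference scores with (approximately) the desired correlation structure.
    scores = rng.standard_normal((n, d)) @ np.linalg.cholesky(target_corr).T
    ranks = scores.argsort(axis=0).argsort(axis=0)
    out = np.empty_like(samples)
    for j in range(d):
        out[:, j] = np.sort(samples[:, j])[ranks[:, j]]
    return out

rng = np.random.default_rng(5)
raw = np.column_stack([rng.exponential(2.0, 5000), rng.gamma(3.0, 1.0, 5000)])
target = np.array([[1.0, 0.8], [0.8, 1.0]])
dep = induce_rank_correlation(raw, target, rng)
rho, _ = stats.spearmanr(dep[:, 0], dep[:, 1])
print("achieved Spearman rho:", round(rho, 3))
```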
Book
Full-text available
The implementation of sound quantitative risk models is a vital concern for all financial institutions, and this trend has accelerated in recent years with regulatory processes such as Basel II. This book provides a comprehensive treatment of the theoretical concepts and modelling techniques of quantitative risk management and equips readers, whether financial risk analysts, actuaries, regulators, or students of quantitative finance, with practical tools to solve real-world problems. The authors cover methods for market, credit, and operational risk modelling; place standard industry approaches on a more formal footing; and describe recent developments that go beyond, and address the main deficiencies of, current practice. The book's methodology draws on diverse quantitative disciplines, from mathematical finance through statistics and econometrics to actuarial mathematics. Main concepts discussed include loss distributions, risk measures, and risk aggregation and allocation principles. A main theme is the need to satisfactorily address extreme outcomes and the dependence of key risk drivers. The techniques required derive from multivariate statistical analysis, financial time series modelling, copulas, and extreme value theory. A more technical chapter addresses credit derivatives. Based on courses taught to masters students and professionals, this book is a unique and fundamental reference that is set to become a standard in the field.
Article
Full-text available
One of the central issues in the Solvency II process will be an appropriate calculation of the Solvency Capital Requirement (SCR). This is the economic capital that an insurance company must hold in order to guarantee a one-year ruin probability of at most 0.5%. In the so-called standard formula, the overall SCR is calculated from individual SCRs in a particular way that imitates the calculation of the standard deviation for a sum of normally distributed risks (SCR aggregation formula). However, in order to cope with skewness in the individual risk distributions, this formula must be calibrated accordingly in order to maintain the prescribed level of confidence. In this paper, we want to show that the methods proposed and discussed thus far still show stability problems within the general setup.
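The SCR aggregation formula referred to above combines the individual SCRs as SCR_total = sqrt(SCR' Corr SCR), mimicking the standard deviation of a sum of normally distributed risks. A worked toy example in Python (SCR values and correlations are illustrative, not the calibrated regulatory figures):

```python
import numpy as np

# Illustrative individual SCRs for three risk modules and a hypothetical
# correlation matrix (values are not taken from the regulation).
scr = np.array([100.0, 60.0, 40.0])
corr = np.array([[1.00, 0.25, 0.25],
                 [0.25, 1.00, 0.50],
                 [0.25, 0.50, 1.00]])

# Standard-formula style aggregation: SCR_total = sqrt(scr' * Corr * scr).
scr_total = np.sqrt(scr @ corr @ scr)
print("aggregated SCR:", round(scr_total, 2))
print("diversification benefit:", round(scr.sum() - scr_total, 2))
```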
Article
Full-text available
This paper presents an algorithm for generating correlated vectors of random numbers. The user need not fully specify the joint distribution function; instead, the user "partially specifies" only the marginal distributions and the correlation matrix. The algorithm may be applied to any set of continuous, strictly increasing distribution functions; the marginal distributions need not all be of the same functional form. The correlation matrix is first checked for mathematical consistency (positive semi-definiteness), and adjusted if necessary. Then the correlated random vectors are generated using a combination of Cholesky decomposition and Gauss-Newton iteration. Applications are made to cost analysis, where correlations are often present between cost elements in a work breakdown structure.
Article
Full-text available
The benefits of diversifying risks are difficult to estimate quantitatively because of the uncertainties in the dependence structure between the risks. Also, the modelling of multidimensional dependencies is a non-trivial task. This paper focuses on one such technique for portfolio aggregation, namely the aggregation of risks within trees, where dependencies are set at each step of the aggregation with the help of some copulas. We define this procedure rigorously and then study extensively the Gaussian tree of quite arbitrary size and shape, where individual risks are normal and the Gaussian copula is used. We derive exact analytical results for the diversification benefit of the Gaussian tree as a function of its shape and of the dependency parameters. Such a "toy model" of an aggregation tree enables one to understand the basic phenomena at play while aggregating risks in this way. In particular, it is shown that, for a fixed number of individual risks, "thin" trees diversify better than "fat" trees. Related to this, it is shown that hierarchical trees have the natural tendency to lower the overall dependency with respect to the dependency parameter chosen at each step of the aggregation. We also show that these results hold in more general cases outside the Gaussian world, and apply notably to more realistic portfolios (lognormal trees). We believe that any insurer or reinsurer using such a tool should be aware of these systematic effects, and that this awareness should strongly call for designing trees that adequately fit the business. We finally address the issue of specifying the full joint distribution between the risks. We show that the hierarchical mechanism neither requires nor specifies the joint distribution, but that the latter can be determined exactly (in the Gaussian case) by adding conditional independence hypotheses between the risks and their sums.
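A hierarchical aggregation tree of this kind is easy to prototype by coupling the child samples with a copula at each node through reordering and then summing. The Python sketch below builds a small binary lognormal tree with Gaussian copulas (all parameters illustrative) and compares the aggregated VaR with the sum of stand-alone VaRs.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 100_000

def aggregate_pair(x, y, rho, rng):
    """Couple samples x and y with a Gaussian copula (correlation rho) by
    reordering both against a bivariate normal sample, then return the sum."""
    z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=len(x))
    rx = z[:, 0].argsort().argsort()
    ry = z[:, 1].argsort().argsort()
    return np.sort(x)[rx] + np.sort(y)[ry]

# Four lognormal leaf risks aggregated pairwise in a binary tree.
leaves = [rng.lognormal(0.0, s, n) for s in (1.0, 0.9, 1.1, 0.8)]
left = aggregate_pair(leaves[0], leaves[1], rho=0.5, rng=rng)
right = aggregate_pair(leaves[2], leaves[3], rho=0.5, rng=rng)
total = aggregate_pair(left, right, rho=0.3, rng=rng)

alpha = 0.995
standalone = sum(np.quantile(x, alpha) for x in leaves)
print("sum of stand-alone VaRs:", round(standalone, 1))
print("tree-aggregated VaR:    ", round(np.quantile(total, alpha), 1))
print("diversification benefit:", round(1 - np.quantile(total, alpha) / standalone, 3))
```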
Article
Full-text available
In the renewal risk model, several strong hypotheses may be found too restrictive to model accurately the complex evolution of the reserves of an insurance company. In the case where claim sizes are heavy-tailed, we relax the independence and stationarity assumptions and extend some asymptotic results on finite-time ruin probabilities, to take into account possible correlation crises like the one recently bred by the sub-prime crisis: claim amounts, in general assumed to be independent, may suddenly become strongly positively dependent. The impact of dependence and non-stationarity is analyzed and several concrete examples are given.
Chapter
Full-text available
Despite the fact that the Euler allocation principle has been adopted by many financial institutions for their internal capital allocation process, a comprehensive description of Euler allocation seems still to be missing. We try to fill this gap by presenting the theoretical background as well as practical aspects. In particular, we discuss how Euler risk contributions can be estimated for some important risk measures. We furthermore investigate the analysis of CDO tranche expected losses by means of Euler's theorem and suggest an approach to measure the impact of risk factors on non-linear portfolios.
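For Expected Shortfall, the Euler contribution of risk i can be estimated by Monte Carlo as E[X_i | S >= VaR_alpha(S)], and the contributions add up to the portfolio ES. A minimal Python sketch on a toy Gaussian portfolio (all figures illustrative):

```python
import numpy as np

rng = np.random.default_rng(7)
n, alpha = 500_000, 0.99

# Three correlated normal losses as a toy portfolio.
corr = np.array([[1.0, 0.4, 0.2], [0.4, 1.0, 0.3], [0.2, 0.3, 1.0]])
x = rng.multivariate_normal([10.0, 5.0, 8.0], 4.0 * corr, size=n)
s = x.sum(axis=1)

# Euler contributions to Expected Shortfall: E[X_i | S >= VaR_alpha(S)].
var_s = np.quantile(s, alpha)
tail = s >= var_s
contrib = x[tail].mean(axis=0)

print("ES of the total:", round(s[tail].mean(), 3))
print("Euler contributions:", np.round(contrib, 3), "sum:", round(contrib.sum(), 3))
```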
Article
In Monte Carlo simulation, Latin hypercube sampling (LHS) (McKay et al (1979)) is a well-known variance reduction technique for vectors of independent random variables. The method presented here, Latin hypercube sampling with dependence (LHSD), extends LHS to vectors of dependent random variables. The resulting estimator is shown to be consistent and asymptotically unbiased. For the bivariate case and under some conditions on the joint distribution, a central limit theorem together with a closed formula for the limit variance are derived. It is shown that for a class of estimators satisfying some monotonicity condition, the LHSD limit variance is never greater than the corresponding Monte Carlo limit variance. In some valuation examples of financial payoffs, when compared to standard Monte Carlo simulation, a variance reduction of factors up to 200 is achieved. We illustrate that LHSD is suited for problems with rare events and for high-dimensional problems, and that it may be combined with quasi-Monte Carlo methods.
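A minimal Python sketch of the LHSD idea: draw an ordinary copula sample, replace each coordinate by its stratified rank value so that every margin becomes a Latin hypercube sample while the ranks (and hence the dependence) are preserved, then apply the marginal quantile functions. Margins and parameters are illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
n, rho = 5_000, 0.7

# Step 1: ordinary copula sample (Gaussian copula here).
z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=n)
u = stats.norm.cdf(z)

# Step 2: Latin-hypercube adjustment -- replace each coordinate by its
# stratified rank value; the ranks, and hence the dependence, stay the same.
ranks = u.argsort(axis=0).argsort(axis=0)
u_lhsd = (ranks + 0.5) / n

# Step 3: apply marginal quantile functions and estimate a quantity of interest.
x = np.column_stack([stats.expon.ppf(u_lhsd[:, 0], scale=2.0),
                     stats.lognorm.ppf(u_lhsd[:, 1], s=1.0)])
print("estimated E[X1 + X2]:", round(x.sum(axis=1).mean(), 4))
```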
Article
Motivation. The CAS Research Working Party on Correlation and Dependencies Among All Risk Sources has been charged to "lay the theoretical and experimental foundation for quantifying variability when data is limited, estimating the nature and magnitude of dependence relationships, and generating aggregate distributions that integrate these disparate risk sources." Method. The Iman-Conover method represents a straightforward yet powerful approach to working with dependent random variables. We explain the theory behind the method and give a detailed step-by-step algorithm to implement it. We discuss various extensions to the method, and give detailed examples showing how it can be used to solve real-world actuarial problems. We also summarize pertinent facts from the theory of univariate and multivariate aggregate loss distributions, with a focus on the use of moment generating functions. Finally we explain how Vitale's Theorem provides a sound theoretical foundation for the Iman-Conover method.
Article
A prudent assessment of dependence is crucial in many stochastic models for insurance risks. Copulas have become popular to model such dependencies. However, estimation procedures for copulas often lead to large parameter uncertainty when observations are scarce. In this paper, we propose a Bayesian method which combines prior information (e.g. from regulators), observations and expert opinion in order to estimate copula parameters and determine the estimation uncertainty. The combination of different sources of information can significantly reduce the parameter uncertainty compared to the use of only one source. The model can also account for uncertainty in the marginal distributions. Furthermore, we describe the methodology for obtaining expert opinion and explain involved psychological effects and popular fallacies. We exemplify the approach in a case study.
Article
In the aftermath of the 2007-2008 financial crisis, there has been criticism of mathematics and the mathematical models used by the finance industry. We answer these criticisms through a discussion of some of the actuarial models used in the pricing of credit derivatives. As an example, we focus in particular on the Gaussian copula model and its drawbacks. To put this discussion into its proper context, we give a synopsis of the financial crisis and a brief introduction to some of the common credit derivatives and highlight the difficulties in valuing some of them. We also take a closer look at the risk management issues in part of the insurance industry that came to light during the financial crisis. As a backdrop to this, we recount the events that took place at American International Group during the financial crisis. Finally, through our paper we hope to bring to the attention of a broad actuarial readership some “lessons (to be) learned” or “events not to be forgotten”.
Article
In this paper we compare the current Solvency II standard approach and a genuine bottom-up approach to risk aggregation. This is understood to be essential for developing a deeper insight into the possible differences in the diversification assumptions between the standard approach and internal models.
Article
We describe a model that takes into account the tail dependence present in a large set of historical risk factor data using the modern concept of copulas. We extend the popular t-copula to obtain a new grouped t-copula which describes more accurately the dependence among risk factors of different classes. We explain how to estimate the parameters of the grouped t-copula and apply the method to a problem in credit risk management with a large number of risk factors. We measure the downside risk over one month for an internationally diversified credit portfolio and we observe that the new model gives different results to the t-copula and seems better able to capture the risk in a large set of risk factors.
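The grouped t-copula can be sampled by scaling a common Gaussian vector with group-specific chi-square factors driven by one shared uniform. A Python sketch with two hypothetical groups (degrees of freedom and correlations are made up):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
n = 50_000

# Two groups of risk factors with group-specific degrees of freedom.
groups = {0: dict(idx=[0, 1], nu=3.0), 1: dict(idx=[2, 3], nu=10.0)}
corr = np.array([[1.0, 0.6, 0.3, 0.3],
                 [0.6, 1.0, 0.3, 0.3],
                 [0.3, 0.3, 1.0, 0.5],
                 [0.3, 0.3, 0.5, 1.0]])

z = rng.multivariate_normal(np.zeros(4), corr, size=n)
u_mix = rng.uniform(size=n)                      # one common mixing uniform

u = np.empty_like(z)
for g in groups.values():
    # Group-specific scaling: W = chi2(nu) quantile of the common uniform, R = sqrt(nu/W).
    w = stats.chi2.ppf(u_mix, df=g["nu"])
    r = np.sqrt(g["nu"] / w)
    for j in g["idx"]:
        u[:, j] = stats.t.cdf(r * z[:, j], df=g["nu"])   # grouped t-copula sample

print("empirical correlation of the copula sample:\n", np.round(np.corrcoef(u.T), 2))
```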
Article
Insurance and reinsurance companies live and die by the diversification benefits, or lack thereof, in their risk portfolios. The new solvency regulations allow companies to include diversification in their computation of risk-based capital (RBC). The question is how to really evaluate those benefits. To compute the total risk of a portfolio, it is important to establish the rules for aggregating the various risks that compose it. This can only be done through modelling of their dependence. It is a well-known fact among traders in financial markets that "diversification works the worst when one needs it the most". In other words, in times of crisis the dependence between risks increases. Experience has shown that very large loss events almost always affect multiple lines of business simultaneously. September 11, 2001, is an example of such an event: the claims originated simultaneously from lines of business that are usually uncorrelated, such as property and life, at the same time that the assets of the company were depreciated due to the crisis on the stock markets. In this paper, we explore various methods of modelling dependence and their influence on diversification benefits. We show that the latter strongly depend on the chosen method and that rank correlation grossly overestimates diversification. This has consequences for the RBC of the whole portfolio, which is smaller than it should be when tail correlation is correctly accounted for. However, the problem remains to calibrate the dependence for extreme events, which are rare by definition. We analyze and propose possible ways out of this dilemma and come up with reasonable estimates.
Article
Using only bivariate copulas as building blocks, regular vine copulas constitute a flexible class of high-dimensional dependency models. However, the flexibility comes along with an exponentially increasing complexity in larger dimensions. In order to counteract this problem, we propose using statistical model selection techniques to either truncate or simplify a regular vine copula. As a special case, we consider the simplification of a canonical vine copula using a multivariate copula as previously treated by Heinen & Valdesogo (2009) and Valdesogo (2009). We validate the proposed approaches by extensive simulation studies and use them to investigate a 19-dimensional financial data set of Norwegian and international market variables. The Canadian Journal of Statistics 40: 68–85; 2012 © 2012 Statistical Society of Canada
Article
Efficient sampling algorithms for both Archimedean and nested Archimedean copulas are presented. First, efficient sampling algorithms for the nested Archimedean families of Ali-Mikhail-Haq, Frank, and Joe are introduced. Second, a general strategy how to build a nested Archimedean copula from a given Archimedean generator is presented. Sampling this copula involves sampling an exponentially tilted stable distribution. A fast rejection algorithm is developed for the more general class of tilted Archimedean generators. It is proven that this algorithm reduces the complexity of the standard rejection algorithm to logarithmic complexity. As an application it is shown that the fast rejection algorithm outperforms existing algorithms for sampling exponentially tilted stable distributions involved, e.g., in nested Clayton copulas. Third, with the additional help of randomization of generator parameters, explicit sampling algorithms for several nested Archimedean copulas based on different Archimedean families are found. Additional results include approximations and some dependence properties, such as Kendall's tau and tail dependence parameters. The presented ideas may also apply in the more general context of sampling distributions given by their Laplace-Stieltjes transforms.
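For the non-nested case that these algorithms build on, the classical Marshall-Olkin construction already gives a compact sampler: draw the frailty V from the distribution whose Laplace-Stieltjes transform is the generator, then set U_i = psi(E_i / V) with independent standard exponentials E_i. A Python sketch for the Clayton family follows; the nested case, which additionally samples inner frailties conditional on V, is omitted here.

```python
import numpy as np
from scipy.stats import kendalltau

def sample_clayton(n, d, theta, rng):
    """Marshall-Olkin sampling of a d-dimensional Clayton copula:
    V ~ Gamma(1/theta), E_i ~ Exp(1), U_i = psi(E_i / V) with psi(t) = (1+t)^(-1/theta)."""
    v = rng.gamma(shape=1.0 / theta, scale=1.0, size=(n, 1))
    e = rng.exponential(size=(n, d))
    return (1.0 + e / v) ** (-1.0 / theta)

rng = np.random.default_rng(10)
u = sample_clayton(n=20_000, d=3, theta=2.0, rng=rng)

# Kendall's tau for Clayton is theta / (theta + 2) = 0.5 here; check empirically.
print("empirical tau(1,2):", round(kendalltau(u[:, 0], u[:, 1])[0], 3))
```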
Article
Copula modeling has taken the world of finance and insurance, and well beyond, by storm. Why is this? In this article, I review the early start of this development, discuss some important current research, mainly from an applications point of view, and comment on potential future developments. An alternative title of the article would be "Demystifying the copula craze." The article also contains what I would like to call the copula must-reads. Copyright (c) The Journal of Risk and Insurance, 2009.
Article
We introduce a family of copulas which are locally piecewise uniform in the interior of the unit cube of any given dimension. Within that family, the simultaneous control of tail dependencies of all projections to faces of the cube is possible and we give an efficient sampling algorithm. The combination of these two properties may be appealing to risk modellers.
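A copula that is piecewise uniform on a grid can be sampled by first picking a grid cell according to a cell-mass matrix whose rows and columns each carry total mass 1/m (so both margins stay uniform), and then drawing a uniform point inside that cell. A two-dimensional Python sketch with an illustrative mass matrix that loads extra weight on the diagonal:

```python
import numpy as np

rng = np.random.default_rng(11)
m, a, n = 8, 0.6, 50_000

# Cell mass matrix of a piecewise-uniform (checkerboard) copula on an m x m grid:
# every row and column must carry total mass 1/m so that both margins are uniform.
mass = (a * np.eye(m) + (1 - a) * np.ones((m, m)) / m) / m
assert np.allclose(mass.sum(axis=0), 1 / m) and np.allclose(mass.sum(axis=1), 1 / m)

# Sample a cell according to its mass, then a uniform point inside the cell.
cells = rng.choice(m * m, size=n, p=mass.ravel())
i, j = np.divmod(cells, m)
u = np.column_stack([(i + rng.uniform(size=n)) / m,
                     (j + rng.uniform(size=n)) / m])

print("empirical correlation of the sample:", round(np.corrcoef(u.T)[0, 1], 3))
```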
Donnelly, C., Embrechts, P., 2010. The devil is in the tails: actuarial mathematics and the subprime mortgage crisis. ASTIN Bulletin 40 (1), 1–33.
BIS, 2010. Developments in modelling risk aggregation. Technical report, Basel Committee on Banking Supervision (Bank for International Settlements).
Brechmann, E.C., 2012. Hierarchical Kendall copulas: properties and inference. Technical University Munich. Preprint.
Brechmann, E.C., Czado, C., Aas, K., 2012. Truncated regular vines in high dimensions with application to financial data. Canadian Journal of Statistics 40 (1), 68–85.
Tasche, D., 1999. Risk contributions and performance measurement. Technische Universität München. Preprint.
Tasche, D., 2008. Capital allocation to business units and sub-portfolios: the Euler principle. In: Resti, A. (Ed.), Pillar II in the New Basel Accord: The Challenge of Economic Capital. Risk Books, London, pp. 423–453.
Daul, S., De Giorgi, E., Lindskog, F., McNeil, A., 2003. Using the grouped t-copula. Risk 16 (11), 73–76.
Bruneton, J.P., 2011. Copula-based hierarchical aggregation of correlated risks.
Nelsen, R., 2006. An Introduction to Copulas, second ed. Springer, New York.
Packham, N., Schmidt, W., 2010. Latin hypercube sampling with dependence and applications in finance. Journal of Computational Finance 13 (3), 81–111.
Mainik, G., 2011. Convergence of sum distributions in models based on empirical copulas and empirical margins. ETH Preprint, available through the author.
van der Vaart, A., Wellner, J., 2000. Weak Convergence and Empirical Processes: With Applications to Statistics. Springer Series in Statistics. Springer, New York.
CEIOPS, 2010. QIS5 Technical Specifications. Technical report, Committee of European Insurance and Occupational Pensions Supervisors.
Straßburger, D., Pfeifer, D., 2005. Dependence matters! Paper presented at the 36th International ASTIN-Kolloquium.
SCOR, 2008. From Principle-Based Risk Management to Solvency Requirements: Analytical Framework for the Swiss Solvency Test, 2nd edition. Zürich. http://www.scor.com/en/sgrc/scor-publications/scor-papers.html.