Article

THE LOCALLY LINEAR CAIRNS–BLAKE–DOWD MODEL: A NOTE ON DELTA–NUGA HEDGING OF LONGEVITY RISK

Abstract

Although longevity risk arises from both the variation surrounding the trend in future mortality and the uncertainty about the trend itself, the latter is often left unmodeled. In this paper, we address this problem by introducing the locally linear CBD model, in which the drifts that govern the expected mortality trend are allowed to follow a stochastic process. This specification results in median forecasts that are more consistent with the recent trends and more robust relative to changes in the data sample period. It also yields wider prediction intervals that may better reflect the possibilities of future trend changes. The treatment of the drifts as a stochastic process naturally calls for nuga hedging, a method proposed by Cairns (2013) to hedge the risk associated with changes in drifts. To improve the existing nuga-hedging method, we propose a new hedging method which demands less stringent assumptions. The proposed method allows hedgers to extract more hedge effectiveness out of a hedging instrument, and is therefore useful when there are only a few traded longevity securities in the market.
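The widening effect described in the abstract can be illustrated with a small simulation: a random walk with constant drift (RWD) for the period index versus a "locally linear" variant whose drift itself follows a random walk. All parameter values below are assumptions chosen for illustration, not estimates from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

# Assumed illustrative parameters for a CBD-style period index kappa_t.
n_paths, horizon = 5000, 30
drift0, sigma_kappa, sigma_drift = -0.02, 0.01, 0.002

# Random walk with constant drift (RWD).
eps = rng.normal(0.0, sigma_kappa, (n_paths, horizon))
kappa_rwd = np.cumsum(drift0 + eps, axis=1)

# Locally linear: the drift itself follows a random walk, d_t = d_{t-1} + eta_t.
eta = rng.normal(0.0, sigma_drift, (n_paths, horizon))
drift_t = drift0 + np.cumsum(eta, axis=1)
kappa_ll = np.cumsum(drift_t + eps, axis=1)

def pi_width(paths):
    """Width of the 95% prediction interval at the final horizon."""
    lo, hi = np.percentile(paths[:, -1], [2.5, 97.5])
    return hi - lo

width_rwd = pi_width(kappa_rwd)
width_ll = pi_width(kappa_ll)
```

Because drift uncertainty accumulates on top of the period noise, the stochastic-drift interval grows roughly with the horizon to the power 3/2 rather than 1/2, so the locally linear fan chart is markedly wider at long horizons.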

... Therefore, some authors have developed stochastic mortality models that explicitly capture the risk of random future mortality trend changes. In their locally linear CBD model, Liu and Li (2016) incorporate 'drift risk' into the widely used RWD setting by assuming the drifts themselves to follow another random walk. Hence, the prevailing mortality trends, which are represented by the drift terms, gradually vary from year to year according to the realizations of this underlying random walk. ...
... Hence, Börger and Schupp (2018) propose a structurally similar model by drawing the absolute trend change magnitudes from a lognormal distribution while assuming the trend change signs to be positive or negative with equal probability. Similarly to Liu and Li (2016), both Sweeting (2011) and Börger and Schupp (2018) demonstrate that their approaches yield wider prediction intervals in the long run compared to the RWD. However, as shown by Börger and Schupp (2018), using normally distributed trend change magnitudes for a trend-stationary model with changing slopes typically results in wider prediction intervals compared to a lognormal distribution for reasons discussed above. ...
... As argued by Börger and Schupp (2018), the above decomposition of the trend change intensities offers some desirable properties. Analogously to the models of Liu and Li (2016) and Sweeting (2011), a symmetric distribution for the trend change intensities assures that the prevailing AMT (even though unobservable) always represents the best-estimate trend for any future point in time. However, using a heavy-tailed lognormal distribution (instead of a normal distribution with significant mass around zero) produces rather significant trend changes, which is arguably in line with the nature of material trend changes in the past. ...
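The symmetric-sign construction described in the excerpts can be sketched directly: trend changes arrive at random times, each with a lognormally distributed magnitude and an equiprobable sign, so the prevailing slope remains the best estimate of all future slopes. The event probability and distribution parameters below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed parameters: annual trend-change probability and lognormal magnitude.
years, n_paths = 50, 2000
p_change = 0.1
mu_log, sigma_log = -4.0, 0.5
slope0 = -0.02

changes = rng.random((n_paths, years)) < p_change          # change indicators
signs = rng.choice([-1.0, 1.0], size=(n_paths, years))     # equiprobable signs
magnitudes = rng.lognormal(mu_log, sigma_log, (n_paths, years))
slope = slope0 + np.cumsum(changes * signs * magnitudes, axis=1)
trend = np.cumsum(slope, axis=1)   # cumulated mortality trend paths
```

Because the sign is symmetric, the expected terminal slope equals the initial slope, while the heavy-tailed lognormal magnitudes produce occasional material trend changes rather than many negligible ones.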
Full-text available
Article
Even though the trend in mortality improvements has experienced several permanent changes in the past, the uncertainty regarding future mortality trends is often left unmodeled when pricing longevity-linked securities. In this paper, we present a stochastic modeling framework for the valuation of longevity-linked securities which explicitly considers the risk of random future changes in the long-term mortality trend. We construct a set of meaningful probability distortions which imply equivalent risk-adjusted pricing measures under which the basic model structure is preserved. Inspired by risk-based capital requirements for (re)insurers, we also establish a cost-of-capital pricing approach which then serves as the appropriate reference framework for finding a reasonable range for the market price of longevity risk. In a numerical application, we demonstrate that our model produces plausible risk loadings and show that a greater proportion of the risk loading is allocated to longer maturities when the risk of random future mortality trend changes is adequately modeled.
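The cost-of-capital pricing idea mentioned above reduces to simple arithmetic: the risk loading is the cost-of-capital rate times the sum of discounted projected capital amounts. All figures in this sketch are hypothetical.

```python
# Hypothetical inputs: 6% cost-of-capital rate, flat 2% discounting,
# a best-estimate liability of 100 and a five-year projected capital run-off.
coc_rate = 0.06
discount = 0.02
best_estimate = 100.0
capital = [10.0, 9.0, 7.5, 5.0, 2.0]   # projected required capital per year

loading = coc_rate * sum(
    scr / (1 + discount) ** (t + 1) for t, scr in enumerate(capital)
)
price = best_estimate + loading   # risk-adjusted price
```

When trend-change risk is modelled, the projected capital run-off declines more slowly, which is how a greater share of the loading ends up allocated to longer maturities.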
... The main contribution of this paper is to demonstrate how mortality models with cohort effects can be formulated, estimated and forecasted under a Bayesian state-space framework. Other works using the state-space approach for mortality modelling include Pedroza (2006), De Jong and Tickle (2006), Kogure et al. (2009) and Liu and Li (2016b). In our view, the state-space approach has three major advantages. ...
... Pricing of longevity instruments based on the maximum entropy principle using a state-space mortality model is studied in Kogure and Kurachi (2010). The flexibility of the state-space approach is a key element in dealing with diverse issues concerning mortality modelling as well as pricing and risk analysis involving longevity risk; see Liu and Li (2016a) and Liu and Li (2016b) for applications of state-space mortality models to longevity hedging. ...
... Given the fact that cohort effects are known to be present in certain countries, the possibility of exploiting cohort features under a state-space framework will undoubtedly enhance an actuary's ability to analyse mortality data. The importance of incorporating cohort effects in state-space setting is also emphasized in Liu and Li (2016b), where the authors "acknowledge that cohort effects are significant in certain populations, and that it is not trivial to incorporate cohort effects in a state-space representation in which the vector of hidden states evolve over time rather than year of birth" (p.66). Therefore, in this paper we focus on addressing this missing piece of model formulation. ...
Full-text available
Article
Cohort effects are important factors in determining the evolution of human mortality for certain countries. Extensions of dynamic mortality models with cohort features have been proposed in the literature to account for these factors under the generalised linear modelling framework. In this paper we approach the problem of mortality modelling with cohort factors incorporated through a novel formulation under a state-space methodology. In the process we demonstrate that cohort factors can be formulated naturally under the state-space framework, despite the fact that cohort factors are indexed according to year-of-birth rather than year. Bayesian inference for cohort models in a state-space formulation is then developed based on an efficient Markov chain Monte Carlo sampler, allowing for the quantification of parameter uncertainty in cohort models and resulting mortality forecasts that are used for life expectancy and life table constructions. The effectiveness of our approach is examined through comprehensive empirical studies involving male and female populations from various countries. Our results show that cohort patterns are present for certain countries that we studied and that the inclusion of cohort factors is crucial in capturing these phenomena, thus highlighting the benefits of introducing cohort models in the state-space framework. Forecasting of cohort models is also discussed in light of the projection of cohort factors.
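The state-space machinery underlying models like this one rests on recursive filtering. As a minimal, self-contained illustration (not the paper's cohort formulation or its MCMC sampler), the following Kalman filter tracks a single latent random-walk state with assumed, known variances:

```python
import numpy as np

rng = np.random.default_rng(0)

# Local-level model: latent random-walk state observed with noise.
T, q, r = 100, 0.01, 0.04                        # horizon, state var, obs var
x = np.cumsum(rng.normal(0, np.sqrt(q), T))      # latent state path
y = x + rng.normal(0, np.sqrt(r), T)             # noisy observations

m, p = 0.0, 1.0                                  # prior mean and variance
filtered = np.empty(T)
for t in range(T):
    p += q                                       # predict step
    k = p / (p + r)                              # Kalman gain
    m += k * (y[t] - m)                          # update step
    p *= (1 - k)
    filtered[t] = m

mse_filter = np.mean((filtered - x) ** 2)        # filter vs. latent state
mse_obs = np.mean((y - x) ** 2)                  # raw data vs. latent state
```

The filtered estimate should track the latent state more closely than the raw observations do; Bayesian state-space mortality models extend exactly this recursion to vector-valued period and cohort states.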
... In the second case study, we perform exploratory data analysis to categorise the eurozone countries so that the mortality forecast will not be distorted by other populations with marked dissimilarities. Although the model proposed by Liu and Li (2017) permits time-varying drifts to model structural changes in time indexes in the CBD model, the difference between our model and theirs is twofold. Firstly, we work under the ACF model setting for multiple populations. ...
Full-text available
Article
Multi-population mortality forecasting has become an increasingly important area in actuarial science and demography, as a means to avoid long-run divergence in mortality projections. This paper aims to establish a unified state-space Bayesian framework to model, estimate, and forecast mortality rates in a multi-population context. In this regard, we reformulate the augmented common factor model to account for structural/trend changes in the mortality indexes. We conduct a Bayesian analysis to make inferences and generate forecasts so that process and parameter uncertainties can be considered simultaneously and appropriately. We illustrate the efficiency of our methodology through two case studies. Both point and probabilistic forecast evaluations are considered in the empirical analysis. The derived results support the fact that the incorporation of stochastic drifts mitigates the impact of the structural changes in the time indexes on mortality projections.
... It is then important to consider the biological reasonableness of such a behavior; see [6]. As noted by [36], this may be interpreted in terms of a limiting behavior of life expectancies. In other terms, the future life expectancy should not exceed a certain range. ...
Full-text available
Article
This article proposes an optimal and robust methodology for model selection. The model of interest is a parsimonious alternative framework for modeling the stochastic dynamics of mortality improvement rates introduced recently in the literature. The approach models mortality improvements using a random field specification with a given causal structure instead of the commonly used factor-based decomposition framework. It captures some well-documented stylized facts of mortality behavior, including: dependencies among adjacent cohorts, the cohort effects, cross-generation correlations, and the conditional heteroskedasticity of mortality. Such a class of models is a generalization of the now widely used AR-ARCH models for univariate processes. As the framework is general, a simple variant called the three-level memory model was investigated and illustrated. However, it is not clear which parameterization is best for specific mortality uses. In this paper, we investigate the optimal model choice and parameter selection among potential and candidate models. More formally, we propose a methodology well-suited to such a random field, able to select the best model in the sense that the model is not only correct but also the most economical among all the correct models. Formally, we show that a criterion based on a penalization of the log-likelihood, e.g., the Bayesian Information Criterion, is consistent. Finally, we investigate the methodology using Monte Carlo experiments as well as real-world datasets.
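The penalized-likelihood criterion can be illustrated on a simpler univariate case: simulate an AR(2) series, fit candidate AR(p) models by least squares, and select the order minimizing the BIC. This is a toy stand-in for the random-field setting, with assumed coefficients.

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulate an AR(2) process with assumed coefficients.
n = 1000
phi = np.array([0.5, -0.3])
x = np.zeros(n)
for t in range(2, n):
    x[t] = phi[0] * x[t - 1] + phi[1] * x[t - 2] + rng.normal()

def bic_ar(x, p):
    """Gaussian BIC of an AR(p) fit by ordinary least squares."""
    y = x[p:]
    X = np.column_stack([x[p - k: len(x) - k] for k in range(1, p + 1)])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    sigma2 = resid.var()
    n_eff = len(y)
    loglik = -0.5 * n_eff * (np.log(2 * np.pi * sigma2) + 1)
    return -2 * loglik + p * np.log(n_eff)        # penalize each AR coefficient

bics = {p: bic_ar(x, p) for p in range(1, 6)}
best_p = min(bics, key=bics.get)
```

Consistency of the BIC means that, as the sample grows, the selected order concentrates on the true one; with 1,000 observations the strongly significant second lag makes underfitting essentially impossible here.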
... Liu and Li (2016a) [91] modify the state-space Cairns-Blake-Dowd (CBD) [21] model to include multi-population all-cause mortality. Liu and Li (2016b) [92] propose a two-layer latent variable structure for the state-space CBD model to take into account potential trend changes in the mortality dynamics. We follow this approach for its great adaptability, especially in comparison with the CBD model, which, with its linear age sensitivity to the trend, is better adapted to capturing all-cause mortality. ...
Thesis
This thesis deals with modelling mortality by cause of death. We approach the subject from three angles: the extrapolation of cause-specific mortality to advanced ages, the grouping of causes of death, and the forecasting of cause-specific mortality. The first part deals with the extrapolation of cause-specific mortality to advanced ages, an important topic because of the uncertainty of data at those ages. The aim is to extrapolate cause-specific forces of mortality to advanced ages while remaining consistent with the usual all-cause extrapolation methods. We propose a top-down method adapted to cause-specific mortality distributed as a Poisson variable. The model likelihood is split into two parts, representing all-cause mortality and the contributions of the causes of death, respectively. The extrapolation of mortality at high ages is obtained in two steps. The first step extrapolates all-cause mortality to advanced ages using standard life-table closure techniques. The second step extrapolates the contributions of the causes of death to overall mortality using a multinomial P-splines approach. By recombining the extrapolated forces of mortality at high ages with the extrapolated cause contributions, we obtain the cause-specific mortality extrapolation. In the second part, we propose an algorithm for splitting a mortality database into several groups such that the fit obtained by a Lee-Carter model on each group is optimal. The setting is as follows: we have time series of forces of mortality computed for a set of characteristics such as age, cause of death, or country.
To obtain a better fit and a more precise estimate of the dynamics of the whole set of series, it can be useful to split the database into several groups, on each of which a Lee-Carter model is calibrated. We propose an algorithm derived from K-centroids and adapted to the LC model, which we call K-LC. Starting from an apparently complex algorithm, we show that the method is equivalent to a K-centroids algorithm with a specific distance function. Two applications illustrate the algorithm: the first deals with splitting by sex in mortality forecasting, and the second addresses the grouping of mortality series by cause of death. The third part deals with the projection of crude cause-specific mortality rates. We propose a model that addresses three fundamental problems arising when projecting mortality by cause: trend changes, the temporal dependence between causes of death, and the presence of bias in mortality forecasts. We introduce a Poisson state-space model that circumvents these problems through a particular dynamic. This dynamic allows us to capture the temporal dependence structure between variations in the causes of death and to include the potential for trend changes. The model is calibrated with the Expectation-Maximization algorithm. We adapt this method to the Poisson mortality model and show that, for some parameters, the estimate can be obtained in closed form. An application to the US female population between 1979 and 2012 is presented. We detail the dependence structures obtained and measure their impact on the dependence between causes of death using simulations.
We then make predictions for the years 2012 to 2017, which we compare with those of a standard LC model applied to each cause separately.
... It is then important to consider the biological reasonableness of such a behavior; see Cairns et al. (2006). As noted by Liu and Li (2017), this may be interpreted in terms of a limiting behavior of life expectancies. In other terms, the future life expectancy should not exceed a certain range. ...
Full-text available
Preprint
This article proposes an optimal and robust methodology for model selection. The model of interest is a parsimonious alternative framework for modeling the stochastic dynamics of mortality improvement rates introduced by Doukhan et al. (2017). The approach models mortality improvements using a random field specification with a given causal structure instead of the commonly used factor-based decomposition framework. It captures some well-documented stylized facts of mortality behavior: dependencies among adjacent cohorts, the cohort effects, cross-generation correlations, and the conditional heteroskedasticity of mortality. Such a class of models is a generalization of the now widely used AR-ARCH models for univariate processes. The framework being general, Doukhan et al. (2017) investigate and illustrate a simple variant called the three-level memory model. However, it is not clear which parametrization is best for specific mortality uses. In this paper, we investigate the optimal model choice and parameter selection among potential and candidate models. More formally, we propose a methodology well-suited to such a random field, able to select the best model in the sense that the model is not only correct but also the most economical among all the correct models. Formally, we show that a criterion based on a penalization of the log-likelihood, e.g., the Bayesian Information Criterion, is consistent. Finally, we investigate the methodology using Monte Carlo experiments as well as real-world datasets.
... • We could consider stochastic models that offer an alternative to the random walk with constant drift, e.g., we might adopt the approach of Liu and Li (2017). ...
Full-text available
Article
We introduce a new modelling framework to explain socio-economic differences in mortality in terms of an affluence index that combines information on individual wealth and income. The model is illustrated using data on older Danish males over the period 1985–2012 reported in the Statistics Denmark national register database. The model fits the historical mortality data well, captures their key features, generates smoothed death rates that allow us to work with a larger number of sub-groups than has previously been considered feasible, and has plausible projection properties.
... The CBD model has the advantage that no identification constraint is required, unlike Lee-Carter-type models, and the model is designed to capture mature-age mortality dynamics, which is particularly suitable for our purpose. A state-space approach to the CBD model in a frequentist setting is considered in Liu and Li (2016a) and Liu and Li (2016b). In this paper we develop a Bayesian approach to the CBD model based on Markov chain Monte Carlo (MCMC) methods to fully capture parameter uncertainty. ...
Article
This Special Issue of Insurance: Mathematics and Economics contains 16 contributions to the academic literature all dealing with longevity risk and capital markets. Draft versions of the papers were presented at Longevity 15: The Fifteenth International Longevity Risk and Capital Markets Solutions Conference that was held in Washington DC on 12-13 September 2019. It was hosted by the Pensions Institute at City, University of London.
Full-text available
Article
This Special Issue of the Insurance: Mathematics and Economics contains 16 contributions to the academic literature all dealing with longevity risk and capital markets. Draft versions of the papers were presented at Longevity 15: The Fifteenth International Longevity Risk and Capital Markets Solutions Conference that was held in Washington DC on 12-13 September 2019. It was hosted by the Pensions Institute at City, University of London. Longevity risk and related capital market solutions have grown increasingly important in recent years, both in academic research and in the markets we refer to as the Life Market, i.e., the capital market that trades longevity-linked assets and liabilities. Mortality improvements around the world are putting more and more pressure on governments, pension funds, life insurance companies, as well as individuals, to deal with the longevity risk they face. At the same time, capital markets can, in principle, provide vehicles to hedge longevity risk effectively and transfer the risk from those unwilling or unable to manage it to those willing to invest in this risk in exchange for appropriate risk-adjusted returns or to those who have a counterpoising risk that longevity risk can hedge, e.g., life offices and reinsurers with mortality risk on their books. Many new investment products have been created both by the insurance/reinsurance industry and by the capital markets. Mortality catastrophe bonds are an early example of a successful insurance-linked security. Some new innovative capital market solutions for transferring longevity risk include longevity (or survivor) bonds, longevity (or survivor) swaps, mortality (or q-) forward contracts and reinsurance sidecars. The aim of the International Longevity Risk and Capital Markets Solutions Conferences is to bring together academics and practitioners from all over the world to discuss and analyze these exciting new developments.
Article
Mortality improvements that have recently become apparent in most developing countries have raised questions about forecast divergence between populations. To ensure more coherent forecasts, previous researchers have proposed multi-population mortality models with independent estimation procedures. However, as with single-population mortality models, such independent approaches may lead to inaccurate prediction intervals. As a result of these inaccurate mortality forecasts, the life expectancies and life annuities that the mortality model aims to generate are underestimated. In this study, we propose a new extension of the multi-population mortality model with a joint estimation approach, obtained by recasting the model into a state-space framework. A combination of the augmented Li-Lee and O’Hare-Li methods is employed before we transform the proposed model into a state-space formulation. In addition, we incorporate a quadratic age effect parameter into the proposed model to better capture mortality at younger ages. We apply the method to gender- and age-specific data for Malaysia. The results show that the proposed framework contributes significantly to multi-population mortality modelling through the incorporation of joint estimation and the quadratic age effect parameter. Consequently, the proposed model improves mortality forecast accuracy.
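A quadratic age effect of the kind mentioned above can be sketched in a CBD-style cross-sectional fit: for a given year, logit death probabilities are regressed on a constant, a centred age term, and a centred squared-age term (the M7-style centring). The data below are synthetic, with assumed coefficients.

```python
import numpy as np

rng = np.random.default_rng(3)

# logit q_x ~ k1 + k2 (x - xbar) + k3 ((x - xbar)^2 - s2), one calendar year.
ages = np.arange(60, 90)
xbar = ages.mean()
s2 = ((ages - xbar) ** 2).mean()                 # centring for the quadratic
true_k = np.array([-3.0, 0.1, 0.001])            # assumed period effects
X = np.column_stack([
    np.ones_like(ages, dtype=float),
    ages - xbar,
    (ages - xbar) ** 2 - s2,
])
logit_q = X @ true_k + rng.normal(0, 0.02, len(ages))   # synthetic data
k_hat, *_ = np.linalg.lstsq(X, logit_q, rcond=None)     # recover the effects
```

Repeating this fit year by year yields three period-effect time series; the quadratic term picks up curvature in the age profile that the basic two-factor CBD model cannot capture.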
Article
An article published in the British Medical Journal in 2018 reveals that a number of developed countries have experienced a decline in life expectancy in recent years. Within the classical framework of stochastic mortality modeling, the observed decline in life expectancy may be attributed to noise around the fitted log-linear trends in age-specific death rates. However, the patterns of the mortality heat maps for these countries suggest that it is likely a result of a fading of waves of high mortality improvement, which previously contributed to a linear rise in life expectancy in the developed world. In this paper, we introduce an improved version of the heat wave mortality model, which has the potential to capture the cessation of the waves of high mortality improvement. The proposed model is then used to examine the impact of declines in life expectancy on index-based longevity hedges. It is found that if life expectancy declines, a simple delta hedge still performs reasonably well in the sense that the over-hedging problem is only modest.
Article
By hedging longevity exposures, annuity providers can reduce both the uncertainty in future cash flows and capital charges in a cost efficient manner. We argue that a separate analysis of these two aspects cannot provide a full picture of the implications of longevity hedging, in particular when using index-based instruments. Hence, we propose a stochastic modeling framework for a joint analysis of the risk-reducing effect and the economic impact of longevity hedges in terms of hedge effectiveness and capital efficiency, respectively. In an economic capital model under Solvency II, a wide selection of customized and index-based instruments is analyzed. We show that different hedging objectives require different instruments on different index populations and discuss the accompanying trade-off between hedge effectiveness and capital efficiency. While customized hedges naturally outperform their index-based counterparts in terms of hedge effectiveness, we show that cost efficient index-based designs may be more capital efficient.
Article
Mortality volatility is crucially important to many aspects of index-based longevity hedging, including instrument pricing, hedge calibration and hedge performance evaluation. This paper sets out to develop a deeper understanding of mortality volatility and its implications on index-based longevity hedging. First, we study the potential asymmetry in mortality volatility by considering a wide range of generalised autoregressive conditional heteroskedasticity (GARCH)-type models that permit the volatility of mortality improvement to respond differently to positive and negative mortality shocks. We then investigate how the asymmetry of mortality volatility may impact index-based longevity hedging solutions by developing an extended longevity Greeks framework, which encompasses longevity Greeks for a wider range of GARCH-type models, an improved version of longevity vega, and a new longevity Greek known as “dynamic Delta”. Our theoretical work is complemented by two real-data illustrations, the results of which suggest that the effectiveness of an index-based longevity hedge could be significantly impaired if the asymmetry in mortality volatility is not taken into account when the hedge is calibrated.
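Asymmetric volatility of the kind studied above can be reproduced with a GJR-GARCH(1,1) simulation, in which a leverage term makes conditional variance respond more strongly to negative shocks than to positive ones. Parameters are assumed for illustration, not calibrated to mortality data.

```python
import numpy as np

rng = np.random.default_rng(5)

# GJR-GARCH(1,1): h_t = omega + (alpha + gamma*1{e<0}) e_{t-1}^2 + beta h_{t-1}
T = 10_000
omega, alpha, gamma, beta = 0.05, 0.05, 0.10, 0.80
h = np.empty(T)                  # conditional variances
e = np.empty(T)                  # shocks
h[0] = omega / (1 - alpha - gamma / 2 - beta)   # unconditional variance
e[0] = np.sqrt(h[0]) * rng.normal()
for t in range(1, T):
    neg = float(e[t - 1] < 0)    # leverage indicator
    h[t] = omega + (alpha + gamma * neg) * e[t - 1] ** 2 + beta * h[t - 1]
    e[t] = np.sqrt(h[t]) * rng.normal()

# On average, next-period variance is higher after negative shocks.
h_after_neg = h[1:][e[:-1] < 0].mean()
h_after_pos = h[1:][e[:-1] >= 0].mean()
```

A hedge calibrated under a symmetric GARCH would miss exactly this sign-dependence, which is the mechanism behind the impaired hedge effectiveness the paper documents.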
Article
Recently, the actuarial professions in various countries have adopted an innovative two-dimensional approach to projecting future mortality. In contrast to the conventional approach, the two-dimensional approach permits mortality improvement rates to vary with not only age but also time. Despite being an important breakthrough, the currently used two-dimensional mortality improvement scales are subject to several limitations, most notably a heavy reliance on subjective judgments and a lack of measures of uncertainty. In view of these limitations, in this paper we introduce a new model known as the heat wave model, in which short- and long-term mortality improvements are treated respectively as ‘heat waves’ that taper off over time and ‘background improvements’ that always exist. Using the heat wave model, one can derive two-dimensional mortality improvement scales that entail minimal subjective judgment and include measures of the uncertainty.
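The "heat wave" idea, a transient wave of improvement riding on a persistent background, can be sketched as a two-dimensional improvement surface. The Gaussian age profiles and exponential taper below are illustrative assumptions, not the paper's specification.

```python
import numpy as np

# improvement(x, t) = background(x) + wave(x) * decay(t):
# the wave tapers off over time while the background persists.
ages = np.arange(50, 101)
years = np.arange(0, 31)
background = 0.01 * np.exp(-0.5 * ((ages - 65) / 20.0) ** 2)   # long-term
wave_age = 0.02 * np.exp(-0.5 * ((ages - 75) / 8.0) ** 2)      # "heat wave"
decay = np.exp(-years / 10.0)                                  # taper in time
scale = background[:, None] + wave_age[:, None] * decay[None, :]
```

The result is a two-dimensional scale: improvement rates vary by age and by calendar year, converging to the background rates as the wave fades, which is what a one-dimensional (age-only) scale cannot represent.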
Article
Many of the existing index-based longevity hedging strategies focus on the reduction in variance. However, solvency capital requirements are typically based on the τ-year-ahead Value-at-Risk, with τ = 1 under Solvency II. Optimizing a longevity hedge using variance minimization is particularly inadequate when the cost of hedging is nonzero and mortality improvements are driven by a skewed and/or heavy-tailed distribution. In this article, we contribute a method to formulate a value hedge that aims to minimize the Value-at-Risk of the hedged position over a horizon of τ years. The proposed method works with all stochastic mortality models that can be formulated in a state-space form, even when a non-normal distributional assumption is made. We further develop a technique to expedite the evaluation of a value longevity hedge. By utilizing the generic assumption that the innovations in the stochastic processes for the period and cohort effects are not serially correlated, the proposed technique spares us from the need for nested simulations that are generally required when evaluating a value hedge.
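The value-hedge objective can be illustrated by simulation: choose the notional of a single instrument so as to minimize the 99.5% one-year Value-at-Risk of the hedged position under a skewed risk driver, here by simple grid search. All distributions and exposures below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(11)

# Skewed common risk driver (centred gamma), a liability exposed to it,
# and an imperfectly correlated hedging instrument.
n = 50_000
shock = rng.gamma(shape=2.0, scale=1.0, size=n) - 2.0
liability = 100.0 + 5.0 * shock + rng.normal(0, 1.0, n)   # loss amount
instrument = 1.0 * shock + rng.normal(0, 0.3, n)          # hedge payoff

def var_995(losses):
    """99.5% Value-at-Risk of a vector of simulated losses."""
    return np.quantile(losses, 0.995)

notionals = np.linspace(0.0, 10.0, 101)
vars_ = [var_995(liability - w * instrument) for w in notionals]
best_w = notionals[int(np.argmin(vars_))]
```

Under a skewed driver the VaR-minimizing notional need not coincide with the variance-minimizing one, which is exactly why the paper targets the quantile rather than the variance.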
Article
Proposed by Chan, Li, and Li, parametric mortality indexes (i.e., indexes created using the time-varying parameters in a suitable stochastic mortality model) can be used to develop tradable mortality-linked derivatives such as K-forwards. Compared to existing indexes such as the Life and Longevity Markets Association’s LifeMetrics, parametric mortality indexes are richer in information content, allowing the market to better concentrate liquidity. In this article, we further study this concept in several aspects. First, we consider options written on parametric mortality indexes. Such options enable hedgers to create out-of-the-money longevity hedges, which, compared to at-the-money hedges created with q-/K-forwards, may better meet hedgers’ needs for protection against downside risk. Second, using the properties of the time series processes for the parametric mortality indexes, we derive analytical risk-neutral pricing formulas for K-forwards and options. In addition to convenience, the analytical pricing formulas remove the need for computationally intensive nested simulations that are entailed in, for example, the calculation of the hedging instruments’ values when a dynamic hedge is adjusted. Finally, we construct static and dynamic Greek hedging strategies using K-forwards and options, and demonstrate empirically the conditions under which an out-of-the-money hedge is more economically justifiable than an at-the-money one.
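To see how analytical pricing becomes possible, note that when the index at maturity is normally distributed under the pricing measure, a call on it admits a Bachelier-type closed form, E[max(I − K, 0)] = (f − K)Φ(d) + sφ(d) with d = (f − K)/s. The sketch below checks this identity against Monte Carlo; the inputs are hypothetical and this is not the paper's exact formula.

```python
import numpy as np
from math import erf, exp, pi, sqrt

rng = np.random.default_rng(9)

def norm_cdf(z):
    return 0.5 * (1 + erf(z / sqrt(2)))

def norm_pdf(z):
    return exp(-0.5 * z * z) / sqrt(2 * pi)

def call_on_normal_index(f, K, s):
    """Bachelier-style price of a call on a N(f, s^2) index level."""
    d = (f - K) / s
    return (f - K) * norm_cdf(d) + s * norm_pdf(d)

f, K, s = 0.10, 0.11, 0.02   # hypothetical forward level, strike, std dev
analytic = call_on_normal_index(f, K, s)
mc = np.maximum(rng.normal(f, s, 1_000_000) - K, 0.0).mean()
```

Having such a closed form is what removes the inner simulation loop when a dynamic hedge is revalued at each rebalancing date.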
Article
Cohort effects are important factors in determining the evolution of human mortality for certain countries. Extensions of dynamic mortality models with cohort features have been proposed in the literature to account for these factors under the generalised linear modelling framework. In this paper we approach the problem of mortality modelling with cohort factors incorporated through a novel formulation under a state-space methodology. In the process we demonstrate that cohort factors can be formulated naturally under the state-space framework, despite the fact that cohort factors are indexed according to year-of-birth rather than year. Bayesian inference for cohort models in a state-space formulation is then developed based on an efficient Markov chain Monte Carlo sampler, allowing for the quantification of parameter uncertainty in cohort models and resulting mortality forecasts that are used for life expectancy and life table constructions. The effectiveness of our approach is examined through comprehensive empirical studies involving male and female populations from various countries. Our results show that cohort patterns are present for certain countries that we studied and the inclusion of cohort factors is crucial in capturing these phenomena, thus highlighting the benefits of introducing cohort models in the state-space framework. Forecasting of cohort models is also discussed in light of the projection of cohort factors.
Article
This paper explores and develops alternative statistical representations and estimation approaches for dynamic mortality models. The framework we adopt is to reinterpret popular mortality models such as the Lee-Carter class of models in a general state-space modelling methodology, which allows modelling, estimation and forecasting of mortality under a unified framework. Furthermore, we propose an alternative class of model identification constraints which is more suited to statistical inference in filtering and parameter estimation settings based on maximization of the marginalized likelihood or in Bayesian inference. We then develop a novel class of Bayesian state-space models which incorporate a priori beliefs about the mortality model characteristics as well as more flexible and appropriate assumptions relating to the heteroscedasticity that is present in observed mortality data. We show that multiple period and cohort effects can be cast under a state-space structure. To study long term mortality dynamics, we introduce stochastic volatility to the period effect. The estimation of the resulting stochastic volatility model of mortality is performed using a recent class of Monte Carlo procedure specifically designed for state and parameter estimation in Bayesian state-space models, known as the class of particle Markov chain Monte Carlo methods. We illustrate the framework we have developed using Danish male mortality data, and show that incorporating heteroscedasticity and stochastic volatility markedly improves model fit despite an increase in model complexity. Forecasting properties of the enhanced models are examined with long term and short term calibration periods on the reconstruction of life tables.
Full-text available
Chapter
Longevity risk—the risk of unanticipated increases in life expectancy—has only recently been recognized as a significant global risk that has materially raised the costs of providing pensions and annuities. We first discuss historical trends in the evolution of life expectancy and then analyze the hedging solutions that have been developed for managing longevity risk. One set of solutions has come directly from the insurance industry: pension buyouts, buy-ins, and bulk annuity transfers. Another complementary set of solutions has come from the capital markets: longevity swaps and q-forwards. This has led to hybrid solutions such as synthetic buy-ins. We then review the evolution of the market for longevity risk transfer, which began in the UK in 2006 and is arguably the most important sector of the broader “life market.” An important theme in the development of the longevity market has been the innovation originating from the combined involvement of insurance, banking, and private equity participants.
Full-text available
Article
We compare quantitatively eight stochastic models explaining improvements in mortality rates in England and Wales and in the United States. On the basis of the Bayes Information Criterion (BIC), we find that, for higher ages, an extension of the Cairns-Blake-Dowd (CBD) model that incorporates a cohort effect fits the England and Wales males data best, while for U.S. males data, the Renshaw and Haberman (RH) extension to the Lee and Carter model that also allows for a cohort effect provides the best fit. However, we identify problems with the robustness of parameter estimates under the RH model, calling into question its suitability for forecasting. A different extension to the CBD model that allows not only for a cohort effect, but also for a quadratic age effect, while ranking below the other models in terms of the BIC, exhibits parameter stability across different time periods for both datasets. This model also shows, for both datasets, that there have been approximately linear improvements over time in mortality rates at all ages, but that the improvements have been greater at lower ages than at higher ages, and that there are significant cohort effects.
Full-text available
Article
Basis risk is an important consideration when hedging longevity risk with instruments based on longevity indices, since the longevity experience of the hedged exposure may differ from that of the index. As a result, any decision to execute an index-based hedge requires a framework for (1) developing an informed understanding of the basis risk, (2) appropriately calibrating the hedging instrument, and (3) evaluating hedge effectiveness. We describe such a framework and apply it to a U.K. case study, which compares the population of assured lives from the Continuous Mortality Investigation with the England and Wales national population. The framework is founded on an analysis of historical experience data, together with an appreciation of the contextual relationship between the two related populations in social, economic, and demographic terms. Despite the different demographic profiles, the case study provides evidence of stable long-term relationships between the mortality experiences of the two populations. This suggests the important result that high levels of hedge effectiveness should be achievable with appropriately calibrated, static, index-based longevity hedges. Indeed, this is borne out in detailed calculations of hedge effectiveness for a hypothetical pension portfolio where the basis risk is based on the case study. A robustness check involving populations from the United States yields similar results.
Full-text available
Article
In recent years the issue of life expectancy has become of utmost importance to pension providers, insurance companies and the government bodies in the developed world. Significant and consistent improvements in mortality rates and hence life expectancy have led to unprecedented increases in the cost of providing for older ages. This has resulted in an explosion of stochastic mortality models forecasting trends in mortality data in order to anticipate future life expectancy and hence quantify the costs of providing for future ageing populations. Many stochastic models of mortality rates identify linear trends in mortality rates by time, age and cohort and forecast these trends into the future using standard statistical methods. The modeling approaches used fail to capture the effects of any structural change in the trend and thus potentially produce incorrect forecasts of future mortality rates. In this paper we look at a range of leading stochastic models of mortality and test for structural breaks in the trend time series. We find that in almost all cases structural breaks in the time series are present and when allowing for these the resulting forecasts are significantly improved.
Full-text available
Article
We demonstrate here several previously unrecognized or insufficiently appreciated properties of the Lee-Carter mortality forecasting approach, a method used widely in both the academic literature and practical applications. We show that this model is a special case of a considerably simpler, and less often biased, random walk with drift model, and prove that the age profile forecast from both approaches will always become less smooth and unrealistic after a point (when forecasting forward or backwards in time) and will eventually deviate from any given baseline. We use these and other properties we demonstrate to suggest when the model would be most applicable in practice. The method proposed in Lee and Carter (1992) has become the “leading statistical model of mortality [forecasting] in the demographic literature” (Deaton and Paxson, 2004). It was used as a benchmark for recent Census Bureau population forecasts (Hollmann, Mulder and Kallan, 2000), and two U.S. Social Security Technical Advisory Panels recommended its use, or the use of a method consistent with it (Lee and Miller, 2001). In the last decade, scholars have “rallied” (White, 2002) to this and closely related approaches, and policy analysts forecasting all-cause and cause-specific mortality in countries around the world have followed suit (Booth, Maindonald and Smith, 2002; Deaton and Paxson, 2004; Haberland and Bergmann, 1995; Lee, Carter and Tuljapurkar, 1995; Lee and Rofman, 1994; Lee and Skinner, 1999; Miller, 2001; NIPSSR, 2002; Perls et al., 2002; Preston, 1993; Tuljapurkar and Boe, 1998; Tuljapurkar, Li and Boe, 2000; Wilmoth, 1996, 1998a,b). Lee and Carter developed their approach specifically for U.S. mortality data, 1933-1987. However, the method is now being applied to all-cause and cause-specific mortality data from many countries and time periods, all well beyond the application for which it was designed. It thus appears to be a good time to reassess the approach.
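The reduction of the Lee-Carter time index to a random walk with drift can be illustrated with a short simulation; the drift and volatility values below are illustrative assumptions, not estimates from any dataset discussed in the abstract.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative parameters for the mortality index k_t (assumed, not fitted)
drift, sigma = -0.8, 1.5
k0, horizon, n_paths = -20.0, 25, 10_000

# Random walk with drift: k_t = k_{t-1} + drift + sigma * eps_t
shocks = rng.normal(0.0, sigma, size=(n_paths, horizon))
paths = k0 + np.cumsum(drift + shocks, axis=1)

# The median forecast is (approximately) the deterministic linear trend,
# while uncertainty grows with the square root of the horizon.
median_forecast = np.median(paths[:, -1])
expected = k0 + drift * horizon
```

Because the shocks are symmetric, the median at the end of the horizon tracks the linear trend k0 + drift · horizon, while the forecast standard deviation grows like sigma · √horizon — the property the abstract uses to argue that Lee-Carter fan charts eventually widen without bound.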
Full-text available
Article
The bootstrap is proposed as a method for assessing the precision of Gaussian maximum likelihood estimates of the parameters of linear state-space models. Our results also apply to autoregressive moving average models, since they are a special case of state-space models. It is shown that for a time-invariant, stable system, the bootstrap applied to the innovations yields asymptotically consistent standard errors. To investigate the performance of the bootstrap for finite sample lengths, simulation results are presented for a two-state model with 50 and 100 observations; two cases are investigated, one with real characteristic roots and one with complex characteristic roots. The bootstrap is then applied to two real data sets, one used in a test for efficient capital markets and one used to develop an autoregressive integrated moving average model for quarterly earnings data. We find the bootstrap to be of definite value over the conventional asymptotics.
Full-text available
Article
This report presents an Expectation-Maximization (EM) algorithm for estimation of the maximum-likelihood parameter values of constrained multivariate autoregressive Gaussian state-space (MARSS) models. The MARSS model can be written: x(t)=Bx(t-1)+u+w(t), y(t)=Zx(t)+a+v(t), where w(t) and v(t) are multivariate normal error-terms with variance-covariance matrices Q and R respectively. MARSS models are a class of dynamic linear model and vector autoregressive state-space model. Shumway and Stoffer presented an unconstrained EM algorithm for this class of models in 1982, and a number of researchers have presented EM algorithms for specific types of constrained MARSS models since then. In this report, I present a general EM algorithm for constrained MARSS models, where the constraints are on the elements within the parameter matrices (B,u,Q,Z,a,R). The constraints take the form vec(M)=f+Dm, where M is the parameter matrix, f is a column vector of fixed values, D is a matrix of multipliers, and m is the column vector of estimated values. This allows a wide variety of constrained parameter matrix forms. The presentation is for a time-varying MARSS model, where time-variation enters through the fixed (meaning not estimated) f(t) and D(t) matrices for each parameter. The algorithm allows missing values in y and partially deterministic systems where 0s appear on the diagonals of Q or R. Open source code for estimating MARSS models with this algorithm is provided in the MARSS R package on the Comprehensive R Archive Network (CRAN).
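The MARSS equations stated above can be simulated directly; the matrices below are small illustrative choices (two states, one observed series), not the constrained forms the report estimates with its EM algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# State equation:       x(t) = B x(t-1) + u + w(t),  w(t) ~ N(0, Q)
# Observation equation: y(t) = Z x(t)   + a + v(t),  v(t) ~ N(0, R)
B = np.array([[0.9, 0.0], [0.0, 0.7]])
u = np.array([0.1, -0.05])
Q = np.diag([0.2, 0.1])
Z = np.array([[1.0, 1.0]])   # one observed series built from two states
a = np.array([0.0])
R = np.array([[0.05]])

T = 200
x = np.zeros((T, 2))
y = np.zeros((T, 1))
for t in range(1, T):
    w = rng.multivariate_normal(np.zeros(2), Q)
    x[t] = B @ x[t - 1] + u + w
    v = rng.multivariate_normal(np.zeros(1), R)
    y[t] = Z @ x[t] + a + v
```

In the report's notation, a constraint such as vec(B) = f + Dm would pin some of the entries above to fixed values while leaving others free for estimation; the simulation here only illustrates the unconstrained data-generating process.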
Full-text available
Article
This paper builds on the two-factor mortality model known as the Cairns-Blake-Dowd (CBD) model, which is used to project future mortality. It is shown that these two factors do not follow a random walk, as proposed in the original model, but that each should instead be modelled as a random fluctuation around a trend, the trend changing periodically. The paper uses statistical techniques to determine the points at which there are statistically significant changes in each trend. The frequency of change in each trend is then used to project the frequency of future changes, and the sizes of historical changes are used to project the sizes of future changes. The results are then presented as fan charts, and used to estimate the range of possible future outcomes for period life expectancies. These projections show that modelling mortality rates in this way leaves much greater uncertainty over future life expectancy in the long term.
Full-text available
Article
In examining basis risk in index longevity hedges, it is important not to ignore the dependence between the population underlying the hedging instrument and the population being hedged. We consider four extensions to the Lee-Carter model that incorporate such dependence: Both populations are jointly driven by the same single time-varying index, the two populations are cointegrated, the populations depend on a common age factor, and there is an augmented common factor model in which a population-specific time-varying index is added to the common factor model with the property that it will tend toward a certain constant level over time. Using data from the female populations of Canada and the United States, we show the augmented common factor model is preferred in terms of both goodness-of-fit and ex post forecasting performance. This model is then used to quantify the basis risk in a longevity hedge of 65-year-old Canadian females structured using a portfolio of q-forward contracts predicated on U.S. female population mortality. The hedge effectiveness is estimated at 56% on the basis of longevity value-at-risk and 81.61% on the basis of longevity risk reduction.
Full-text available
Article
The mortality rate dynamics between two related but different-sized populations are modeled consistently using a new stochastic mortality model that we call the gravity model. The larger population is modeled independently, and the smaller population is modeled in terms of spreads (or deviations) relative to the evolution of the former, but the spreads in the period and cohort effects between the larger and smaller populations depend on gravity or spread reversion parameters for the two effects. The larger the two gravity parameters, the more strongly the smaller population's mortality rates move in line with those of the larger population in the long run. This is important where it is believed that the mortality rates between related populations should not diverge over time on grounds of biological reasonableness. The model is illustrated using an extension of the Age-Period-Cohort model and mortality rate data for English and Welsh males representing a large population and the Continuous Mortality Investigation assured male lives representing a smaller related population.
Full-text available
Article
This paper introduces a new framework for modelling the joint development over time of mortality rates in a pair of related populations with the primary aim of producing consistent mortality forecasts for the two populations. The primary aim is achieved by combining a number of recent and novel developments in stochastic mortality modelling, but these, additionally, provide us with a number of side benefits and insights for stochastic mortality modelling. By way of example, we propose an Age-Period-Cohort model which incorporates a mean-reverting stochastic spread that allows for different trends in mortality improvement rates in the short run, but parallel improvements in the long run. Second, we fit the model using a Bayesian framework that allows us to combine estimation of the unobservable state variables and the parameters of the stochastic processes driving them into a single procedure. Key benefits of this include dampening down of the impact of Poisson variation in death counts, full allowance for parameter uncertainty, and the flexibility to deal with missing data. The framework is designed for large populations coupled with a small sub-population and is applied to the England & Wales national and Continuous Mortality Investigation assured lives males populations. We compare and contrast results based on the two-population approach with single-population results.
Full-text available
Article
Mortality dynamics are characterized by changes in mortality regimes. This paper describes a Markov regime switching model which incorporates mortality state switches into mortality dynamics. Using the 1901-2005 US population mortality data, we illustrate that regime switching models perform better than well-known models in the literature. Furthermore, we extend the Lee-Carter (1992) model in such a way that the error term of the time-series common factor has distinct mortality regimes with different means and volatilities. Finally, we show how to price mortality securities with this model.
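A two-regime version of the common time factor described above can be sketched as follows; the transition probabilities and regime parameters are purely illustrative and are not the values fitted to the 1901-2005 US data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two mortality regimes for the common factor k_t: a "normal" regime and a
# volatile one (e.g. pandemic years), each with its own drift and volatility.
mu = np.array([-0.6, 0.5])      # regime-specific drifts (illustrative)
sigma = np.array([0.8, 3.0])    # regime-specific volatilities (illustrative)
P = np.array([[0.95, 0.05],     # Markov transition matrix; rows sum to 1
              [0.30, 0.70]])

T = 500
state = 0
k = np.zeros(T)
states = np.zeros(T, dtype=int)
for t in range(1, T):
    state = rng.choice(2, p=P[state])
    states[t] = state
    k[t] = k[t - 1] + mu[state] + sigma[state] * rng.normal()
```

With these numbers the chain spends roughly 6/7 of its time in the normal regime (the stationary distribution of P), so the index still trends downward on average while the volatile regime produces the occasional bursts of variability that a single-regime random walk cannot capture.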
Full-text available
Article
The Lee-Carter method of mortality forecasting assumes an invariant age component and most applications have adopted a linear time component. The use of the method with Australian data is compromised by significant departures from linearity in the time component and changes over time in the age component. We modify the method to adjust the time component to reproduce the age distribution of deaths, rather than total deaths, and to determine the optimal fitting period in order to address non-linearity in the time component. In the Australian case the modification has the added advantage that the assumption of invariance is better met. For Australian data, the modifications result in higher forecast life expectancy than the original Lee-Carter method and official projections, and a 50 per cent reduction in forecast error. The model is also expanded to take account of age-time interactions by incorporating additional terms, but these are not readily incorporated into forecasts.
Article
Most mortality models proposed in recent literature rely on the standard ARIMA framework (in particular: a random walk with drift) to project mortality rates. As a result the projections are highly sensitive to the calibration period. We therefore analyse the impact of allowing for multiple structural changes on a large collection of mortality models. We find that this may lead to more robust projections for the period effect but that there is only a limited effect on the ranking of the models based on backtesting criteria, since there is often not yet sufficient statistical evidence for structural changes. However, there are cases for which we do find improvements in estimates and we therefore conclude that one should not exclude beforehand the possibility that structural changes may have occurred.
Book
This new edition of the Handbook of Insurance reviews the last forty years of research developments in insurance and its related fields. A single reference source for professors, researchers, graduate students, regulators, consultants and practitioners, the book starts with the history and foundations of risk and insurance theory, followed by a review of prevention and precaution, asymmetric information, risk management, insurance pricing, new financial innovations, reinsurance, corporate governance, capital allocation, securitization, systemic risk, insurance regulation, the industrial organization of insurance markets and other insurance market applications. It ends with health insurance, longevity risk, long-term care insurance, life insurance financial products and social insurance. This second version of the Handbook contains 15 new chapters. Each of the 37 chapters has been written by leading authorities in risk and insurance research, all contributions have been peer reviewed, and each chapter can be read independently of the others. © Springer Science+Business Media New York 2013. All rights reserved.
Article
In this article, we study the feasibility of dynamic longevity hedging with standardized securities that are linked to broad-based mortality indexes. On the technical front, we generalize the dynamic "delta" hedging strategy developed by Cairns (2011) to incorporate the situation when population basis risk exists. On the economic front, we discuss the potential financial benefits of an index-based hedge over a bespoke risk transfer. By considering data from a large group of national populations, we find evidence supporting the diversifiability of population basis risk. We further propose a customized surplus swap, executed between a hedger and a reinsurer, to utilize the diversifiability. As standardized instruments demand less illiquidity premium, a combination of a dynamic index-based hedge and the proposed customized surplus swap may possibly be a more economical (and equally effective) alternative to a bespoke risk transfer.
Article
Forecasted mortality rates using mortality models proposed in the recent literature are sensitive to the sample size. In this paper we propose a method based on Bayesian learning to determine model-specific posterior distributions of the sample sizes. In particular, the sample size is included as an extra parameter in the parameter space of the mortality model, and its posterior distribution is obtained based on historical performance for different forecast horizons up to 20 years. Age- and gender-specific posterior distributions of sample sizes are computed. Our method is applicable to a large class of linear mortality models. As illustration, we focus on the first generation of the Lee-Carter model and the Cairns-Blake-Dowd model. Our method is applied to U.S. and Dutch data. For both countries we find highly concentrated posterior distributions of the sample size that are gender- and age-specific. In the out-of-sample forecast analysis, the Bayesian model outperforms the original mortality models with fixed sample sizes in the majority of cases.
Article
Two-population stochastic mortality models play a crucial role in the securitization of longevity risk. In particular, they allow us to quantify the population basis risk when longevity hedges are built from broad-based mortality indexes. In this paper, we propose and illustrate a systematic process for constructing a two-population mortality model for a pair of populations. The process encompasses four steps, namely (1) determining the conditions for biological reasonableness, (2) identifying an appropriate base model specification, (3) choosing a suitable time-series process and correlation structure for projecting period and/or cohort effects into the future, and (4) model evaluation.
Article
This study sets out a backtesting framework applicable to the multiperiod-ahead forecasts from stochastic mortality models and uses it to evaluate the forecasting performance of six different stochastic mortality models applied to English & Welsh male mortality data. The models considered are the following: Lee-Carter’s 1992 one-factor model; a version of Renshaw-Haberman’s 2006 extension of the Lee-Carter model to allow for a cohort effect; the age-period-cohort model, which is a simplified version of Renshaw-Haberman; Cairns, Blake, and Dowd’s 2006 two-factor model; and two generalized versions of the last named with an added cohort effect. For the data set used herein, the results from applying this methodology suggest that the models perform adequately by most backtests and that prediction intervals that incorporate parameter uncertainty are wider than those that do not. We also find little difference between the performances of five of the models, but the remaining model shows considerable forecast instability.
Article
In recent years mortality has improved considerably faster than had been predicted, resulting in unforeseen mortality losses for annuity and pension liabilities. Actuaries have considered various models to make stochastic mortality projections, one of which is the celebrated Lee-Carter model. In using the Lee-Carter model, mortality forecasts are made on the basis of the assumed linearity of a mortality index, the parameter k_t, in the model. However, if this index is indeed not linear, forecasts will tend to be biased and inaccurate. A primary objective of this paper is to examine the linearity of this index by rigorous statistical hypothesis tests. Specifically, we consider Zivot and Andrews’ procedure to determine if there are any structural breaks in the Lee-Carter mortality indexes for the general populations of England and Wales and the United States. The results indicate that there exists a statistically significant structural breakpoint in each of the indexes, suggesting that forecasters should be extra cautious when they extrapolate these indexes. Our findings also provide sound statistical evidence for some demographers’ observation of an accelerated mortality decline after the mid-1970s.
Article
Can death rates be reduced for octogenarians, nonagenarians, and even centenarians? It is widely assumed that mortality at advanced ages is attributable to old age per se and that death rates at advanced ages cannot be substantially reduced. Using a larger body of data than previously available, the authors find that developed countries have made progress in reducing death rates even at the highest ages. Furthermore, the pace of this progress has accelerated over the course of the twentieth century. In most developed countries outside Eastern Europe, average death rates at ages 80-99 have declined at a rate of 1 to 2 percent per year for females and 0.5 to 1.5 percent per year for males since the 1960s. For an aggregate of nine countries with reliable data through 1991, the annual average rate of improvement between 1982-86 and 1987-91 was 1.7 percent for male octogenarians and 2.5 percent for female octogenarians.
Article
In this paper, we investigate the construction of mortality indexes using the time-varying parameters in common stochastic mortality models. We first study how existing models can be adapted to satisfy the new-data-invariant property, a property that is required to ensure the resulting mortality indexes are tractable by market participants. Among the collection of adapted models, we find that the adapted Model M7 (the Cairns–Blake–Dowd model with cohort and quadratic age effects) is the most suitable model for constructing mortality indexes. One basis of this conclusion is that the adapted model M7 gives the best fitting and forecasting performance when applied to data over the age range of 40-90 for various populations. Another basis is that the three time-varying parameters in it are highly interpretable and rich in information content. Based on the three indexes created from this model, one can write a standardized mortality derivative called K-forward, which can be used to hedge longevity risk exposures. Another contribution of this paper is a method called key K-duration that permits one to calibrate a longevity hedge formed by K-forward contracts. Our numerical illustrations indicate that a K-forward hedge has a potential to outperform a q-forward hedge in terms of the number of hedging instruments required.
Article
In this paper, we propose an alternative approach for forecasting mortality for multiple populations jointly. Our contribution is developed upon the generalized linear models introduced by Renshaw et al. (1996) and Sithole et al. (2000), in which mortality forecasts are generated within the model structure, without the need of additional stochastic processes. To ensure that the resulting forecasts are coherent, a modified time-transformation is developed to stipulate the expected mortality differential between two populations to remain constant when the long-run equilibrium is attained. The model is then further extended to incorporate a structural change, an important property that is observed in the historical mortality data of many national populations. The proposed modeling methods are illustrated with data from two different pairs of populations: (1) Swedish and Danish males; (2) English and Welsh males and U.K. male insured lives.
Chapter
This chapter focuses on the longevity risk associated with the provision of retirement income. It addresses solutions for transferring longevity risk via the capital markets from hedgers (pension plans and insurers) to end investors, and reviews two kinds of longevity risk transfer transactions that have been implemented over the course of 2008 and are in the public domain. One important residual risk that cannot be hedged with standardized hedges is population basis risk. This is the risk that the mortality improvement experience of the underlying pension plan or annuity portfolio differs from that of the index used in the hedge. With appropriately chosen hedging building blocks, the basis risk with respect to age and gender can be virtually eliminated, leaving a basis risk relating to differences in the socioeconomic profiles of the two populations.
Article
When hedging longevity risk with standardized contracts, the hedger needs to calibrate the hedge carefully so that it can effectively reduce the risk. In this article, we present a calibration method that is based on matching mortality rate sensitivities. Specifically, we introduce a measure called key q-duration, which allows us to estimate the price sensitivity of a life-contingent liability to each portion of the underlying mortality curve. Given this measure, one can easily construct a longevity hedge with a small number of q-forward contracts. We further propose an extension for hedging the longevity risk associated with multiple birth cohorts, and another extension for accommodating population basis risk.
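The sensitivity-matching idea described above can be sketched generically: given the liability's sensitivities to a few key mortality rates, the hedge notionals solve a small linear system. All numbers below are illustrative assumptions, not the key q-duration calibration from the article itself.

```python
import numpy as np

# Illustrative sensitivities of the liability value to three key mortality
# rates (assumed numbers, standing in for the liability's key q-durations).
liability_sens = np.array([120.0, 95.0, 60.0])

# Sensitivities of one unit of each of three q-forward contracts to the
# same key rates (columns = contracts; assumed numbers).
instrument_sens = np.array([
    [10.0,  2.0,  0.5],
    [ 1.5,  9.0,  2.0],
    [ 0.2,  1.0,  8.0],
])

# Choose notionals so the hedge's sensitivities offset the liability's:
# solve  instrument_sens @ notionals = liability_sens.
notionals = np.linalg.solve(instrument_sens, liability_sens)

# Residual key-rate sensitivity of the hedged position is (numerically) zero.
residual = liability_sens - instrument_sens @ notionals
```

With as many instruments as key rates the system is exactly determined; with fewer instruments one would instead minimize the residual sensitivities in a least-squares sense, which is where the choice of which key rates to match becomes important.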
Article
Exact tests of the hypothesis that the mean vector of an observed T-variate random variable follows a conventional fixed-coefficient linear model, against the alternative that the regression parameters vary according to a first-order Markov process, are derived. Power functions of the tests are investigated in order to provide means of choosing a test.
Article
In this book, Andrew Harvey sets out to provide a unified and comprehensive theory of structural time series models. Unlike the traditional ARIMA models, structural time series models consist explicitly of unobserved components, such as trends and seasonals, which have a direct interpretation. As a result the model selection methodology associated with structural models is much closer to econometric methodology. The link with econometrics is made even closer by the natural way in which the models can be extended to include explanatory variables and to cope with multivariate time series. From the technical point of view, state space models and the Kalman filter play a key role in the statistical treatment of structural time series models. The book includes a detailed treatment of the Kalman filter. This technique was originally developed in control engineering, but is becoming increasingly important in fields such as economics and operations research. This book is concerned primarily with modelling economic and social time series, and with addressing the special problems which the treatment of such series poses. The properties of the models and the methodological techniques used to select them are illustrated with various applications. These range from the modelling of trends and cycles in US macroeconomic time series to an evaluation of the effects of seat belt legislation in the UK.
Article
The Lee-Carter model and its variants have been extensively employed by actuaries, demographers, and many others to forecast age-specific mortality. In this study, we use mortality data from England and Wales, and four Scandinavian countries to perform time-series outlier analysis of the key component of the Lee-Carter model – the mortality index. We begin by employing a systematic outlier detection process to ascertain the timing, magnitude, and persistence of any outliers present in historical mortality trends. We then try to match the identified outliers with significant events that could plausibly explain the fluctuations in human mortality levels. At the same time, we adjust the effect of the outliers for model re-estimation. A new iterative model re-estimation method is proposed to reduce the chance of erroneous model specification. The empirical results indicate that the outlier-adjusted model could achieve more efficient forecasts of variables such as death rates and life expectancies. Finally, we point out that the Lee-Carter forecasts are especially vulnerable to outliers near the forecast origin, and discuss the potential limitations of the application of the Lee-Carter model to mortality forecasting.
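The detection step described above can be caricatured as flagging innovations of a random-walk-with-drift fit that exceed a robust threshold. The toy index series and the MAD-based cutoff below are simplifying assumptions, not the paper's systematic procedure:

```python
# Much-simplified outlier flagging for a mortality index: fit a random
# walk with drift and flag years whose innovations exceed 3 robust
# standard deviations (MAD-based). Toy data; the level spike at t=6
# shows up as a large positive then large negative innovation.
k = [0.0, -0.5, -1.0, -1.6, -2.0, -2.6, -1.0, -3.6, -4.1]

diffs = [k[i + 1] - k[i] for i in range(len(k) - 1)]
drift = sum(diffs) / len(diffs)          # RWD drift estimate
resid = [d - drift for d in diffs]       # innovations

# Robust scale via the median absolute deviation (MAD).
abs_res = sorted(abs(r) for r in resid)
mid = len(abs_res) // 2
mad = (abs_res[mid - 1] + abs_res[mid]) / 2 if len(abs_res) % 2 == 0 else abs_res[mid]
scale = 1.4826 * mad                     # consistency factor for normal data

outliers = [i + 1 for i, r in enumerate(resid) if abs(r) > 3 * scale]
```

A robust scale estimate matters here: a single spike inflates the ordinary sample standard deviation enough to mask itself, which is one motivation for the iterative re-estimation the abstract proposes.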
Article
A nonlinear differential equation of the Riccati type is derived for the covariance matrix of the optimal filtering error. The solution of this "variance equation" completely specifies the optimal filter for either finite or infinite smoothing intervals and stationary or nonstationary statistics. The variance equation is closely related to the Hamiltonian (canonical) differential equations of the calculus of variations. Analytic solutions are available in some cases. The significance of the variance equation is illustrated by examples which duplicate, simplify, or extend earlier results in this field. The Duality Principle relating stochastic estimation and deterministic control problems plays an important role in the proof of theoretical results. In several examples, the estimation problem and its dual are discussed side-by-side. Properties of the variance equation are of great interest in the theory of adaptive systems. Some aspects of this are considered briefly.
Article
Time series methods are used to make long-run forecasts, with confidence intervals, of age-specific mortality in the United States from 1990 to 2065. First, the logs of the age-specific death rates are modeled as a linear function of an unobserved period-specific intensity index, with parameters depending on age. This model is fit to the matrix of U.S. death rates, 1933 to 1987, using the singular value decomposition (SVD) method; it accounts for almost all the variance over time in age-specific death rates as a group. Whereas e0 has risen at a decreasing rate over the century and has decreasing variability, k(t) declines at a roughly constant rate and has roughly constant variability, facilitating forecasting. k(t), which indexes the intensity of mortality, is next modeled as a time series (specifically, a random walk with drift) and forecast. The method performs very well on within-sample forecasts, and the forecasts are insensitive to reductions in the length of the base period from 90 to 30 years; some instability appears for base periods of 10 or 20 years, however. Forecasts of age-specific rates are derived from the forecasts of k, and other life table variables are derived and presented. These imply an increase of 10.5 years in life expectancy to 86.05 in 2065 (sexes combined), with a confidence band of plus 3.9 or minus 5.6 years, including uncertainty concerning the estimated trend. Whereas 46% now survive to age 80, by 2065 46% will survive to age 90. Of the gains forecast for person-years lived over the life cycle from now until 2065, 74% will occur at age 65 and over. These life expectancy forecasts are substantially lower than direct time series forecasts of e0, and have far narrower confidence bands; however, they are substantially higher than the forecasts of the Social Security Administration's Office of the Actuary.
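The two-stage procedure described above can be sketched in a few lines: fit the bilinear model log m(x,t) = a(x) + b(x)k(t) by SVD, then forecast k(t) as a random walk with drift. The synthetic data and dimensions below are illustrative assumptions, not the U.S. rates used in the paper:

```python
# Minimal Lee-Carter sketch: SVD fit of log m(x,t) = a(x) + b(x) k(t),
# then a random-walk-with-drift forecast of k(t). Synthetic data only.
import numpy as np

rng = np.random.default_rng(0)
ages, years = 5, 40
true_k = -0.02 * np.arange(years)              # declining mortality index
logm = (np.linspace(-5, -2, ages)[:, None]     # a(x): level by age
        + np.linspace(0.5, 1.5, ages)[:, None] * true_k[None, :]
        + 0.01 * rng.standard_normal((ages, years)))

a = logm.mean(axis=1)                          # a(x) = row means
U, s, Vt = np.linalg.svd(logm - a[:, None], full_matrices=False)
b, k = U[:, 0], s[0] * Vt[0]                   # leading singular pair
b, k = b / b.sum(), k * b.sum()                # usual normalization: sum(b) = 1

drift = (k[-1] - k[0]) / (years - 1)           # MLE drift of the random walk
k_forecast = k[-1] + drift * np.arange(1, 11)  # 10-year central forecast
```

The drift estimate depends only on the first and last values of k, which is one reason (noted in the abstract) the method is insensitive to moderate shortening of the base period but unstable for very short ones.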
Article
The locally most powerful test is derived for the hypothesis that the regression coefficients are constant over time against the alternative that they vary according to the random walk process. When the regression equation contains the constant term only, comparisons are made with the tests suggested by LaMotte and McWhorter (1978). These are based on exact powers and on three different types of asymptotic efficiencies including the classical Pitman and Bahadur approaches and the new one due to Gregory (1980). The concept of the Bahadur efficiency is extended to cover also the random slopes. Suggestions are made for choosing the test.
Article
This article compares two methodologies for modeling and forecasting statistical time series models of demographic processes: Box-Jenkins ARIMA and structural time series analysis. The Lee-Carter method is used to construct nonlinear demographic models of U.S. mortality rates for the total population, gender, and race and gender combined. Single time-varying parameters of k, the index of mortality, are derived from these models, then fitted and forecast using the two methodologies. Forecasts of life expectancy at birth, e0, are generated from these indexes of k. Results show marginal differences in fit and forecasts between the two statistical approaches with a slight advantage to structural models. Stability across models for both methodologies offers support for the robustness of this approach to demographic forecasting.
Article
In December 2007, Goldman Sachs launched a product called QxX index swap, which is designed to allow market participants to hedge or gain exposure to longevity and mortality risks. In this paper, we offer a quantitative analysis of this brand new financial innovation. First of all, we set up a risk-neutral framework to price QxX index swaps. This framework, which is based on the dynamics of death rates under a two-factor stochastic mortality model in a risk-adjusted probability measure, yields prices (spreads) that are fairly close to the spreads that Goldman Sachs currently offers. We then explore the uncertainty involved in this model-based pricing framework. Specifically, we study parameter risk by using Bayesian methods and model risk by examining structural changes in mortality dynamics. Our results indicate that both model risk and parameter risk are significant. Actuaries should therefore be aware of these issues when placing a value on a longevity index swap.
Article
This paper looks at the development of dynamic hedging strategies for typical pension plan liabilities using longevity-linked hedging instruments. Progress in this area has been hindered by the lack of closed-form formulas for the valuation of mortality-linked liabilities and assets, and the consequent requirement for simulations within simulations. We propose use of the probit function along with a Taylor expansion to approximate longevity-contingent values. This makes it possible to develop and implement computationally efficient, discrete-time Delta hedging strategies using q-forwards as hedging instruments.
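A Delta with respect to the period index can also be obtained by direct bump-and-revalue, which the sketch below uses in place of the probit/Taylor approximation the abstract proposes. The CBD-style mortality function, the toy annuity, and all parameter values are assumptions for illustration:

```python
# One-step Delta calculation for a q-forward hedge: value a toy annuity
# liability as a function of a single period index k, bump k, and match
# sensitivities. All functions and parameters are hypothetical.
import math

def q_from_k(k, age, xbar=75.0, slope=0.1):
    """CBD-style inverse-logit one-year death probability given index k."""
    z = k + slope * (age - xbar)
    return 1.0 / (1.0 + math.exp(-z))

def annuity_value(k, age=70, years=10, r=0.03):
    """PV of 1 per year while a life aged `age` survives (toy liability)."""
    pv, surv = 0.0, 1.0
    for t in range(1, years + 1):
        surv *= 1.0 - q_from_k(k, age + t - 1)
        pv += surv / (1.0 + r) ** t
    return pv

k0, bump = -4.0, 1e-5
delta_liab = (annuity_value(k0 + bump) - annuity_value(k0)) / bump  # < 0
delta_qf = (q_from_k(k0 + bump, 70) - q_from_k(k0, 70)) / bump      # > 0

# Matching Deltas sizes the position; the sign of the notional maps to
# paying or receiving fixed depending on the q-forward's payoff convention.
notional = delta_liab / delta_qf
```

The paper's contribution is to replace the inner revaluation with a closed-form approximation, which avoids the simulations-within-simulations the abstract mentions; the bump-and-revalue version here is the slow but transparent baseline.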
Article
We consider situations where a pension plan has opted to hedge its longevity risk using an index-based longevity hedging instrument such as a q-forward or deferred longevity swap. The use of index-based hedges gives rise to basis risk, but benefits, potentially, from lower costs to the hedger and greater liquidity. We focus on quantification of optimal hedge ratios and hedge effectiveness and investigate how robust these quantities are relative to inclusion of recalibration risk, parameter uncertainty and Poisson risk. We find that strategies are robust relative to the inclusion of parameter uncertainty and Poisson risk. In contrast, single-instrument hedging strategies are found to lack robustness relative to the inclusion of recalibration risk at the future valuation date, although we also demonstrate that some hedging instruments are more robust than others. To address this problem, we develop multi-instrument hedging strategies that are robust relative to recalibration risk.
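A common variance-based definition of the quantities studied above is hedge effectiveness HE = 1 - Var(hedged)/Var(unhedged), attained with the variance-minimizing hedge ratio h = Cov(L, H)/Var(H). The simulated P&L below, with a common longevity factor plus idiosyncratic basis risk, is entirely hypothetical:

```python
# Toy illustration of hedge effectiveness under basis risk, using the
# variance-based definition HE = 1 - Var(hedged)/Var(unhedged) and the
# variance-minimizing hedge ratio. All figures are hypothetical.
import random

random.seed(2)
n = 5000
factor = [random.gauss(0, 1) for _ in range(n)]            # common longevity shock
liab = [2.0 * f + random.gauss(0, 0.5) for f in factor]    # liability P&L
instr = [1.0 * f + random.gauss(0, 0.3) for f in factor]   # index hedge P&L

def var(x):
    m = sum(x) / len(x)
    return sum((v - m) ** 2 for v in x) / len(x)

def cov(x, y):
    mx, my = sum(x) / len(x), sum(y) / len(y)
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / len(x)

h = cov(liab, instr) / var(instr)                 # variance-minimizing ratio
hedged = [l - h * i for l, i in zip(liab, instr)]
effectiveness = 1.0 - var(hedged) / var(liab)     # residual = basis risk
```

The residual variance after hedging is exactly the basis-risk component; recalibration risk, the paper's focus, enters when h itself must be re-estimated at a future valuation date.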
Article
This paper studies the hedging problem of life insurance policies, when the mortality and interest rates are stochastic. We focus primarily on stochastic mortality. We represent death arrival as the first jump time of a doubly stochastic process, i.e. a jump process with stochastic intensity. We propose a Delta-Gamma Hedging technique for mortality risk in this context. The risk factor against which to hedge is the difference between the actual mortality intensity in the future and its "forecast" today, the instantaneous forward intensity. We specialize the hedging technique first to the case in which survival intensities are affine, then to Ornstein-Uhlenbeck and Feller processes, providing actuarial justifications for this restriction. We show that, without imposing no arbitrage, we can get equivalent probability measures under which the HJM condition for no arbitrage is satisfied. Last, we extend our results to the presence of both interest rate and mortality risk, when the forward interest rate follows a constant-parameter Hull and White process. We provide a UK calibrated example of Delta and Gamma Hedging of both mortality and interest rate risk.
Article
The Lee-Carter method for mortality forecasting is outlined, discussed and improved utilizing standard time series approaches. The new framework, which integrates estimation and forecasting, delivers more robust results and permits more detailed insight into underlying mortality dynamics. An application to women's mortality data illustrates the methods.
Article
This paper develops a framework for developing forecasts of future mortality rates. We discuss the suitability of six stochastic mortality models for forecasting future mortality and estimating the density of mortality rates at different ages. In particular, the models are assessed individually with reference to the following qualitative criteria that focus on the plausibility of their forecasts: biological reasonableness; the plausibility of predicted levels of uncertainty in forecasts at different ages; and the robustness of the forecasts relative to the sample period used to fit the model. An important, though unsurprising, conclusion is that a good fit to historical data does not guarantee sensible forecasts. We also discuss the issue of model risk, common to many modelling situations in demography and elsewhere. We find that even for those models satisfying our qualitative criteria, there are significant differences among central forecasts of mortality rates at different ages and among the distributions surrounding those central forecasts.
Article
The classical filtering and prediction problem is re-examined using the Bode-Shannon representation of random processes and the "state-transition" method of analysis of dynamic systems. New results are: (1) The formulation and methods of solution of the problem apply without modification to stationary and nonstationary statistics and to growing-memory and infinite-memory filters. (2) A nonlinear difference (or differential) equation is derived for the covariance matrix of the optimal estimation error. From the solution of this equation the coefficients of the difference (or differential) equation of the optimal linear filter are obtained without further calculations. (3) The filtering problem is shown to be the dual of the noise-free regulator problem. The new method developed here is applied to two well-known problems, confirming and extending earlier results. The discussion is largely self-contained and proceeds from first principles; basic concepts of the theory of random processes are reviewed in the Appendix.
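The "variance equation" of result (2) can be illustrated in its scalar discrete-time form: iterating the Riccati recursion drives the error variance P to a steady state. Parameter values below are arbitrary:

```python
# Scalar discrete-time "variance equation": the Riccati recursion for the
# error variance P of a random-walk state observed in additive noise.
# Parameter values are arbitrary illustrations.
A, H = 1.0, 1.0      # state transition and observation coefficients
Q, R = 0.01, 1.0     # process and measurement noise variances

P = 1.0              # initial error variance
for _ in range(200):
    P_pred = A * P * A + Q                   # time update
    K = P_pred * H / (H * P_pred * H + R)    # Kalman gain
    P = (1.0 - K * H) * P_pred               # measurement update

# At the fixed point, P solves P = (P + Q) * R / (P + Q + R).
residual = P - (P + Q) * R / (P + Q + R)
```

As the paper notes, the error covariance is obtained from this recursion alone, before any data arrive; the filter coefficients (here, the gain K) follow without further calculation.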
Article
In this article, we consider the evolution of the post-age-60 mortality curve in the United Kingdom and its impact on the pricing of the risk associated with aggregate mortality improvements over time: so-called longevity risk. We introduce a two-factor stochastic model for the development of this curve through time. The first factor affects mortality-rate dynamics at all ages in the same way, whereas the second factor affects mortality-rate dynamics at higher ages much more than at lower ages. The article then examines the pricing of longevity bonds with different terms to maturity referenced to different cohorts. We find that longevity risk over relatively short time horizons is very low, but at horizons in excess of ten years it begins to pick up very rapidly. A key component of the article is the proposal and development of a method for calculating the market risk-adjusted price of a longevity bond. The proposed adjustment includes not just an allowance for the underlying stochastic mortality, but also makes an allowance for parameter risk. We utilize the pricing information contained in the November 2004 European Investment Bank longevity bond to make inferences about the likely market prices of the risks in the model. Based on these, we investigate how future issues might be priced to ensure an absence of arbitrage between bonds with different characteristics.
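The two-factor structure described above is the CBD model, logit q(t,x) = k1(t) + k2(t)(x - xbar). A toy simulation with both period factors following independent random walks with drift reproduces the qualitative finding that longevity risk is small over short horizons and picks up rapidly at longer ones. All parameter values below are invented for illustration:

```python
# Hypothetical simulation sketch of a two-factor CBD-style model:
# logit q(t, x) = k1(t) + k2(t) * (x - xbar), with (k1, k2) following
# independent random walks with drift. Parameters are illustrative only.
import math
import random

random.seed(1)
xbar = 75.0
mu1, mu2 = -0.02, 0.0002    # drifts of the two period factors
s1, s2 = 0.02, 0.0003       # volatilities (independence assumed here)

def simulate_q(age, horizon, n_paths=2000):
    """Simulated one-year death probabilities q(t, age) at the horizon."""
    out = []
    for _ in range(n_paths):
        k1, k2 = -4.0, 0.10  # illustrative starting values
        for _ in range(horizon):
            k1 += mu1 + s1 * random.gauss(0, 1)
            k2 += mu2 + s2 * random.gauss(0, 1)
        z = k1 + k2 * (age - xbar)
        out.append(1.0 / (1.0 + math.exp(-z)))  # inverse logit
    return out

qs_2y = simulate_q(age=80, horizon=2)
qs_20y = simulate_q(age=80, horizon=20)

def spread(sample):
    """Width of the central 90% interval of a simulated sample."""
    s = sorted(sample)
    return s[int(0.95 * len(s))] - s[int(0.05 * len(s))]
```

Because the factor variances grow linearly in the horizon, the fan of simulated death probabilities widens with time, matching the abstract's observation that longevity risk is very low over short horizons but grows rapidly beyond about ten years.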
Article
The belief that old-age mortality is intractable remains deeply held by many people. Remarkable progress, however, has been made since 1950, and especially since 1970, in substantially improving survival at older ages, even the most advanced ages. The pace of mortality improvement at older ages continues to be particularly rapid in Japan, even though mortality levels in Japan are lower than elsewhere. The progress in improving survival has accelerated the growth of the population of older people and has advanced the frontier of human survival substantially beyond the extremes of longevity attained in pre-industrial times. Little, however, is known about why mortality among the oldest-old has been so plastic since 1950. The little that is known has largely been learned within the past few years. New findings, especially concerning genetic factors that influence longevity, are emerging at an accelerating rate.
Article
Lee and Carter (LC) published a new statistical method for forecasting mortality in 1992. This paper examines its actual and hypothetical forecast errors, and compares them with Social Security forecast errors. Hypothetical historical projections suggest that LC tended to underproject gains, but by less than did Social Security. True e0 was within the ex ante 95% probability interval 97% of the time overall, but intervals were too broad up to 40 years and too narrow after 50 years. Projections to 1998 made after 1945 always contain errors of less than two years. Hypothetical projections for France, Sweden, Japan, and Canada would have done well. Changing age patterns of mortality decline over the century pose problems for the method.
Article
Is human life expectancy approaching its limit? Many, including individuals planning their retirement and officials responsible for health and social policy, believe it is, but the evidence presented in the Policy Forum (http://www.sciencemag.org/cgi/content/full/296/5570/1029) suggests otherwise. For 160 years, best-performance life expectancy has steadily increased by a quarter of a year per year, an extraordinary constancy of human achievement. Mortality experts have repeatedly asserted that life expectancy is close to an ultimate ceiling; these experts have repeatedly been proven wrong. The apparent leveling off of life expectancy in various countries is an artifact of laggards catching up and leaders falling behind.