Article

Ensemble Economic Scenario Generators: Unity Makes Strength


Abstract

Over the last 40 years, various frameworks have been proposed to model economic and financial variables relevant to actuaries. These models are helpful, but searching for a unique model that gives optimal forecasting performance can be frustrating and ultimately futile. This study therefore investigates whether we can create better, more reliable economic scenario generators by combining them. We first consider eight prominent economic scenario generators and apply Bayesian estimation techniques to them, thus allowing us to account for parameter uncertainty. We then rely on predictive distribution stacking to obtain optimal model weights that prescribe how the models should be averaged. The weights are constructed in a leave-future-out fashion to build truly out-of-sample forecasts. An extensive empirical study based on three economies (the United States, Canada, and the United Kingdom) and data from 1992 to 2021 is performed. We find that the optimal weights change over time and differ from one economy to another. The out-of-sample behavior of the ensemble model compares favorably to the other eight models: the ensemble model's performance is substantially better than that of the worst models and comparable to that of the best models. Creating ensembles is thus beneficial from an out-of-sample perspective because it allows for robust and reasonable forecasts.
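To make the ensemble mechanics concrete, here is a minimal Python sketch (not the authors' code) of how scenarios could be simulated once stacking weights are in hand: each simulated path is drawn from one of the component generators with probability equal to that generator's weight, so the ensemble is a mixture of the components. The two toy generators, the weights, and the horizon are all hypothetical.

import numpy as np

rng = np.random.default_rng(42)

# Two hypothetical component generators: each returns one simulated path of
# an economic variable (say, the annual inflation rate) over `horizon` years.
def random_walk_path(horizon):
    return 0.02 + np.cumsum(rng.normal(0.0, 0.01, size=horizon))

def ar1_path(horizon, mu=0.02, phi=0.6, sigma=0.012):
    x, path = mu, []
    for _ in range(horizon):
        x = mu + phi * (x - mu) + rng.normal(0.0, sigma)
        path.append(x)
    return np.array(path)

generators = [random_walk_path, ar1_path]
weights = np.array([0.3, 0.7])   # stacking weights, assumed already estimated

def ensemble_scenarios(n_paths, horizon):
    """Draw ensemble scenarios: path i comes from generator k with
    probability weights[k], i.e., the ensemble is a mixture of the models."""
    choices = rng.choice(len(generators), size=n_paths, p=weights)
    return np.array([generators[k](horizon) for k in choices])

paths = ensemble_scenarios(n_paths=1000, horizon=30)
print(paths.shape)                           # (1000, 30)
print(paths[:, -1].mean(), paths[:, -1].std())

Sampling the model index per path, rather than averaging the paths themselves, preserves the dispersion implied by each component generator.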

... His framework, which relies on the Box-Jenkins approach, is based on four connected models: an inflation model, a long interest rate model, a dividend yield model, and a stock index return model. In a follow-up article, ... We use a homoskedastic autoregressive model for price inflation, wage inflation, and the dividend yield with (monetary) regime-dependent long-run levels, as supported by the conclusions of Bégin ( , 2023). ...
Preprint
Full-text available
This study investigates the benefits and drawbacks of pension plan consolidation by quantifying the impact of mergers on different stakeholders in a unique Canadian implementation of defined benefit plans. Using a comprehensive framework, we evaluate the combined effect of asset- and liability-side changes on three groups of measures: plan-related risk measures assessing profits from an economic capital perspective, consumption-based metrics to understand the impact on members, and contribution risk measures capturing the risk from the employer's viewpoint. We apply the framework to a hypothetical and empirically relevant merger and find that consolidation is favourable under most circumstances.
Article
Full-text available
This study investigates the benefits and drawbacks of pension plan consolidation by quantifying the impact of mergers of heterogeneous plans on different stakeholders in a unique Canadian implementation of defined benefit plans. Using a comprehensive framework that combines a realistic economic scenario generator, a stochastic mortality model that captures differences among subpopulations, a cost model with economies of scale, and a dynamic asset allocation methodology, we evaluate the combined effect of asset- and liability-side changes on three groups of measures: plan-related risk measures assessing profits from an economic capital perspective, consumption-based metrics to understand the impact on members, and contribution risk measures capturing the risk from the employer’s viewpoint. We apply the framework to a hypothetical and empirically relevant merger and find that consolidation is favorable under most circumstances: the positive impacts of better diversification and economies of scale continue to outweigh the negative effects of heterogeneity even when the merging plans have different mortality expectations, different maturity levels, or modest differences in initial funded ratios.
Article
Full-text available
The purpose of this paper is to develop the most probable scenarios and to determine the strategic directions and effective tools for Ukrainian industrial recovery that will ensure the resilience of the economy under military challenges. The strategic-scenario method makes it possible to examine how Ukrainian industry may develop during the war and the post-war recovery. Methodology. System analysis and logical modeling were used to describe the transition of Ukrainian production from the current military crisis to the target state; structural analysis was used to define the system of indicators characterizing the resilience of the industry. For this purpose, national (State Statistics Service of Ukraine) and international (World Bank, Eurostat) databases characterizing the level and structure of industrial development over the last five years were used. The calculation method takes into account two criteria of change in each indicator: direction (growth or decline) and rate of change, based on the compound annual growth rate over the five-year period. The study applied trend-impact analysis to the formation of strategic scenarios under unpredictable situations (wartime uncertainty), assessing how the probability of a set of events changes when one of them actually occurs; this made it possible to identify trends, justify scenarios, and take them into account when analyzing the prospects for industrial development to strengthen the defense capability and economic growth of Ukraine. The results show that the strategic scenarios for Ukrainian industrial development will need to be adjusted for post-war recovery in the case of a long-term external military threat to state sovereignty. Achieving the strategic goals depends on the driving forces determining industrial development in Ukraine. The main indicators chosen to characterize industrial development trends in Ukraine reflect the efficiency of productive forces: indicators of industrial production efficiency; labor productivity; indicators of innovative development; and performance indicators of foreign economic activity and investment development. Taking into account the influence of each driving force under wartime uncertainty, three scenarios of industrial development were developed: a conditionally positive scenario, in which the economic system gradually stabilizes owing to the cessation of hostilities and the recovery of production capacity; a conditionally negative scenario, characterized by the disintegration of the economic system and the destruction of energy infrastructure, in which negative trends dominate; and a conditionally neutral (baseline) scenario, in which the disintegration of the economic system does not reach extreme levels and industrial production develops in areas not affected by hostilities. Practical implications. The key problem in restoring economic stability in Ukraine is creating conditions favorable to industrial business, which depends on balanced strategic policy decisions.
Transforming industry into an effective force for reviving the Ukrainian economy during the war and post-war period requires balanced strategic management of future development, because it is crucial to meet the unprecedented demands that the war places on the country's available resources and to prevent social, humanitarian, economic, financial, environmental, and military crises. At the same time, traditional methods of indicative planning cannot take into account all factors of wartime uncertainty; therefore, grounding future development vectors in scenario planning makes it possible to minimize threats and realize potential opportunities. Value/originality. Strategic scenarios support better economic recovery planning aligned with long-term national priorities and the development strategies of related industries and sectors, helping to ensure the resilience of Ukrainian manufacturing.
Article
Full-text available
Introduction. This paper examines the importance of scenario-based strategizing for industrial development under the uncertainty caused by human, material, and non-material losses during wartime. The main approaches to developing industrial development scenarios are systematized. It is established that scenario methods can be used to cope with future uncertainty by envisioning plausible futures and identifying paths toward desirable targets. Based on a study of the main indicators characterizing the internal opportunities for manufacturing development in Ukraine (the dynamics of sold product volume, labor productivity, industrial energy consumption, research and innovation activity, high-tech exports, etc.), the authors identify the driving forces and weaknesses of Ukrainian industry, as well as the opportunities and threats that innovative transformation poses for manufacturing in the war and post-war period. Proposals for a quick and effective manufacturing recovery, aligned with the development policy priorities of the European Community and relying on country-level collaboration between governments, business, and civil society, are substantiated. Materials and methods. The study used system analysis and logical modeling to explain the transition of industry from the current situation to the target one, and structural analysis to define the system of indicators characterizing the sustainability of the industry. For this purpose, national (State Statistics Service of Ukraine) and international (World Bank, Eurostat) databases characterizing the level and structure of industrial development over the past five years were used. The calculation methodology takes into account two criteria of change in each indicator: direction (growth or decline) and rate of change; trends for quantitative measures are based on the compound annual growth rate over the five-year period. The study applied elements of trend-impact analysis to the formation of strategic scenarios under unpredictable situations (wartime uncertainty), assessing how the probability of a given set of events changes when one of them actually occurs; this made it possible to identify trends, justify scenarios, and take them into account when analyzing the prospects for industrial development to strengthen the defense capability and economic growth of Ukraine. Results and discussion. Strategic scenarios for Ukrainian industrial development will need to be adjusted for post-war recovery in the case of a long-term external military threat to the state sovereignty of the country. This underscores the importance of maintaining a consistently high level of state defense capability and further prioritizes the development of an advanced industrial complex that creates the resources needed to meet the needs not only of the civilian population but also of the army. Achieving the strategic goals depends on the driving forces that determine industrial development in the country.
The main indicators chosen to characterize industrial development trends in Ukraine reflect the efficiency of productive forces: indicators of industrial production efficiency; labor productivity; indicators of innovative development; and performance indicators of foreign economic activity and investment development. This choice reflects the fact that a strong industrial base should generate productive and stable employment and, as a result, rising average wages. In addition, industry should ensure the production of socially significant goods (food, medicine, hygiene items, clothing and footwear, fuel). Taking into account the influence of each driving force under wartime uncertainty, three scenarios of industrial development were developed: a conditionally positive scenario, in which the economic system gradually stabilizes owing to the cessation of hostilities and the recovery of manufacturing capacity; a conditionally negative scenario, characterized by the disintegration of the economic system and the destruction of energy infrastructure, in which negative trends dominate; and a conditionally neutral (baseline) scenario, in which the turbulence of the economic system does not reach extreme levels and industrial production develops in areas not affected by hostilities. Conclusions. The key problem in restoring Ukraine's economic stability is providing conditions favorable to industrial business, which depends on balanced strategic policy decisions. Transforming industry into an effective force for the revival of Ukrainian business during wartime and the post-war period requires balanced strategic management of the economy's future development, because it is critically important to meet the unprecedented demands that the war places on the country's available resources and to prevent social, humanitarian, economic, financial, environmental, and military crises. At the same time, traditional methods of indicative planning cannot take into account all factors of wartime uncertainty; therefore, grounding future development vectors in scenario planning makes it possible to minimize threats and realize potential opportunities.
Article
Full-text available
This article proposes a complex economic scenario generator that nests versions of well-known actuarial frameworks. The generator's estimation relies on the Bayesian paradigm and accounts for both model and parameter uncertainty via Markov chain Monte Carlo methods. So, to the question "is less more?", we answer: maybe, but it depends on your criteria. From an in-sample fit perspective, on the one hand, a complex economic scenario generator seems better. From the conservatism, forecasting, and coverage perspectives, on the other hand, the situation is less clear: having more complex models for the short rate, term structure, and stock index returns is clearly beneficial, but that is not the case for inflation and the dividend yield.
Article
Full-text available
One of the common goals of time series analysis is to use the observed series to inform predictions for future observations. In the absence of any actual new data to predict, cross-validation can be used to estimate a model's future predictive accuracy, for instance, for the purpose of model comparison or selection. Exact cross-validation for Bayesian models is often computationally expensive, but approximate cross-validation methods have been developed, most notably methods for leave-one-out cross-validation (LOO-CV). If the actual prediction task is to predict the future given the past, LOO-CV provides an overly optimistic estimate because the information from future observations is available to influence predictions of the past. To properly account for the time series structure, we can use leave-future-out cross-validation (LFO-CV). Like exact LOO-CV, exact LFO-CV requires refitting the model many times to different subsets of the data. Using Pareto smoothed importance sampling, we propose a method for approximating exact LFO-CV that drastically reduces the computational costs while also providing informative diagnostics about the quality of the approximation.
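As a concrete illustration of the evaluation scheme (not of the paper's Pareto smoothed importance sampling approximation), the following Python sketch computes exact leave-future-out cross-validation for a simple Gaussian AR(1) model fitted by least squares: the model is refitted on data up to time t and scored on the one-step-ahead log predictive density of observation t. The AR(1) model, the minimum window of 30 observations, and the toy series are assumptions.

import numpy as np

def fit_ar1(y):
    """Least-squares fit of y_t = c + phi * y_{t-1} + eps, eps ~ N(0, sigma^2)."""
    X = np.column_stack([np.ones(len(y) - 1), y[:-1]])
    coef, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
    resid = y[1:] - X @ coef
    sigma = resid.std(ddof=2)
    return coef[0], coef[1], sigma

def lfo_cv_logscore(y, min_obs=30):
    """Exact leave-future-out CV: refit on y[:t] and score the one-step-ahead
    predictive density of y[t], for t = min_obs, ..., len(y) - 1."""
    logscores = []
    for t in range(min_obs, len(y)):
        c, phi, sigma = fit_ar1(y[:t])           # uses only past observations
        mean = c + phi * y[t - 1]
        logscores.append(-0.5 * np.log(2 * np.pi * sigma**2)
                         - 0.5 * ((y[t] - mean) / sigma) ** 2)
    return np.sum(logscores)

rng = np.random.default_rng(0)
y = 0.02 + 0.1 * np.cumsum(rng.normal(0, 0.01, 200))   # toy time series
print(lfo_cv_logscore(y))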
Article
Full-text available
In this article, we study parameter uncertainty and its actuarial implications in the context of economic scenario generators. To account for this additional source of uncertainty in a consistent manner, we cast Wilkie’s four-factor framework into a Bayesian model. The posterior distribution of the model parameters is estimated using Markov chain Monte Carlo methods and is used to perform Bayesian predictions on the future values of the inflation rate, the dividend yield, the dividend index return and the long-term interest rate. According to the US data, parameter uncertainty has a significant impact on the dispersion of the four economic variables of Wilkie’s framework. The impact of such parameter uncertainty is then assessed for a portfolio of annuities: the right tail of the loss distribution is significantly heavier when parameters are assumed random and when this uncertainty is estimated in a consistent manner. The risk measures on the loss variable computed with parameter uncertainty are at least 12% larger than their deterministic counterparts.
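The following Python sketch illustrates the general idea on a single component, a Gaussian AR(1) model for inflation estimated with a random-walk Metropolis sampler, so that posterior predictive draws integrate over parameter uncertainty. It is a simplified stand-in for the article's four-factor Bayesian Wilkie model; the priors, proposal step sizes, and synthetic data are assumptions.

import numpy as np

rng = np.random.default_rng(1)

# Toy inflation series from a known AR(1), used only for illustration.
true_mu, true_phi, true_sigma = 0.03, 0.6, 0.01
y = [true_mu]
for _ in range(120):
    y.append(true_mu + true_phi * (y[-1] - true_mu) + rng.normal(0, true_sigma))
y = np.array(y)

def log_post(theta, y):
    """Log posterior of the AR(1) model y_t = mu + phi*(y_{t-1} - mu) + N(0, sigma^2),
    with weak Gaussian priors; returns -inf outside the stationary region."""
    mu, phi, log_sigma = theta
    sigma = np.exp(log_sigma)
    if abs(phi) >= 1:
        return -np.inf
    resid = y[1:] - (mu + phi * (y[:-1] - mu))
    loglik = -len(resid) * np.log(sigma) - 0.5 * np.sum(resid**2) / sigma**2
    logprior = -0.5 * mu**2 - 0.5 * phi**2 - 0.5 * (log_sigma / 5.0) ** 2
    return loglik + logprior

def metropolis(y, n_iter=20000, step=np.array([0.002, 0.05, 0.05])):
    theta = np.array([y.mean(), 0.5, np.log(y.std())])
    lp = log_post(theta, y)
    draws = []
    for _ in range(n_iter):
        prop = theta + step * rng.normal(size=3)
        lp_prop = log_post(prop, y)
        if np.log(rng.uniform()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        draws.append(theta.copy())
    return np.array(draws[n_iter // 2:])            # discard burn-in

draws = metropolis(y)
# Posterior predictive draw of next year's inflation, integrating out parameters.
mu_d, phi_d, ls_d = draws[:, 0], draws[:, 1], draws[:, 2]
pred = mu_d + phi_d * (y[-1] - mu_d) + np.exp(ls_d) * rng.normal(size=len(draws))
print(np.percentile(pred, [5, 50, 95]))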
Article
Full-text available
The goal of this paper is to compare several widely used Bayesian model selection methods in practical model selection problems, highlight their differences and give recommendations about the preferred approaches. We focus on the variable subset selection for regression and classification and perform several numerical experiments using both simulated and real world data. The results show that the optimization of a utility estimate such as the cross-validation score is liable to finding overfitted models due to relatively high variance in the utility estimates when the data is scarce. Better and much less varying results are obtained by incorporating all the uncertainties into a full encompassing model and projecting this information onto the submodels. The reference model projection appears to outperform also the maximum a posteriori model and the selection of the most probable variables. The study also demonstrates that the model selection can greatly benefit from using cross-validation outside the searching process both for guiding the model size selection and assessing the predictive performance of the finally selected model.
Article
Full-text available
In this paper we adopt the multiple time-series modelling approach suggested by Tiao & Box (1981) to construct a stochastic investment model for price inflation, share dividends, share dividend yields and long-term interest rates in the United Kingdom. This method has the advantage of being direct and transparent. The sequential and iterative steps of tentative specification, estimation and diagnostic checking parallel those of the well-known Box-Jenkins method in univariate time-series analysis. It is not required to specify any a priori causality, as compared with some other stochastic asset models in the literature.
Article
Full-text available
In this paper I first define the regime-switching lognormal model. Monthly data from the Standard and Poor's 500 and the Toronto Stock Exchange 300 indices are used to fit the model parameters, using maximum likelihood estimation. The fit of the regime-switching model to the data is compared with other common econometric models, including the generalized autoregressive conditionally heteroskedastic model. The distribution function of the regime-switching model is derived. Prices of European options using the regime-switching model are derived and implied volatilities explored. Finally, an example of the application of the model to maturity guarantees under equity-linked insurance is presented. Equations for quantile and conditional tail expectation (Tail-VaR) risk measures are derived, and a numerical example compares the regime-switching lognormal model results with those using the more traditional lognormal stock return model.
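A compact Python sketch of the estimation step is given below: the two-regime lognormal (RSLN-2) likelihood for monthly log returns is evaluated with the Hamilton filter and maximized numerically with scipy. The synthetic return series, starting values, and optimizer settings are assumptions, not the paper's original setup.

import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def rsln2_negloglik(params, r):
    """Negative log-likelihood of a 2-regime lognormal model for log returns r,
    computed with the Hamilton filter."""
    mu1, mu2, s1, s2, p12, p21 = params
    s1, s2 = abs(s1), abs(s2)
    p12 = np.clip(p12, 1e-6, 1 - 1e-6)
    p21 = np.clip(p21, 1e-6, 1 - 1e-6)
    P = np.array([[1 - p12, p12], [p21, 1 - p21]])   # regime transition matrix
    pi = np.array([p21, p12]) / (p12 + p21)          # stationary initial probs
    loglik = 0.0
    for x in r:
        dens = np.array([norm.pdf(x, mu1, s1), norm.pdf(x, mu2, s2)])
        joint = pi * dens
        lik_t = joint.sum()
        loglik += np.log(lik_t)
        pi = (joint / lik_t) @ P                     # predicted regime probs for t+1
    return -loglik

rng = np.random.default_rng(3)
# Synthetic monthly log returns with a calm and a turbulent regime.
r = np.concatenate([rng.normal(0.01, 0.03, 300), rng.normal(-0.01, 0.08, 60)])

x0 = [0.01, -0.01, 0.03, 0.08, 0.05, 0.2]
res = minimize(rsln2_negloglik, x0, args=(r,), method="Nelder-Mead",
               options={"maxiter": 5000})
print(res.x)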
Article
Full-text available
We consider the problem of speaker diarization, the problem of segmenting an audio recording of a meeting into temporal segments corresponding to individual speakers. The problem is rendered particularly difficult by the fact that we are not allowed to assume knowledge of the number of people participating in the meeting. To address this problem, we take a Bayesian nonparametric approach to speaker diarization that builds on the hierarchical Dirichlet process hidden Markov model (HDP-HMM) of Teh et al. (2006). Although the basic HDP-HMM tends to over-segment the audio data, creating redundant states and rapidly switching among them, we describe an augmented HDP-HMM that provides effective control over the switching rate. We also show that this augmentation makes it possible to treat emission distributions nonparametrically. To scale the resulting architecture to realistic diarization problems, we develop a sampling algorithm that employs a truncated approximation of the Dirichlet process to jointly resample the full state sequence, greatly improving mixing rates. Working with a benchmark NIST data set, we show that our Bayesian nonparametric architecture yields state-of-the-art speaker diarization results.
Article
Full-text available
The aim of this paper is to construct Bayesian model comparison tests between discrete distributions used for claim count modeling in the actuarial field. We use advanced computational techniques to estimate the posterior model odds among different distributions for claim counts. We construct flexible reversible jump Markov chain Monte Carlo algorithms and implement them in various illustrated examples.
Chapter
Full-text available
To improve forecasting accuracy, combine forecasts derived from methods that differ substantially and draw from different sources of information. When feasible, use five or more methods. Use formal procedures to combine forecasts: an equal-weights rule offers a reasonable starting point, and a trimmed mean is desirable if you combine forecasts resulting from five or more methods. Use different weights if you have good domain knowledge or information on which method should be most accurate. Combining forecasts is especially useful when you are uncertain about the situation, uncertain about which method is most accurate, and when you want to avoid large errors. Compared with errors of the typical individual forecast, combining reduces errors. In 30 empirical comparisons, the reduction in ex ante errors for equally weighted combined forecasts averaged about 12.5% and ranged from 3% to 24%. Under ideal conditions, combined forecasts were sometimes more accurate than their most accurate components.
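The combination rules above are simple to implement; a short Python sketch with hypothetical forecast values follows.

import numpy as np

# Hypothetical forecasts of next year's inflation from five different methods.
forecasts = np.array([0.021, 0.025, 0.019, 0.032, 0.024])

equal_weight = forecasts.mean()

def trimmed_mean(x, trim=1):
    """Drop the `trim` lowest and highest forecasts, then average the rest."""
    x = np.sort(x)
    return x[trim:len(x) - trim].mean()

print(equal_weight, trimmed_mean(forecasts))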
Article
Full-text available
We summarize the literature on the effectiveness of combining forecasts by assessing the conditions under which combining is most valuable. Using data on the six US presidential elections from 1992 to 2012, we report the reductions in error obtained by averaging forecasts within and across four election forecasting methods: poll projections, expert judgment, quantitative models, and the Iowa Electronic Markets. Across the six elections, the resulting combined forecasts were more accurate than any individual component method, on average. The gains in accuracy from combining increased with the numbers of forecasts used, especially when these forecasts were based on different methods and different data, and in situations involving high levels of uncertainty. Such combining yielded error reductions of between 16% and 59%, compared to the average errors of the individual forecasts. This improvement is substantially greater than the 12% reduction in error that had been reported previously for combining forecasts.
Article
Full-text available
Projections of future climate change caused by increasing greenhouse gases depend critically on numerical climate models coupling the ocean and atmosphere (GCMs). However, different models differ substantially in their projections, which raises the question of how the different models can best be combined into a probability distribution of future climate change. For this analysis, we have collected both current and future projected mean temperatures produced by nine climate models for 22 regions of the earth. We also have estimates of current mean temperatures from actual observations, together with standard errors, that can be used to calibrate the climate models. We propose a Bayesian analysis that allows us to combine the different climate models into a posterior distribution of future temperature increase, for each of the 22 regions, while allowing for the different climate models to have different variances. Two versions of the analysis are proposed, a univariate analysis in which each region is analyzed separately, and a multivariate analysis in which the 22 regions are combined into an overall statistical model. A cross-validation approach is proposed to confirm the reasonableness of our Bayesian predictive distributions. The results of this analysis allow for a quantification of the uncertainty of climate model projections as a Bayesian posterior distribution, substantially extending previous approaches to uncertainty in climate models.
Article
Full-text available
This paper introduces stacked generalization, a scheme for minimizing the generalization error rate of one or more generalizers. Stacked generalization works by deducing the biases of the generalizer(s) with respect to a provided learning set. This deduction proceeds by generalizing in a second space whose inputs are (for example) the guesses of the original generalizers when taught with part of the learning set and trying to guess the rest of it, and whose output is (for example) the correct guess. When used with multiple generalizers, stacked generalization can be seen as a more sophisticated version of cross-validation, exploiting a strategy more sophisticated than cross-validation's crude winner-takes-all for combining the individual generalizers. When used with a single generalizer, stacked generalization is a scheme for estimating (and then correcting for) the error of a generalizer which has been trained on a particular learning set and then asked a particular question. After introducing stacked generalization and justifying its use, this paper presents two numerical experiments. The first demonstrates how stacked generalization improves upon a set of separate generalizers for the NETtalk task of translating text to phonemes. The second demonstrates how stacked generalization improves the performance of a single surface-fitter. With the other experimental evidence in the literature, the usual arguments supporting cross-validation, and the abstract justifications presented in this paper, the conclusion is that for almost any real-world generalization problem one should use some version of stacked generalization to minimize the generalization error rate. This paper ends by discussing some of the variations of stacked generalization, and how it touches on other fields like chaos theory.
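A minimal Python sketch of stacked generalization with scikit-learn follows: out-of-fold predictions from two base regressors form the level-1 inputs, and a linear meta-learner combines them. The synthetic dataset and the choice of base and meta learners are assumptions; the sketch mirrors the idea rather than the paper's NETtalk experiments.

import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.model_selection import KFold
from sklearn.tree import DecisionTreeRegressor

X, y = make_regression(n_samples=400, n_features=8, noise=10.0, random_state=0)
base_models = [Ridge(alpha=1.0), DecisionTreeRegressor(max_depth=4, random_state=0)]

# Level-1 data: out-of-fold predictions of each base model ("guesses" on data
# the model was not trained on), which is what stacked generalization requires.
kf = KFold(n_splits=5, shuffle=True, random_state=0)
meta_X = np.zeros((len(y), len(base_models)))
for train_idx, test_idx in kf.split(X):
    for j, model in enumerate(base_models):
        model.fit(X[train_idx], y[train_idx])
        meta_X[test_idx, j] = model.predict(X[test_idx])

# Level-2 (meta) learner combines the base predictions.
meta_model = LinearRegression().fit(meta_X, y)
print("combination weights:", meta_model.coef_)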
Article
Full-text available
A proper choice of a proposal distribution for Markov chain Monte Carlo methods, for example for the Metropolis-Hastings algorithm, is well known to be a crucial factor for the convergence of the algorithm. In this paper we introduce an adaptive Metropolis (AM) algorithm, where the Gaussian proposal distribution is updated along the process using the full information cumulated so far. Due to the adaptive nature of the process, the AM algorithm is non-Markovian, but we establish here that it has the correct ergodic properties. We also include the results of our numerical tests, which indicate that the AM algorithm competes well with traditional Metropolis-Hastings algorithms, and demonstrate that the AM algorithm is easy to use in practical computation.
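The sketch below illustrates the Adaptive Metropolis idea in Python on an assumed bivariate normal target: after an initial non-adaptive phase, the Gaussian proposal covariance is taken from the empirical covariance of the chain so far, scaled by 2.4^2/d as in Haario et al. The target, tuning constants, and iteration counts are illustrative, and the covariance is recomputed from scratch rather than updated recursively as in the original algorithm.

import numpy as np

rng = np.random.default_rng(7)

# Assumed target: a correlated bivariate normal log-density.
Sigma = np.array([[1.0, 0.9], [0.9, 1.0]])
Sigma_inv = np.linalg.inv(Sigma)

def log_target(x):
    return -0.5 * x @ Sigma_inv @ x

def adaptive_metropolis(n_iter=5000, d=2, adapt_start=500, eps=1e-6):
    sd = 2.4**2 / d                 # proposal scaling from Haario et al. (2001)
    x = np.zeros(d)
    lp = log_target(x)
    chain = [x.copy()]
    for t in range(1, n_iter):
        if t < adapt_start:
            cov = 0.1 * np.eye(d)   # fixed proposal during the initial phase
        else:
            # Empirical covariance of the full history so far, plus a small
            # regularization term to keep the proposal non-degenerate.
            cov = sd * np.cov(np.array(chain).T) + sd * eps * np.eye(d)
        prop = rng.multivariate_normal(x, cov)
        lp_prop = log_target(prop)
        if np.log(rng.uniform()) < lp_prop - lp:
            x, lp = prop, lp_prop
        chain.append(x.copy())
    return np.array(chain)

chain = adaptive_metropolis()
print(np.cov(chain[1000:].T))       # should be close to Sigma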
Article
Predicting the evolution of mortality rates plays a central role for life insurance and pension funds. Various stochastic frameworks have been developed to model mortality patterns by taking into account the main stylized facts driving these patterns. However, relying on the prediction of one specific model can be too restrictive and can lead to some well-documented drawbacks, including model misspecification, parameter uncertainty, and overfitting. To address these issues we first consider mortality modeling in a Bayesian negative-binomial framework to account for overdispersion and the uncertainty about the parameter estimates in a natural and coherent way. Model averaging techniques are then considered as a response to model misspecifications. In this paper, we propose two methods based on leave-future-out validation and compare them to standard Bayesian model averaging (BMA) based on marginal likelihood. An intensive numerical study is carried out over a large range of simulation setups to compare the performances of the proposed methodologies. An illustration is then proposed on real-life mortality datasets, along with a sensitivity analysis to a Covid-type scenario. Overall, we found that both methods based on an out-of-sample criterion outperform the standard BMA approach in terms of prediction performance and robustness.
Article
Many alternative approaches for selecting mortality models and forecasting mortality have been proposed. The usual practice is to base forecasts on a single mortality model selected using in-sample goodness-of-fit measures. However, cross-validation measures are increasingly being used in model selection, and model combination methods are becoming a common alternative to using a single mortality model. We propose and assess a stacked regression ensemble that optimally combines different mortality models to reduce out-of-sample mean squared errors and mitigate model selection risk. Stacked regression uses a meta-learner to approximate horizon-specific weights by minimizing a cross-validation criterion for each forecasting horizon. The horizon-specific weights determine a mortality model combination customized to each horizon. We use 44 populations from the Human Mortality Database to compare the stacked regression ensemble with alternative methods. We show that, using one-year-ahead to 15-year-ahead out-of-sample mean squared errors, the stacked regression ensemble improves mortality forecast accuracy by 13% - 49% for males and 19% - 90% for females over individual mortality models. The stacked regression ensembles also have better predictive accuracy than other model combination methods, including Simple Model Averaging, Bayesian Model Averaging, and Model Confidence Set. We provide an R package, CoMoMo, that combines forecasts for Generalized-Age-Period-Cohort models.
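A sketch of the weight-finding step, under assumed inputs rather than the CoMoMo implementation, is given below in Python: for one forecasting horizon, non-negative weights summing to one are chosen to minimize the cross-validation mean squared error of the combined forecast.

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)

# Synthetic cross-validation data for one forecasting horizon:
# columns = forecasts from 3 candidate mortality models, y = realized values.
n, m = 60, 3
y = rng.normal(size=n)
F = y[:, None] + rng.normal(scale=[0.3, 0.6, 1.0], size=(n, m))

def cv_mse(w, F, y):
    return np.mean((F @ w - y) ** 2)

# Minimize CV mean squared error subject to w >= 0 and sum(w) == 1.
cons = {"type": "eq", "fun": lambda w: w.sum() - 1.0}
res = minimize(cv_mse, x0=np.full(m, 1.0 / m), args=(F, y),
               bounds=[(0.0, 1.0)] * m, constraints=cons, method="SLSQP")
print("horizon-specific weights:", res.x.round(3))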
Article
The retirement systems in many developed countries have been increasingly moving from defined benefit towards defined contribution system. In defined contribution systems, financial and longevity risks are shifted from pension providers to retirees. In this paper, we use a probabilistic approach to analyse the uncertainty associated with superannuation accumulation and decumulation. We apply an economic scenario generator called the Simulation of Uncertainty for Pension Analysis (SUPA) model to project uncertain future financial and economic variables. This multi-factor stochastic investment model, based on the Monte Carlo method, allows us to obtain the probability distribution of possible outcomes regarding the superannuation accumulation and decumulation phases, such as relevant percentiles. We present two examples to demonstrate the implementation of the SUPA model for the uncertainties during both phases under the current superannuation and Age Pension policy, and test two superannuation policy reforms suggested by the Grattan Institute.
Article
The Wilkie economic scenario generator has had a significant influence on economic scenario generators since the first formal publication in 1986 by Wilkie. In this article we update the model parameters using U.S. data to 2014, and review the model performance. In particular, we consider stationarity assumptions, parameter stability, and structural breaks.
Article
We propose a statistical model of the term structure of U.S. treasury yields tailored for long-term probability-based scenario generation and forecasts. Our model is easy to estimate and is able to simultaneously reproduce the positivity, persistence, and factor structure of the yield curve. Moreover, we incorporate heteroskedasticity and time-varying correlations across yields, both prevalent features of the data. The model also features a regime-switching short-rate model. We evaluate the out-of-sample performance of our model in terms of forecasting ability and coverage properties, and find that it improves on the standard Diebold and Li model.
Article
The widely recommended procedure of Bayesian model averaging is flawed in the M-open setting in which the true data-generating process is not one of the candidate models being fit. We take the idea of stacking from the point estimation literature and generalize to the combination of predictive distributions, extending the utility function to any proper scoring rule, using Pareto smoothed importance sampling to efficiently compute the required leave-one-out posterior distributions and regularization to get more stability. We compare stacking of predictive distributions to several alternatives: stacking of means, Bayesian model averaging (BMA), pseudo-BMA using AIC-type weighting, and a variant of pseudo-BMA that is stabilized using the Bayesian bootstrap. Based on simulations and real-data applications, we recommend stacking of predictive distributions, with BB-pseudo-BMA as an approximate alternative when computation cost is an issue.
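The core optimization behind stacking of predictive distributions can be sketched in a few lines of Python: given a matrix of leave-one-out predictive densities (assumed already computed, for example by exact LOO or Pareto smoothed importance sampling), the weights maximize the summed log of the pointwise mixture density over the simplex. The synthetic density matrix below is illustrative; this is not the loo package implementation.

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(11)

# Synthetic leave-one-out predictive densities p(y_i | y_{-i}, M_k):
# rows = observations, columns = candidate models.
n, K = 80, 3
loo_dens = np.exp(rng.normal(loc=[-1.2, -1.0, -1.5], scale=0.3, size=(n, K)))

def neg_log_score(w, dens):
    """Negative of sum_i log(sum_k w_k * p_ik), the stacking objective."""
    return -np.sum(np.log(dens @ w + 1e-300))

cons = {"type": "eq", "fun": lambda w: w.sum() - 1.0}
res = minimize(neg_log_score, x0=np.full(K, 1.0 / K), args=(loo_dens,),
               bounds=[(0.0, 1.0)] * K, constraints=cons, method="SLSQP")
print("stacking weights:", res.x.round(3))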
Article
Episode Treatment Groups (ETGs) classify related services into medically relevant and distinct units describing an episode of care. Proper model selection for those ETG-based costs is essential to adequately price and manage health insurance risks. The optimal claim cost model (or model probabilities) can vary depending on the disease. We compare four potential models (lognormal, gamma, log-skew-t and Lomax) using four different model selection methods (AIC and BIC weights, Random Forest feature classification and Bayesian model averaging) on 320 ETGs. Using the data from a major health insurer, which consists of more than 33 million observations from 9 million claimants, we compare the various methods on both speed and precision, and also examine the wide range of selected models for the different ETGs. Several case studies are provided for illustration. It is found that Random Forest feature selection is computationally efficient and sufficiently accurate, hence being preferred in this large data set. When feasible (on smaller data sets), Bayesian model averaging is preferred because of the posterior model probabilities.
Article
Changes in residual volatility are often used for identifying structural shocks in vector autoregressive (VAR) analysis. A number of different models for heteroskedasticity or conditional heteroskedasticity are proposed and used in applications in this context. The different volatility models are reviewed and their advantages and drawbacks are indicated. An application investigating the interaction between U.S. monetary policy and the stock market illustrates the related issues.
Article
The Bayesian paradigm provides a natural way to deal with uncertainty in model selection through assigning each model in a list of models under consideration a posterior probability, with these probabilities providing a basis for inferences or used as weights in model-averaged predictions. Unfortunately, this framework relies on the assumption that one of the models in the list is the true model. When this assumption is violated, the model that is closest in Kullback-Leibler divergence to the true model is often assigned probability converging to one asymptotically. However, when all the models are imperfect, interpretation of posterior model probabilities is unclear. We propose a new approach which relies on evaluating parametric Bayesian models relative to a nonparametric Bayesian reference using Kullback-Leibler divergence. This leads to a new notion of absolute posterior model probabilities, which can be used to assess the quality of imperfect models. Some properties of this framework are described. We consider an application to linear model selection against a Gaussian process reference, providing simple analytic forms for routine implementation. The framework is illustrated through simulations and applications.
Article
A FIMAG Working Party was set up in 1989 to consider the stochastic investment model proposed by A. D. Wilkie, which had been used by a number of actuaries for various purposes, but had not itself been discussed at the Institute. This is the Report of that Working Party. First, the Wilkie model is described. Then the model is reviewed, and alternative types of model are discussed. Possible applications of the model are considered, and the important question of ‘actuarial judgement’ is introduced. Finally the Report looks at possible future developments. In appendices, Clarkson describes a specific alternative model for inflation, and Wilkie describes some experiments with ARCH models. In further appendices possible applications of stochastic investment models to pension funds, to life assurance and to investment management are discussed.
Article
This paper searches for the best asset allocation model. We argue that it is unlikely that an individual model will continuously outperform its competitors; rather, one should consider a combined model drawn from a given set of asset allocation models. In a large empirical study using various standard asset allocation models, we find that (i) the best model depends strongly on the chosen data set, (ii) it is difficult to select the best model ex ante, and (iii) the combination of models performs exceptionally well. Frequently, the combination even outperforms the ex-post best asset allocation model. These promising results are obtained by a simple combination method based on a bootstrap procedure. More advanced combination approaches are likely to achieve even better results.
Article
We present a simulation-based method for solving discrete-time portfolio choice problems involving non-standard preferences, a large number of assets with arbitrary return distribution, and, most importantly, a large number of state variables with potentially path-dependent or non-stationary dynamics. The method is flexible enough to accommodate intermediate consumption, portfolio constraints, parameter and model uncertainty, and learning. We first establish the properties of the method for the portfolio choice between a stock index and cash when the stock returns are either iid or predictable by the dividend yield. We then explore the problem of an investor who takes into account the predictability of returns but is uncertain about the parameters of the data generating process. The investor chooses the portfolio anticipating that future data realizations will contain useful information to learn about the true parameter values.
Article
Simulated asset returns are used in many areas of actuarial science. For example, life insurers use them to price annuities, life insurance, and investment guarantees. The quality of those simulations has come under increased scrutiny during the current financial crisis. When simulating the asset price process, properly choosing which model or models to use, and accounting for the uncertainty in that choice, is essential. We investigate how best to choose a model from a flexible set of models. In our regime-switching models the individual regimes are not constrained to be from the same distributional family. Even with larger sample sizes, the standard model-selection methods (AIC, BIC, and DIC) incorrectly identify the models far too often. Rather than trying to identify the best model and limiting the simulation to a single distribution, we show that the simulations can be made more realistic by explicitly modeling the uncertainty in the model-selection process. Specifically, we consider a parallel model-selection method that provides the posterior probabilities of each model being the best, enabling model averaging and providing deeper insights into the relationships between the models. The value of the method is demonstrated through a simulation study, and the method is then applied to total return data from the S&P 500.
Article
We present an application of the reversible jump Markov chain Monte Carlo (RJMCMC) method to the important problem of setting claims reserves in general insurance business for the outstanding loss liabilities. A measure of the uncertainty in these claims reserves estimates is also needed for solvency purposes. The RJMCMC method described in this paper represents an improvement over the manual processes often employed in practice. In particular, our RJMCMC method describes parameter reduction and tail factor estimation in the claims reserving process, and, moreover, it provides the full predictive distribution of the outstanding loss liabilities.
Article
This paper reviews the United Kingdom stochastic asset model developed by Wilkie (1995b). Certain aspects of the methodology used to develop this model could be problematic. Moreover, Wilkie (1995b) did not provide a complete evaluation of his model; certain economic theories and the constancy of the model's parameter values did not appear to have been specifically considered. This paper attempts to provide a comprehensive review of Wilkie's model.
Article
1.1. The purpose of this paper is to present to the actuarial profession a stochastic investment model which can be used for simulations of “possible futures” extending for many years ahead. The ideas were first developed for the Maturity Guarantees Working Party (MGWP) whose report was published in 1980. The ideas were further developed in my own paper “Indexing Long Term Financial Contracts” (1981). However, these two papers restricted themselves to a consideration of ordinary shares and of inflation respectively, whereas in this paper I shall present what seems to me to be the minimum model that might be used to describe the total investments of a life office or pension fund.
Article
In this paper the ‘Wilkie investment model’ is discussed, updated and extended. The original model covered price inflation, share dividends, share dividend yields (and hence share prices) and long-term interest rates, and was based on data for the United Kingdom from 1919 to 1982, taken at annual intervals. The additional aspects now covered include: the extension of the data period to 1994 (with omission of the period from 1919 to 1923); the inclusion of models for a wages (earnings) index, short-term interest rates, property rentals and yields (and hence property prices) and yields on index-linked stock; consideration of data for observations more frequently than yearly, in particular monthly data; extension of the U.K. model to certain other countries; introduction of a model for currency exchange rates; extension of certain aspects of the model to a larger number of other countries; and consideration of more elaborate forms of time-series modelling, in particular cointegrated models and ARCH models.
Article
This article offers a synthesis of Bayesian and sample-reuse approaches to the problem of high structure model selection geared to prediction. Similar methods are used for low structure models. Nested and nonnested paradigms are discussed and examples given.
Article
The prequential approach is founded on the premises that the purpose of statistical inference is to make sequential probability forecasts for future observations, rather than to express information about parameters. Many traditional parametric concepts, such as consistency and efficiency, prove to have natural counterparts in this formulation, which sheds new light on these and suggests fruitful extensions.
Article
As investment guarantees become increasingly complex, realistic simulation of the price becomes more critical. Currently, regime-switching models are commonly used to simulate asset returns. Under a regime-switching model, simulating random asset streams involves three steps: (i) estimate the model parameters given the number of regimes using maximum likelihood, (ii) choose the number of regimes using a model selection criterion, and (iii) simulate the streams using the optimal number of regimes and parameter values. This method, however, does not properly incorporate regime or parameter uncertainty into the generated asset streams and therefore into the price of the guarantee. To remedy this, this article adopts a Bayesian approach to properly account for those two sources of uncertainty and improve pricing.
Article
We investigate the use of adaptive MCMC algorithms to automatically tune the Markov chain parameters during a run. Examples include the Adaptive Metropolis (AM) multivariate algorithm of Haario, Saksman, and Tamminen (2001), Metropolis-within-Gibbs algorithms for nonconjugate hierarchical models, regionally adjusted Metropolis algorithms, and logarithmic scalings. Computer simulations indicate that the algorithms perform very well compared to nonadaptive algorithms, even in high dimension.
Article
In structural vector autoregressive (SVAR) analysis a Markov regime switching (MS) property can be exploited to identify shocks if the reduced form error covariance matrix varies across regimes. Unfortunately, these shocks may not have a meaningful structural economic interpretation. It is discussed how statistical and conventional identifying information can be combined. The discussion is based on a VAR model for the US containing oil prices, output, consumer prices and a short-term interest rate. The system has been used for studying the causes of the early millennium economic slowdown based on traditional identification with zero and long-run restrictions and using sign restrictions. We find that previously drawn conclusions are questionable in our framework.
Article
Many financial time series processes appear subject to periodic structural changes in their dynamics. Regression relationships are often not robust to outliers nor stable over time, whilst the existence of changes in variance over time is well documented. This paper considers a vector autoregression subject to pseudocyclical structural changes. The parameters of a vector autoregression are modelled as the outcome of an unobserved discrete Markov process with unknown transition probabilities. The unobserved regimes, one for each time point, together with the regime transition probabilities, are to be determined in addition to the vector autoregression parameters within each regime. A Bayesian Markov Chain Monte Carlo estimation procedure is developed which generates the joint posterior density of the parameters and the regimes, rather than the more common point estimates. The complete likelihood surface is generated at the same time. The procedure can readily be extended to produce joint prediction densities for the variables, incorporating parameter uncertainty. Results using simulated and real data are provided. A clear separation of the variance between regimes is observed. Ignoring regime shifts is very likely to produce misleading volatility estimates, and is unlikely to be robust to outliers. A comparison with commonly used models suggests that the regime switching vector autoregression provides a particularly good description of the data.
Book
Academic finance has had a remarkable impact on many financial services, yet long-term investors have curiously received little guidance from academic financial economists. Using recent theoretical and empirical research, this book addresses the real world problem of how to develop a long term portfolio strategy. Written by a leading academic and bestselling author, this seminal work is a must read for practitioners, finance academics, and graduate students in finance. Available in OSO: http://www.oxfordscholarship.com/oso/public/content/economicsfinance//toc.html
Article
It is argued that in structural vector autoregressive (SVAR) analysis a Markov regime switching (MS) property can be exploited to identify shocks if the reduced form error covariance matrix varies across states. The model setup is formulated and discussed and it is shown how it can be used to test restrictions which are just-identifying in a standard structural vector autoregressive analysis. The approach is illustrated by two SVAR examples which have been reported in the literature and which have features that can be accommodated by the MS structure.
Article
Bagging predictors is a method for generating multiple versions of a predictor and using these to get an aggregated predictor. The aggregation averages over the versions when predicting a numerical outcome and does a plurality vote when predicting a class. The multiple versions are formed by making bootstrap replicates of the learning set and using these as new learning sets. Tests on real and simulated data sets using classification and regression trees and subset selection in linear regression show that bagging can give substantial gains in accuracy. The vital element is the instability of the prediction method. If perturbing the learning set can cause significant changes in the predictor constructed, then bagging can improve accuracy.
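A minimal Python sketch of bagging with scikit-learn follows: bootstrap replicates of the learning set each train an unstable learner (a fully grown regression tree), and their predictions are averaged. The synthetic data and number of bootstrap replicates are assumptions.

import numpy as np
from sklearn.datasets import make_regression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

X, y = make_regression(n_samples=500, n_features=10, noise=15.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

rng = np.random.default_rng(0)

def bagged_predict(X_tr, y_tr, X_te, n_bootstrap=50):
    """Fit one unstable learner (a deep tree) per bootstrap replicate of the
    learning set and average their predictions on the test set."""
    preds = []
    for _ in range(n_bootstrap):
        idx = rng.integers(0, len(y_tr), size=len(y_tr))   # bootstrap sample
        tree = DecisionTreeRegressor().fit(X_tr[idx], y_tr[idx])
        preds.append(tree.predict(X_te))
    return np.mean(preds, axis=0)

single = DecisionTreeRegressor(random_state=0).fit(X_tr, y_tr).predict(X_te)
bagged = bagged_predict(X_tr, y_tr, X_te)
print("single tree MSE:", mean_squared_error(y_te, single))
print("bagged MSE:     ", mean_squared_error(y_te, bagged))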
Article
In this paper, we consider the process of modelling uncertainty. In particular, we are concerned with making inferences about some quantity of interest which, at present, has been unobserved. Examples of such a quantity include the probability of ruin of a surplus process, the accumulation of an investment, the level of surplus or deficit in a pension fund and the future volume of new business in an insurance company. Uncertainty in this quantity of interest, y, arises from three sources: (1) uncertainty due to the stochastic nature of a given model; (2) uncertainty in the values of the parameters in a given model; (3) uncertainty in the model underlying what we are able to observe and determining the quantity of interest. It is common in actuarial science to find that the first source of uncertainty is the only one which receives rigorous attention. A limited amount of research in recent years has considered the effect of parameter uncertainty, while there is still considerable scope for development of methods which deal in a balanced way with model risk. Here we discuss a methodology which allows all three sources of uncertainty to be assessed in a more coherent fashion.
Article
A range of approximate methods have been proposed for model choice based on Bayesian principles, given the problems involved in multiple integration in multi-parameter problems. Formal Bayesian model assessment is based on prior model probabilities P(M=j) and posterior model probabilities P(M=j|Y) after observing the data. An approach is outlined here that produces posterior model probabilities and hence Bayes factor estimates but not marginal likelihoods. It uses a Monte Carlo approximation based on independent MCMC sampling of two or more different models. While parallel sampling of the models is not necessary, such a form of sampling facilitates model averaging and assessing the impact of individual observations on the overall estimated Bayes factor. Three worked examples used before in model choice studies illustrate application of the method.
Article
This paper investigates the impact of monetary policy on stock returns in 13 OECD countries over the period 1972–2002. Our results indicate that monetary policy shifts significantly affect stock returns, thereby supporting the notion of monetary policy transmission via the stock market. Our contribution with respect to previous work is threefold. First, we show that our findings are robust to various alternative measures of stock returns. Second, our inferences are adjusted for the non-normality exhibited by the stock returns data. Finally, we take into account the increasing co-movement among international stock markets. The sensitivity analysis indicates that the results remain largely unchanged.
Article
We extend and improve two existing methods of generating random correlation matrices, the onion method of Ghosh and Henderson [S. Ghosh, S.G. Henderson, Behavior of the norta method for correlated random vector generation as the dimension increases, ACM Transactions on Modeling and Computer Simulation (TOMACS) 13 (3) (2003) 276-294] and the recently proposed method of Joe [H. Joe, Generating random correlation matrices based on partial correlations, Journal of Multivariate Analysis 97 (2006) 2177-2189] based on partial correlations. The latter is based on the so-called D-vine. We extend the methodology to any regular vine and study the relationship between the multiple correlation and partial correlations on a regular vine. We explain the onion method in terms of elliptical distributions and extend it to allow generating random correlation matrices from the same joint distribution as the vine method. The methods are compared in terms of time necessary to generate 5000 random correlation matrices of given dimensions.
Article
Standard statistical practice ignores model uncertainty. Data analysts typically select a model from some class of models and then proceed as if the selected model had generated the data. This approach ignores the uncertainty in model selection, leading to over-confident inferences and decisions that are more risky than one thinks they are. Bayesian model averaging (BMA) provides a coherent mechanism for accounting for this model uncertainty. Several methods for implementing BMA have recently emerged. We discuss these methods and present a number of examples. In these examples, BMA provides improved out-of-sample predictive performance. We also provide a catalogue of currently available BMA software.
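As a small illustration of how BMA weights are often approximated in practice, the Python sketch below uses the BIC approximation to posterior model probabilities, P(M_k | Y) roughly proportional to exp(-BIC_k / 2) under equal prior odds, for three nested linear regression models. The data and candidate models are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(2)

# Illustrative data: y depends on x1 and (weakly) on x2, not on x3.
n = 200
x1, x2, x3 = rng.normal(size=(3, n))
y = 1.0 + 2.0 * x1 + 0.3 * x2 + rng.normal(scale=1.0, size=n)

def bic_ols(X, y):
    """BIC of a Gaussian linear model fit by OLS (constant included in X)."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / len(y)
    loglik = -0.5 * len(y) * (np.log(2 * np.pi * sigma2) + 1)
    k = X.shape[1] + 1                  # regression coefficients plus the variance
    return -2 * loglik + k * np.log(len(y))

ones = np.ones(n)
candidates = {
    "x1":       np.column_stack([ones, x1]),
    "x1+x2":    np.column_stack([ones, x1, x2]),
    "x1+x2+x3": np.column_stack([ones, x1, x2, x3]),
}
bics = np.array([bic_ols(X, y) for X in candidates.values()])
# Posterior model probabilities under equal prior odds, via the BIC approximation.
w = np.exp(-0.5 * (bics - bics.min()))
w /= w.sum()
for name, prob in zip(candidates, w):
    print(f"{name}: {prob:.3f}")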