Article

Dynamic conditional score models of degrees of freedom: filtering with score-driven heavy tails


Abstract

This article extends the quasi-autoregressive (QAR) plus Beta-t-EGARCH (exponential generalized autoregressive conditional heteroscedasticity) dynamic conditional score (DCS) model. In the new DCS model, the degrees-of-freedom parameter is time varying, and the tail thickness of the error term is updated by the conditional score. We compare the performance of QAR plus Beta-t-EGARCH with constant degrees of freedom (benchmark model) and QAR plus Beta-t-EGARCH with time-varying degrees of freedom (extended model). We use data from the Standard and Poor's 500 (S&P 500) index and a random sample of 150 of its components, drawn from different industries of the United States (US) economy. For the S&P 500, all likelihood-based model selection criteria support the extended model, which identifies extreme events with a significant impact on the US stock market. We find that the extended model has superior statistical performance for 59% of the 150 firms. The results suggest that the extended model is superior for industries whose products people are usually unwilling to cut from their budgets, regardless of their financial situation. We also compare the density forecast performance of the two DCS models and perform a Monte Carlo value-at-risk application for both.
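To make the updating mechanism concrete, below is a minimal sketch of a conditional-score filter for time-varying Student-t degrees of freedom. The link function nu_t = exp(f_t) + 2, the coefficient values, and the numerical evaluation of the score are illustrative assumptions, not the authors' exact specification.

```python
# A minimal sketch, not the authors' specification: a score-driven filter
# for time-varying Student-t degrees of freedom, nu_t = exp(f_t) + 2.
import numpy as np
from scipy.stats import t as student_t

def loglik(y, f):
    """Log-density of y under a Student-t with nu = exp(f) + 2."""
    return student_t.logpdf(y, df=np.exp(f) + 2.0)

def filter_nu(y, omega=0.05, alpha=0.10, beta=0.95, eps=1e-5):
    """Update f_t by the conditional score; coefficients are illustrative."""
    f = np.empty(len(y))
    f[0] = omega / (1.0 - beta)  # start at the unconditional level
    for t in range(len(y) - 1):
        # numerical score of the log-likelihood with respect to f_t
        score = (loglik(y[t], f[t] + eps) - loglik(y[t], f[t] - eps)) / (2 * eps)
        f[t + 1] = omega + beta * f[t] + alpha * score
    return np.exp(f) + 2.0  # implied degrees-of-freedom path nu_t

rng = np.random.default_rng(0)
nu_path = filter_nu(rng.standard_t(df=5, size=500))
```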


Presentation
We present the Beta-t-QVAR (quasi-vector autoregression) model for the joint modelling of score-driven location plus scale of strictly stationary and ergodic variables. Beta-t-QVAR is an extension of Beta-t-EGARCH (exponential generalized autoregressive conditional heteroscedasticity) and Beta-t-EGARCH-M (Beta-t-EGARCH-in-mean). We prove the asymptotic properties of the maximum likelihood (ML) estimator for correctly specified Beta-t-QVAR models. We use Dow Jones Industrial Average (DJIA) data for the period of 1985 to 2020. We find that the volatility forecasting accuracy of Beta-t-QVAR is superior to the volatility forecasting accuracies of Beta-t-EGARCH, Beta-t-EGARCH-M, A-PARCH (asymmetric power ARCH), and GARCH for the period of 2010 to 2020.
Preprint
In this paper, we introduce Beta-t-QVAR (quasi-vector autoregression) for the joint modelling of score-driven location and scale. Asymptotic theory of the maximum likelihood (ML) estimator is presented, and sufficient conditions of consistency and asymptotic normality of ML are proven. For the joint score-driven modelling of risk premium and volatility, Dow Jones Industrial Average (DJIA) data are used in an empirical illustration. Prediction accuracy of Beta-t-QVAR is superior to the prediction accuracies of Beta-t-EGARCH (exponential generalized AR conditional heteroscedasticity), A-PARCH (asymmetric power ARCH), and GARCH (generalized ARCH). The empirical results motivate the use of Beta-t-QVAR for the valuation of DJIA options.
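As a rough illustration of joint score-driven location and scale, the sketch below updates a location and a log-scale parameter with the standard Student-t conditional scores; the diagonal dynamics and all coefficient values are assumptions for exposition, not the Beta-t-QVAR specification analysed in the paper.

```python
# A minimal sketch of joint score-driven location and log-scale filtering
# with Student-t conditional scores; dynamics and coefficients are
# illustrative, not the Beta-t-QVAR specification of the paper.
import numpy as np

def location_scale_filter(y, nu=5.0,
                          omega=(0.0, -0.1), beta=(0.90, 0.97),
                          alpha=(0.05, 0.05)):
    n = len(y)
    mu = np.zeros(n)
    lam = np.full(n, omega[1] / (1.0 - beta[1]))     # log-scale path
    for t in range(n - 1):
        eps = (y[t] - mu[t]) / np.exp(lam[t])        # standardized residual
        u_mu = (nu + 1.0) * eps / ((nu + eps**2) * np.exp(lam[t]))  # score wrt mu
        u_lam = (nu + 1.0) * eps**2 / (nu + eps**2) - 1.0           # score wrt lambda
        mu[t + 1] = omega[0] + beta[0] * mu[t] + alpha[0] * u_mu
        lam[t + 1] = omega[1] + beta[1] * lam[t] + alpha[1] * u_lam
    return mu, np.exp(lam)  # filtered location and scale paths
```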
Preprint
We introduce new dynamic conditional score (DCS) volatility models with dynamic scale and shape parameters for the effective measurement of volatility. In the new models, we use the EGB2 (exponential generalized beta of the second kind), NIG (normal-inverse Gaussian) and Skew-Gen-t (skewed generalized t) probability distributions. Those distributions involve several shape parameters that control the dynamic skewness, tail shape and peakedness of financial returns. We use daily return data from the Standard & Poor's 500 (S&P 500) index for the period of January 4, 1950 to December 30, 2017. We estimate all models by using the maximum likelihood (ML) method. We present new conditions for the asymptotic properties of the ML estimator by extending the DCS literature. We study those conditions for the S&P 500 and we also perform diagnostic tests for the residuals. The statistical performances of several DCS specifications with dynamic shape are superior to the statistical performance of the DCS specification with constant shape. Outliers in the shape parameters are associated with important announcements that affected the United States (US) stock market. Our results motivate the application of the new DCS models to volatility measurement, pricing financial derivatives, or estimation of the value-at-risk (VaR) and expected shortfall (ES) metrics.
Article
We propose a test for comparing the out-of-sample accuracy of competing density forecasts of a variable. The test is valid under general conditions: the data can be heterogeneous and the forecasts can be based on (nested or non-nested) parametric models or produced by semi-parametric, non-parametric or Bayesian estimation techniques. The evaluation is based on scoring rules, which are loss functions defined over the density forecast and the realizations of the variable. We restrict attention to the logarithmic scoring rule and propose an out-of-sample 'weighted likelihood ratio' test that compares weighted averages of the scores for the competing forecasts. The user-defined weights are a way to focus attention on different regions of the distribution of the variable. For a uniform weight function, the test can be interpreted as an extension of Vuong (1989)'s likelihood ratio test to time series data and to an out-of-sample testing framework. We apply the tests to evaluate density forecasts of US inflation produced by linear and Markov Switching Phillips curve models estimated by either maximum likelihood or Bayesian methods. We conclude that a Markov Switching Phillips curve estimated by maximum likelihood produces the best density forecasts of inflation.
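A minimal sketch of the test statistic, assuming the out-of-sample predictive log densities have already been evaluated at the realizations; the weight function and the Bartlett truncation lag are user choices.

```python
# A minimal sketch of the weighted likelihood ratio test: compare weighted
# averages of logarithmic scores with a HAC-studentized t-statistic.
import numpy as np
from scipy.stats import norm

def weighted_lr_test(logf, logg, weights, lag=4):
    """logf, logg: predictive log densities evaluated at the realizations."""
    d = weights * (np.asarray(logf) - np.asarray(logg))  # weighted score differentials
    n = len(d)
    u = d - d.mean()
    lrv = u @ u / n  # Newey-West long-run variance, Bartlett weights
    for k in range(1, lag + 1):
        lrv += 2.0 * (1.0 - k / (lag + 1.0)) * (u[k:] @ u[:-k] / n)
    tstat = d.mean() / np.sqrt(lrv / n)
    return tstat, 2.0 * norm.sf(abs(tstat))  # two-sided p-value
```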
Article
This paper addresses the problem of forecast evaluation in the context of a simple but realistic decision problem, and proposes a procedure for the evaluation of forecasts based on their average realized value to the decision maker. It is shown that, by concentrating on probability forecasts, stronger theoretical results can be achieved than if just event forecasts were used. A possible generalisation is considered concerning the use of the correct, conditional predictive density function when forming forecasts.
Article
The volatility of financial returns changes over time and, for the last thirty years, Generalized Autoregressive Conditional Heteroscedasticity (GARCH) models have provided the principal means of analyzing, modeling and monitoring such changes. Taking into account that financial returns typically exhibit heavy tails – that is, extreme values can occur from time to time – Andrew Harvey’s new book shows how a small but radical change in the way GARCH models are formulated leads to a resolution of many of the theoretical problems inherent in the statistical theory. The approach can also be applied to other aspects of volatility. The more general class of Dynamic Conditional Score models extends to robust modeling of outliers in the levels of time series and to the treatment of time-varying relationships. The statistical theory draws on basic principles of maximum likelihood estimation and, by doing so, leads to an elegant and unified treatment of nonlinear time-series modeling.
Article
An EGARCH model in which the conditional distribution is heavy-tailed and skewed is proposed. The properties of the model, including unconditional moments, autocorrelations and the asymptotic distribution of the maximum likelihood estimator, are set out. Evidence for skewness in a conditional t-distribution is found for a range of returns series, and the model is shown to give a better fit than comparable skewed-t GARCH models in nearly all cases. A two-component model gives further gains in goodness of fit and is able to mimic the long memory pattern displayed in the autocorrelations of the absolute values.
Article
This paper argues in favour of a closer link between decision and forecast evaluation problems. Although the idea of using decision theory for forecast evaluation appears early in the dynamic stochastic programming literature, and has continued to be used in meteorological forecasts, it is hardly mentioned in standard academic textbooks on economic forecasting. Some of the main issues involved are illustrated in the context of a two-state, two-action decision problem as well as in a more general setting. Relationships between statistical and economic methods of forecast evaluation are discussed and useful links between Kuipers score, used as a measure of forecast accuracy in the meteorology literature, and the market timing tests used in finance, are established. An empirical application to the problem of stock market predictability is also provided, and the conditions under which such predictability could be exploited in the presence of transaction costs are discussed.
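For reference, the Kuipers score is the hit rate minus the false-alarm rate of binary event forecasts; a minimal computation follows.

```python
# Kuipers score: hit rate minus false-alarm rate of binary event forecasts.
import numpy as np

def kuipers_score(forecast, outcome):
    forecast = np.asarray(forecast, dtype=bool)
    outcome = np.asarray(outcome, dtype=bool)
    hit_rate = (forecast & outcome).sum() / max(outcome.sum(), 1)
    false_alarm = (forecast & ~outcome).sum() / max((~outcome).sum(), 1)
    return hit_rate - false_alarm
```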
Article
We propose a class of observation-driven time series models referred to as generalized autoregressive score (GAS) models. The mechanism to update the parameters over time is the scaled score of the likelihood function. This new approach provides a unified and consistent framework for introducing time-varying parameters in a wide class of nonlinear models. The GAS model encompasses other well-known models such as the generalized autoregressive conditional heteroskedasticity, autoregressive conditional duration, autoregressive conditional intensity, and Poisson count models with time-varying mean. In addition, our approach can lead to new formulations of observation-driven models. We illustrate our framework by introducing new model specifications for time-varying copula functions and for multivariate point processes with time-varying parameters. We study the models in detail and provide simulation and empirical evidence.
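In compact notation, the GAS(1,1) updating equation for a time-varying parameter f_t takes the following standard form (a paraphrase of the framework, not a quotation from the paper):

```latex
f_{t+1} = \omega + A\, s_t + B\, f_t, \qquad
s_t = S_t \nabla_t, \qquad
\nabla_t = \frac{\partial \log p(y_t \mid f_t, \mathcal{F}_{t-1}; \theta)}{\partial f_t},
```

where the scaling S_t is typically chosen as a power of the inverse conditional Fisher information.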
Book
Econometric Theory and Methods International Edition provides a unified treatment of modern econometric theory and practical econometric methods. The geometrical approach to least squares is emphasized, as is the method of moments, which is used to motivate a wide variety of estimators and tests. Simulation methods, including the bootstrap, are introduced early and used extensively. The book deals with a large number of modern topics. In addition to bootstrap and Monte Carlo tests, these include sandwich covariance matrix estimators, artificial regressions, estimating functions and the generalized method of moments, indirect inference, and kernel estimation. Every chapter incorporates numerous exercises, some theoretical, some empirical, and many involving simulation.
Article
This paper describes a simple method of calculating a heteroskedasticity and autocorrelation consistent covariance matrix that is positive semi-definite by construction. It also establishes consistency of the estimated covariance matrix under fairly general conditions.
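A minimal sketch of the estimator for the OLS case, with Bartlett-kernel weights so the result is positive semi-definite by construction; the truncation lag is a user choice.

```python
# A minimal sketch of the Newey-West HAC covariance estimator for OLS
# coefficients; Bartlett weights keep it positive semi-definite.
import numpy as np

def newey_west_cov(X, resid, lag=4):
    """X: n x k regressor matrix; resid: length-n OLS residuals."""
    n = X.shape[0]
    Xu = X * np.asarray(resid)[:, None]  # moment contributions x_t * u_t
    S = Xu.T @ Xu / n                    # lag-0 term
    for j in range(1, lag + 1):
        w = 1.0 - j / (lag + 1.0)        # Bartlett kernel weight
        gamma = Xu[j:].T @ Xu[:-j] / n
        S += w * (gamma + gamma.T)
    bread = np.linalg.inv(X.T @ X / n)
    return bread @ S @ bread / n         # sandwich covariance of beta-hat
```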
Article
This paper introduces an ARCH model (exponential ARCH) that (1) allows correlation between returns and volatility innovations (an important feature of stock market volatility changes), (2) eliminates the need for inequality constraints on parameters, and (3) allows for a straightforward interpretation of the "persistence" of shocks to volatility. In the above respects, it is an improvement over the widely-used GARCH model. The model is applied to study volatility changes and the risk premium on the CRSP Value-Weighted Market Index from 1962 to 1987.
Article
Traditional econometric models assume a constant one-period forecast variance. To generalize this implausible assumption, a new class of stochastic processes called autoregressive conditional heteroscedastic (ARCH) processes are introduced in this paper. These are mean zero, serially uncorrelated processes with nonconstant variances conditional on the past, but constant unconditional variances. For such processes, the recent past gives information about the one-period forecast variance. A regression model is then introduced with disturbances following an ARCH process. Maximum likelihood estimators are described and a simple scoring iteration formulated. Ordinary least squares maintains its optimality properties in this set-up, but maximum likelihood is more efficient. The relative efficiency is calculated and can be infinite. To test whether the disturbances follow an ARCH process, the Lagrange multiplier procedure is employed. The test is based simply on the autocorrelation of the squared OLS residuals. This model is used to estimate the means and variances of inflation in the U.K. The ARCH effect is found to be significant and the estimated variances increase substantially during the chaotic seventies.
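A minimal sketch of the Lagrange multiplier test described here: regress the squared OLS residuals on q of their own lags and refer n·R² to a chi-squared(q) distribution.

```python
# A minimal sketch of the LM test for ARCH effects: regress squared
# residuals on q of their own lags; n * R^2 is asymptotically chi2(q).
import numpy as np
from scipy.stats import chi2

def arch_lm_test(resid, q=4):
    e2 = np.asarray(resid) ** 2
    Y = e2[q:]
    X = np.column_stack([np.ones(len(Y))] +
                        [e2[q - j:-j] for j in range(1, q + 1)])
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    resid_ss = ((Y - X @ beta) ** 2).sum()
    total_ss = ((Y - Y.mean()) ** 2).sum()
    stat = len(Y) * (1.0 - resid_ss / total_ss)
    return stat, chi2.sf(stat, df=q)  # statistic and p-value
```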
Article
A natural generalization of the ARCH (Autoregressive Conditional Heteroskedastic) process introduced in Engle (1982) to allow for past conditional variances in the current conditional variance equation is proposed. Stationarity conditions and autocorrelation structure for this new class of parametric models are derived. Maximum likelihood estimation and testing are also considered. Finally an empirical example relating to the uncertainty of the inflation rate is presented.
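The resulting GARCH(1,1) variance recursion in a minimal sketch, with illustrative coefficient values:

```python
# A minimal sketch of the GARCH(1,1) conditional-variance recursion;
# coefficient values are illustrative.
import numpy as np

def garch11_variance(y, omega=0.05, alpha=0.08, beta=0.90):
    sigma2 = np.empty(len(y))
    sigma2[0] = omega / (1.0 - alpha - beta)  # unconditional variance
    for t in range(1, len(y)):
        sigma2[t] = omega + alpha * y[t - 1] ** 2 + beta * sigma2[t - 1]
    return sigma2
```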
References

Harvey, A. C., and T. Chakravarty. 2008. "Beta-t-(E)GARCH." Cambridge Working Papers in Economics 0840, Faculty of Economics, University of Cambridge, Cambridge. Accessed 6 March 2017. http://www.econ.cam.ac.uk/research/repec/cam/pdf/cwpe0840.pdf

Davidson, R., and J. G. MacKinnon. 2003. Econometric Theory and Methods. New York: Oxford University Press.

Hamilton, J. D. 1994. Time Series Analysis. Princeton: Princeton University Press.

Nelson, D. B. 1991. "Conditional Heteroskedasticity in Asset Returns: A New Approach." Econometrica 59 (2): 347-370. doi:10.2307/2938260.