Article

Abstract

We consider alternative specifications of conditional autoregressive quantile models to estimate Value-at-Risk and Expected Shortfall. The proposed specifications include a slow-moving component in the quantile process, along with aggregate returns from heterogeneous horizons as regressors. Using data for 10 stock indices, we evaluate the performance of the models and find that the proposed features improve the capture of tail dynamics.
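The abstract's proposal can be illustrated with a small sketch: a CAViaR-style quantile recursion that mean-reverts around a slow-moving component and loads on aggregate returns over 5- and 22-day horizons. The parameterisation and the smoothing rule for the slow component are illustrative assumptions, not the authors' actual specification.

```python
import numpy as np

def quantile_path(r, params, q_prob=0.05):
    """Illustrative CAViaR-style recursion with a slow-moving component tau_t
    and 5-/22-day aggregate returns as regressors (hypothetical parameterisation)."""
    omega, b_lag, b_abs, c5, c22, phi = params
    T = len(r)
    q = np.empty(T)
    tau = np.empty(T)
    q[0] = tau[0] = np.quantile(r[:50], q_prob)   # crude initialisation (needs T >= 50)
    cs = np.concatenate(([0.0], np.cumsum(r)))    # cumulative returns for horizon sums
    for t in range(1, T):
        # slow component: heavily smoothed version of the quantile itself
        tau[t] = (1 - phi) * tau[t - 1] + phi * q[t - 1]
        agg5 = (cs[t] - cs[max(t - 5, 0)]) / 5    # average return over last 5 days
        agg22 = (cs[t] - cs[max(t - 22, 0)]) / 22 # average return over last 22 days
        q[t] = (tau[t] + omega + b_lag * (q[t - 1] - tau[t - 1])
                + b_abs * abs(r[t - 1]) + c5 * agg5 + c22 * agg22)
    return q
```

With a negative loading on lagged absolute returns (`b_abs < 0`), large moves push the lower-tail quantile down while the slow component keeps a persistent level.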


Article
Full-text available
This paper proposes value-at-risk (VaR) estimation methods that are a synthesis of conditional autoregressive value at risk (CAViaR) time series models and implied volatility. The appeal of this proposal is that it merges information from the historical time series and the different information supplied by the market's expectation of risk. Forecast-combining methods, with weights estimated using quantile regression, are considered. We also investigate plugging implied volatility into the CAViaR models—a procedure that has not been considered in the VaR area so far. Results for daily index returns indicate that the newly proposed methods are comparable or superior to individual methods, such as the standard CAViaR models and quantiles constructed from implied volatility and the empirical distribution of standardised residuals. We find that the implied volatility has more explanatory power as the focus moves further out into the left tail of the conditional distribution of S&P 500 daily returns. Copyright © 2011 John Wiley & Sons, Ltd.
Article
Full-text available
The paper proposes an additive cascade model of volatility components defined over different time periods. This volatility cascade leads to a simple AR-type model in the realized volatility with the feature of considering different volatility components realized over different time horizons and thus termed Heterogeneous Autoregressive model of Realized Volatility (HAR-RV). In spite of the simplicity of its structure and the absence of true long-memory properties, simulation results show that the HAR-RV model successfully achieves the purpose of reproducing the main empirical features of financial returns (long memory, fat tails, and self-similarity) in a very tractable and parsimonious way. Moreover, empirical results show remarkably good forecasting performance.
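The HAR-RV regression described above reduces to ordinary least squares of realized volatility on its lagged daily, weekly (5-day), and monthly (22-day) averages. A minimal sketch, with variable and window names as commonly used:

```python
import numpy as np

def har_rv_fit(rv):
    """OLS fit of RV_t on a constant, RV_{t-1}, and its 5- and 22-day lagged averages."""
    rv = np.asarray(rv, float)
    cs = np.concatenate(([0.0], np.cumsum(rv)))
    avg = lambda h, t: (cs[t] - cs[t - h]) / h      # mean of rv over the h days before t
    T = len(rv)
    X = np.array([[1.0, rv[t - 1], avg(5, t), avg(22, t)] for t in range(22, T)])
    y = rv[22:]
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta, X @ beta, y
```

On a persistent volatility series the three averages act as short-, medium-, and long-horizon components, which is what gives the cascade its pseudo-long-memory behaviour.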
Article
Full-text available
Conditional quantile estimation is an essential ingredient in modern risk management. Although generalized autoregressive conditional heteroscedasticity (GARCH) processes have proven highly successful in modeling financial data, it is generally recognized that it would be useful to consider a broader class of processes capable of representing more flexibly both asymmetry and tail behavior of conditional returns distributions. In this article we study estimation of conditional quantiles for GARCH models using quantile regression. Quantile regression estimation of GARCH models is highly nonlinear; we propose a simple and effective two-step approach of quantile regression estimation for linear GARCH time series. In the first step, we use a quantile autoregression sieve approximation for the GARCH model by combining information over different quantiles. Then second-stage estimation for the GARCH model is carried out based on the first-stage minimum distance estimation of the scale process of the time series. Asymptotic properties of the sieve approximation, the minimum distance estimators, and the final quantile regression estimators using generated regressors are studied. These results are of independent interest and have applications in other quantile regression settings. Monte Carlo and empirical application results indicate that the proposed estimation methods outperform some existing conditional quantile estimation methods.
Article
Full-text available
We study the relation at the intraday level between serial correlation and volatility of the Standard and Poor (S&P) 500 stock index futures returns. At the daily and weekly levels, serial correlation and volatility forecasts have been found to be negatively correlated (the LeBaron effect). After finding a significant attenuation of the original effect over time, we show that a similar but more pronounced effect holds when using intraday measures such as realized volatility and the variance ratio. We also test the impact of unexpected volatility, defined as the part of volatility which cannot be forecasted, on the presence of intraday serial correlation in the time series, by employing a model for realized volatility based on the heterogeneous market hypothesis. We find that intraday serial correlation is negatively correlated with volatility forecasts, whereas it is positively correlated with unexpected volatility.
Article
Full-text available
We propose and evaluate explicit tests of the null hypothesis of no difference in the accuracy of two competing forecasts. In contrast to previously developed tests, a wide variety of accuracy measures can be used (in particular, the loss function need not be quadratic and need not even be symmetric), and forecast errors can be non-Gaussian, nonzero mean, serially correlated, and contemporaneously correlated. Asymptotic and exact finite-sample tests are proposed, evaluated, and illustrated.
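The Diebold-Mariano idea above amounts to averaging a loss differential and studentising it. The sketch below uses a simplified long-run variance (autocovariances only up to lag h-1) and a normal approximation for the p-value, so it is a minimal version of the paper's framework rather than the full procedure:

```python
import numpy as np
from math import erf, sqrt

def diebold_mariano(e1, e2, loss=np.abs, h=1):
    """DM statistic for equal predictive accuracy of two forecast-error series.
    Negative values favour the first forecast. Returns (statistic, two-sided p)."""
    d = loss(np.asarray(e1)) - loss(np.asarray(e2))   # loss differential
    T = len(d)
    dbar = d.mean()
    # long-run variance: gamma_0 plus twice the autocovariances up to lag h-1
    var = np.var(d, ddof=0)
    for k in range(1, h):
        var += 2 * np.mean((d[k:] - dbar) * (d[:-k] - dbar))
    stat = dbar / sqrt(var / T)
    p = 2 * (1 - 0.5 * (1 + erf(abs(stat) / sqrt(2))))  # normal approximation
    return stat, p
```

Any loss function can be plugged in, which is exactly the flexibility the abstract emphasises.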
Article
Motivated by the Basel III regulations, recent studies have considered joint forecasts of Value-at-Risk and Expected Shortfall. A large family of scoring functions can be used to evaluate forecast performance in this context. However, little intuitive or empirical guidance is currently available, which renders the choice of scoring function awkward in practice. We therefore develop graphical checks of whether one forecast method dominates another under a relevant class of scoring functions, and propose an associated hypothesis test. We illustrate these tools with simulation examples and an empirical analysis of S&P 500 and DAX returns.
Article
Expected Shortfall (ES) is the average return on a risky asset conditional on the return being below some quantile of its distribution, namely its Value-at-Risk (VaR). The Basel III Accord, which will be implemented in the years leading up to 2019, places new attention on ES, but unlike VaR, there is little existing work on modeling ES. We use recent results from statistical decision theory to overcome the problem of “elicitability” for ES by jointly modeling ES and VaR, and propose new dynamic models for these risk measures. We provide estimation and inference methods for the proposed models, and confirm via simulation studies that the methods have good finite-sample properties. We apply these models to daily returns on four international equity indices, and find the proposed new ES–VaR models outperform forecasts based on GARCH or rolling window models.
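One scoring function of the kind used for such joint ES-VaR models is the zero-degree homogeneous member of the Fissler-Ziegel class. The sketch below only evaluates given forecasts with it; it is not the paper's dynamic model, and it assumes forecasts satisfy es <= var < 0:

```python
import numpy as np

def fz0_loss(y, var, es, alpha=0.05):
    """Average zero-degree homogeneous Fissler-Ziegel loss for joint (VaR, ES)
    forecasts; requires es <= var < 0. Lower average loss means better forecasts."""
    y, var, es = (np.asarray(a, float) for a in (y, var, es))
    hit = (y <= var).astype(float)
    return np.mean(-hit * (var - y) / (alpha * es) + var / es + np.log(-es) - 1.0)
```

Because the loss is jointly consistent for (VaR, ES), correct forecasts attain a lower average loss than distorted ones in large samples.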
Article
This paper tests whether it is possible to improve point, quantile, and density forecasts of realised volatility by conditioning on a set of predictive variables. We employ quantile autoregressive models augmented with macroeconomic and financial variables. Complete subset combinations of both linear and quantile forecasts enable us to efficiently summarise the information content in the candidate predictors. Our findings suggest that no single variable is able to provide more information for the evolution of the volatility distribution beyond that contained in its own past. The best performing variable is the return on the stock market followed by the inflation rate. Our complete subset approach achieves superior point, quantile, and density predictive performance relative to the univariate models and the autoregressive benchmark.
Article
Conditional forecasts of risk measures play an important role in internal risk management of financial institutions as well as in regulatory capital calculations. In order to assess forecasting performance of a risk measurement procedure, risk measure forecasts are compared to the realized financial losses over a period of time and a statistical test of correctness of the procedure is conducted. This process is known as backtesting. Such traditional backtests are concerned with assessing some optimality property of a set of risk measure estimates. However, they are not suited to compare different risk estimation procedures. We investigate the proposal of comparative backtests, which are better suited for method comparisons on the basis of forecasting accuracy, but necessitate an elicitable risk measure. We argue that supplementing traditional backtests with comparative backtests will enhance the existing trading book regulatory framework for banks by providing the correct incentive for accuracy of risk measure forecasts. In addition, the comparative backtesting framework could be used by banks internally as well as by researchers to guide selection of forecasting methods. The discussion focuses on three risk measures, Value at Risk, expected shortfall and expectiles, and is supported by a simulation study and data analysis.
Article
Value at Risk (VaR) forecasts can be produced from conditional autoregressive VaR models, estimated using quantile regression. Quantile modeling avoids a distributional assumption, and allows the dynamics of the quantiles to differ for each probability level. However, by focusing on a quantile, these models provide no information regarding Expected Shortfall (ES), which is the expectation of the exceedances beyond the quantile. We introduce a method for predicting ES corresponding to VaR forecasts produced by quantile regression models. It is well known that quantile regression is equivalent to maximum likelihood based on an asymmetric Laplace (AL) density. We allow the density's scale to be time-varying, and show that it can be used to estimate conditional ES. This enables a joint model of conditional VaR and ES to be estimated by maximizing an AL log-likelihood. Although this estimation framework uses an AL density, it does not rely on an assumption for the returns distribution. We also use the AL log-likelihood for forecast evaluation, and show that it is strictly consistent for the joint evaluation of VaR and ES. Empirical illustration is provided using stock index data.
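A toy version of the asymmetric Laplace (AL) idea: given quantile forecasts q_t from any quantile regression, tie the AL scale to |q_t| through a single parameter, estimate that parameter by the (closed-form) AL maximum likelihood, and read off ES from the AL tail mean. The scale link sigma_t = gamma * |q_t| is an illustrative assumption, not the paper's specification:

```python
import numpy as np

def es_from_quantiles(y, q, alpha=0.05):
    """Back out conditional ES from VaR forecasts q (negative alpha-quantiles)
    using an asymmetric Laplace density. The scale link sigma_t = gamma * |q_t|
    is an illustrative assumption; gamma then has a closed-form MLE."""
    y, q = np.asarray(y, float), np.asarray(q, float)
    u = (y - q) * (alpha - (y <= q))   # AL check-type exceedance, nonnegative
    s = np.abs(q)
    gamma = np.mean(u / s)             # closed-form AL maximum-likelihood estimate
    sigma = gamma * s
    return q - sigma / (1.0 - alpha)   # tail mean of the AL below its alpha-quantile
```

The estimated ES sits strictly below the quantile forecast, by an amount governed by the fitted scale, without assuming a distribution for the returns beyond the AL working density.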
Book
In this revised, updated, and expanded edition of his New York Times bestseller, Nobel Prize-winning economist Robert Shiller, who warned of both the tech and housing bubbles, cautions that signs of irrational exuberance among investors have only increased since the 2008-9 financial crisis. With high stock and bond prices and the rising cost of housing, the post-subprime boom may well turn out to be another illustration of Shiller's influential argument that psychologically driven volatility is an inherent characteristic of all asset markets. In other words, Irrational Exuberance is as relevant as ever. Previous editions covered the stock and housing markets, and famously predicted their crashes. This edition expands its coverage to include the bond market, so that the book now addresses all of the major investment markets. It also includes updated data throughout, as well as Shiller's 2013 Nobel Prize lecture, which places the book in broader context. In addition to diagnosing the causes of asset bubbles, Irrational Exuberance recommends urgent policy changes to lessen their likelihood and severity, and suggests ways that individuals can decrease their risk before the next bubble bursts. No one whose future depends on a retirement account, a house, or other investments can afford not to read this book.
Article
A statistical functional, such as the mean or the median, is called elicitable if there is a scoring function or loss function such that the correct forecast of the functional is the unique minimizer of the expected score. Such scoring functions are called strictly consistent for the functional. The elicitability of a functional opens the possibility to compare competing forecasts and to rank them in terms of their realized scores. In this paper, we explore the notion of elicitability for multi-dimensional functionals and give both necessary and sufficient conditions for strictly consistent scoring functions. We cover the case of functionals with elicitable components, but we also show that one-dimensional functionals that are not elicitable can be a component of a higher order elicitable functional. In the case of the variance, this is a known result. However, an important result of this paper is that spectral risk measures with a spectral measure with finite support are jointly elicitable if one adds the "correct" quantiles. A direct consequence of applied interest is that the pair (Value at Risk, Expected Shortfall) is jointly elicitable under mild conditions that are usually fulfilled in risk management applications.
Article
Recently, advances in time-varying quantile modeling have proven effective in financial Value-at-Risk forecasting. Some well-known dynamic conditional autoregressive quantile models are generalized to a fully nonlinear family. The Bayesian solution to the general quantile regression problem, via the Skewed-Laplace distribution, is adapted and designed for parameter estimation in this model family via an adaptive Markov chain Monte Carlo sampling scheme. A simulation study illustrates favorable precision in estimation, compared to the standard numerical optimization method. The proposed model family is clearly favored in an empirical study of 10 major stock markets. The results show that the proposed model is more accurate at Value-at-Risk forecasting over a two-year period, when compared to a range of existing alternative models and methods.
Article
In this note, we comment on the relevance of elicitability for backtesting risk measure estimates. In particular, we propose the use of Diebold-Mariano tests, and show how they can be implemented for Expected Shortfall (ES), based on the recent result of Fissler and Ziegel (2015) that ES is jointly elicitable with Value at Risk.
Article
Given the growing need for managing financial risk, risk prediction plays an increasing role in banking and finance. In this study we compare the out-of-sample performance of existing methods and some new models for predicting value-at-risk (VaR) in a univariate context. Using more than 30 years of the daily return data on the NASDAQ Composite Index, we find that most approaches perform inadequately, although several models are acceptable under current regulatory assessment rules for model adequacy. A hybrid method, combining a heavy-tailed generalized autoregressive conditionally heteroskedastic (GARCH) filter with an extreme value theory-based approach, performs best overall, closely followed by a variant on a filtered historical simulation, and a new model based on heteroskedastic mixture distributions. Conditional autoregressive VaR (CAViaR) models perform inadequately, though an extension to a particular CAViaR model is shown to outperform the others.
Article
Most downside risk models implicitly assume that returns are a sufficient statistic with which to forecast the daily conditional distribution of a portfolio. In this paper, we address this question empirically and analyze if the variables that proxy for market liquidity and trading conditions convey valid information to forecast the quantiles of the conditional distribution of several representative market portfolios. Using quantile regression techniques, we report evidence of predictability that can be exploited to improve Value at Risk forecasts. Including trading- and spread-related variables improves considerably the forecasting performance.
Article
A parametric approach to estimating and forecasting Value-at-Risk (VaR) and expected shortfall (ES) for a heteroscedastic financial return series is proposed. The well-known GJR–GARCH form models the volatility process, capturing the leverage effect. To capture potential skewness and heavy tails, the model assumes an asymmetric Laplace form as the conditional distribution of the series. Furthermore, dynamics in higher moments are modeled by allowing the shape parameter in this distribution to be time-varying. Estimation is via an adaptive Markov chain Monte Carlo (MCMC) sampling scheme, employing the Metropolis–Hastings (MH) algorithm with a mixture of Gaussian proposal distributions. A simulation study highlights accurate estimation and improved inference compared to a single-Gaussian-proposal MH method. The model is illustrated by applying it to four international stock market indices and two exchange rates, generating one-step-ahead forecasts of VaR and ES. Standard and non-standard tests are applied to these forecasts, and the finding is that the proposed model performs favourably compared to some popular competitors: in particular it is the only conservative model of risk over the period studied, which includes the recent global financial crisis.
Article
This paper studies the link between two popular measures of risk: the Value-at-Risk (VaR) and the Tail-VaR (TVaR). We study how the TVaR and VaR are related through their risk levels and characterize the underlying distributions under which this relationship is linear. A large portion of this paper is devoted to the related econometric analysis, such as the estimation and testing of this relationship. We apply the results to currency portfolios and observe that this linearity between the TVaR and VaR is a surprisingly common phenomenon for the portfolios considered, for both historical and conditional risk measures.
Article
Here we assess the return fitting and option valuation performance of generalized autoregressive conditional heteroscedasticity (GARCH) models. We compare component versus GARCH(1, 1) models, affine versus nonaffine GARCH models, and conditionally normal versus nonnormal GED models. We find that nonaffine models dominate affine models in terms of both fitting returns and option valuation. For the affine models, we find strong evidence in favor of the component structure for both returns and options; for the nonaffine models, the evidence is less convincing in option valuation. The evidence in favor of the nonnormal GED models is strong when fitting daily returns, but not when valuing options.
Article
Stock market index returns appear to exhibit (weak) autocorrelation, and there is evidence that volatility in equity markets is inversely related to first-order autocorrelation. A prominent theory that can address both of these stylized facts is the feedback trading theory, which relates autocorrelation to individual investor trading patterns. We provide a unified framework to simultaneously investigate market-wide feedback trading effects and the presence of volatility-correlation relationships. Our empirical application uses market portfolios from six international markets and employs conditional non-linear in-mean and in-variance models. Parametric specifications for conditional dependence beyond the mean and variance are also modelled following Hansen's (1994) (Hansen, B.E., 1994. Int. Econ. Rev. 35 (a), 705–730) autoregressive conditional density estimation. For the sample period under investigation (post 1990), our results support the presence of feedback traders and an inverse volatility-correlation relationship in three out of six markets, while only two of the series under investigation do not provide evidence of autocorrelation. Our empirical work supports volatility-induced sign switches in correlation. We document positive and significant risk premium parameters in three markets, and we find that volatility at the market level is enhanced by past movements of the daily range (daily high-low price difference) series.
Article
We provide maximum likelihood estimators of term structures of conditional probabilities of corporate default, incorporating the dynamics of firm-specific and macroeconomic covariates. For US Industrial firms, based on over 390,000 firm-months of data spanning 1980 to 2004, the term structure of conditional future default probabilities depends on a firm's distance to default (a volatility-adjusted measure of leverage), on the firm's trailing stock return, on trailing S&P 500 returns, and on US interest rates. The out-of-sample predictive performance of the model is an improvement over that of other available models.
Article
This paper presents a new model for the valuation of European options, in which the volatility of returns consists of two components. One is a long-run component and can be modeled as fully persistent. The other is short-run and has a zero mean. Our model can be viewed as an affine version of Engle and Lee [1999. A permanent and transitory component model of stock return volatility. In: Engle, R., White, H. (Eds.), Cointegration, Causality, and Forecasting: A Festschrift in Honor of Clive W.J. Granger. Oxford University Press, New York, pp. 475–497], allowing for easy valuation of European options. The model substantially outperforms a benchmark single-component volatility model that is well established in the literature, and it fits options better than a model that combines conditional heteroskedasticity and Poisson–normal jumps. The component model's superior performance is partly due to its improved ability to model the smirk and the path of spot volatility, but its most distinctive feature is its ability to model the volatility term structure. This feature enables the component model to jointly model long-maturity and short-maturity options.
Article
This paper extends the work by Ding, Granger, and Engle (1993) and further examines the long memory property for various speculative returns. The long memory property found for S&P 500 returns is also found to exist for four other different speculative returns. One significant difference is that for foreign exchange rate returns, this property is strongest at a power transformation other than the d = 1 found for stock returns. The theoretical autocorrelation functions for various GARCH(1, 1) models are also derived and found to be exponentially decreasing, which is rather different from the sample autocorrelation function for the real data. A general class of long memory models that has no memory in returns themselves but long memory in absolute returns and their power transformations is proposed. The issue of estimation and simulation for this class of model is discussed. The Monte Carlo simulation shows that the theoretical model can mimic the stylized empirical facts strikingly well.
Article
This paper studies the performance of nonparametric quantile regression as a tool to predict Value at Risk (VaR). The approach is flexible as it requires no assumptions on the form of return distributions. A monotonized double kernel local linear estimator is applied to estimate moderate (1%) conditional quantiles of index return distributions. For extreme (0.1%) quantiles, where particularly few data points are available, we propose to combine nonparametric quantile regression with extreme value theory. The out-of-sample forecasting performance of our methods turns out to be clearly superior to different specifications of the Conditionally Autoregressive VaR (CAViaR) models.
Article
Typically, point forecasting methods are compared and assessed by means of an error measure or scoring function, such as the absolute error or the squared error. The individual scores are then averaged over forecast cases, to result in a summary measure of the predictive performance, such as the mean absolute error or the (root) mean squared error. I demonstrate that this common practice can lead to grossly misguided inferences, unless the scoring function and the forecasting task are carefully matched. Effective point forecasting requires that the scoring function be specified ex ante, or that the forecaster receives a directive in the form of a statistical functional, such as the mean or a quantile of the predictive distribution. If the scoring function is specified ex ante, the forecaster can issue the optimal point forecast, namely, the Bayes rule. If the forecaster receives a directive in the form of a functional, it is critical that the scoring function be consistent for it, in the sense that the expected score is minimized when following the directive. A functional is elicitable if there exists a scoring function that is strictly consistent for it. Expectations, ratios of expectations and quantiles are elicitable. For example, a scoring function is consistent for the mean functional if and only if it is a Bregman function. It is consistent for a quantile if and only if it is generalized piecewise linear. Similar characterizations apply to ratios of expectations and to expectiles. Weighted scoring functions are consistent for functionals that adapt to the weighting in peculiar ways. Not all functionals are elicitable; for instance, conditional value-at-risk is not, despite its popularity in quantitative finance.
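The quantile case can be checked empirically: the generalized piecewise linear (pinball) score is minimized, on average, by the true quantile. A small demonstration on a known distribution:

```python
import numpy as np

def pinball(y, f, tau):
    """Generalized piecewise linear (pinball) score, consistent for the tau-quantile."""
    u = np.asarray(y) - f
    return np.mean(np.maximum(tau * u, (tau - 1) * u))

# Among candidate constant forecasts, the true 0.9-quantile of an Exp(1)
# sample should (approximately) attain the lowest average score.
rng = np.random.default_rng(7)
y = rng.exponential(size=100_000)
true_q = -np.log(0.1)                       # 0.9-quantile of Exp(1)
candidates = np.linspace(0.5, 4.0, 36)
best = candidates[int(np.argmin([pinball(y, c, 0.9) for c in candidates]))]
```

Scoring the same forecasts with, say, squared error would instead reward the mean, which is the mismatch between scoring function and directive that the article warns against.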
Article
Expectile models are derived using asymmetric least squares. A simple formula relates the expectile to the expectation of exceedances beyond the expectile, and we use this as the basis for estimating the expected shortfall. It has been proposed that the θ quantile be estimated by the expectile for which the proportion of observations below the expectile is θ. In this way, an expectile can be used to estimate value at risk. Using expectiles has the appeal of avoiding distributional assumptions. For univariate modeling, we introduce conditional autoregressive expectiles (CARE). Empirical results for the new approach are competitive with established benchmark methods.
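Asymmetric least squares has a convenient fixed-point form: the θ-expectile is the weighted mean that puts weight θ on observations above it and 1 - θ below it. A minimal unconditional sketch (the CARE models in the abstract are dynamic; this only illustrates the estimating principle):

```python
import numpy as np

def expectile(y, theta, max_iter=200, tol=1e-10):
    """theta-expectile via asymmetric least squares: the fixed point of a
    weighted mean with weight theta above the current value, 1 - theta below."""
    y = np.asarray(y, float)
    e = y.mean()
    for _ in range(max_iter):
        w = np.where(y > e, theta, 1.0 - theta)   # asymmetric squared-error weights
        e_new = np.sum(w * y) / np.sum(w)
        if abs(e_new - e) < tol:
            return e_new
        e = e_new
    return e
```

At θ = 0.5 the weights are symmetric and the expectile reduces to the mean; pushing θ toward 1 moves it into the upper tail.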
Article
A complete theory for evaluating interval forecasts has not been worked out to date. Most of the literature implicitly assumes homoskedastic errors even when this is clearly violated and proceed by merely testing for correct unconditional coverage. Consequently, the author sets out to build a consistent framework for conditional interval forecast evaluation, which is crucial when higher-order moment dynamics are present. The new methodology is demonstrated in an application to the exchange rate forecasting procedures advocated in risk management. Copyright 1998 by Economics Department of the University of Pennsylvania and the Osaka University Institute of Social and Economic Research Association.
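The conditional-coverage idea separates two requirements on interval (or VaR) violations: the right frequency (unconditional coverage) and no clustering (independence). A sketch of the two likelihood-ratio statistics in the spirit of this framework, with simplified handling of edge cases:

```python
import numpy as np

def coverage_tests(hits, alpha):
    """Kupiec unconditional-coverage and Christoffersen-style independence LR
    statistics for a 0/1 violation sequence (each asymptotically chi2 with 1 df).
    Simplified edge-case handling, for illustration only."""
    hits = np.asarray(hits, int)
    n, x = len(hits), int(hits.sum())
    pi = x / n
    # LR_uc: observed violation frequency pi against the nominal level alpha
    lr_uc = -2.0 * (x * np.log(alpha / pi) + (n - x) * np.log((1 - alpha) / (1 - pi)))
    # LR_ind: first-order Markov transition counts against an iid alternative
    h0, h1 = hits[:-1], hits[1:]
    n00 = int(np.sum((h0 == 0) & (h1 == 0))); n01 = int(np.sum((h0 == 0) & (h1 == 1)))
    n10 = int(np.sum((h0 == 1) & (h1 == 0))); n11 = int(np.sum((h0 == 1) & (h1 == 1)))
    p01, p11 = n01 / (n00 + n01), n11 / (n10 + n11)
    p = (n01 + n11) / (n - 1)
    ll_markov = (n00 * np.log(1 - p01) + n01 * np.log(p01)
                 + (n10 * np.log(1 - p11) if n10 else 0.0)
                 + (n11 * np.log(p11) if n11 else 0.0))
    ll_null = (n00 + n10) * np.log(1 - p) + (n01 + n11) * np.log(p)
    lr_ind = -2.0 * (ll_null - ll_markov)
    return lr_uc, lr_ind
```

A model can pass the frequency check while failing independence, which is exactly the case the conditional framework is designed to catch.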
Article
Value at Risk (VaR) has become the standard measure of market risk employed by financial institutions for both internal and regulatory purposes. VaR is defined as the value that a portfolio will lose with a given probability, over a certain time horizon (usually one or ten days). Despite its conceptual simplicity, its measurement is a very challenging statistical problem and none of the methodologies developed so far give satisfactory solutions. Interpreting the VaR as the quantile of future portfolio values conditional on current information, we propose a new approach to quantile estimation which does not require any of the extreme assumptions invoked by existing methodologies (such as normality or i.i.d. returns). The Conditional Autoregressive Value-at-Risk or CAViaR model moves the focus of attention from the distribution of returns directly to the behavior of the quantile. We specify the evolution of the quantile over time using a special type of autoregressive process and use the regression quantile framework introduced by Koenker and Bassett to determine the unknown parameters. Since the objective function is not differentiable, we use a differential evolutionary genetic algorithm for the numerical optimization. Utilizing the criterion that each period the probability of exceeding the VaR must be independent of all the past information, we introduce a new test of model adequacy, the Dynamic Quantile test. Applications to simulated and real data provide empirical support to this methodology and illustrate the ability of these algorithms to adapt to new risk environments.
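The symmetric absolute value member of the CAViaR family makes the quantile itself autoregressive. A minimal sketch of the recursion only; the coefficients in the test are illustrative, not estimates, and in the paper the parameters are chosen by minimizing the quantile-regression objective:

```python
import numpy as np

def caviar_sav(r, beta, q0):
    """Symmetric absolute value CAViaR: q_t = b0 + b1*q_{t-1} + b2*|r_{t-1}|,
    written here for a lower-tail (negative) quantile, so b2 < 0 widens the
    tail after large moves in either direction."""
    b0, b1, b2 = beta
    q = np.empty(len(r))
    q[0] = q0
    for t in range(1, len(r)):
        q[t] = b0 + b1 * q[t - 1] + b2 * abs(r[t - 1])
    return q
```

The hit sequence 1{r_t < q_t} from such a path is what the Dynamic Quantile test examines for serial dependence.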
Article
The authors find support for a negative relation between conditional expected monthly return and conditional variance of monthly return using a GARCH-M model modified by allowing (1) seasonal patterns in volatility, (2) positive and negative innovations to returns having different impacts on conditional volatility, and (3) nominal interest rates to predict conditional variance. Using the modified GARCH-M model, they also show that monthly conditional volatility may not be as persistent as was thought. Positive unanticipated returns appear to result in a downward revision of the conditional volatility, whereas negative unanticipated returns result in an upward revision of conditional volatility. Copyright 1993 by American Finance Association.
Article
A semiparametric ARCH model is introduced with conditional first and second moments given by ARMA and ARCH formulations, and a conditional density that is approximated by a nonparametric density estimator. For several densities, the relative efficiency of the quasi-maximum likelihood estimator is compared with maximum likelihood under correct specification. These potential efficiency gains for a fully adaptive procedure are compared in a Monte Carlo experiment with the observed gains from using the semiparametric procedure, and it is found that the estimator captures a substantial proportion of the potential. The estimator is applied to daily stock returns and to the British pound/dollar exchange rate.
Article
In this paper we analyze a credit economy à la Kiyotaki and Moore [1997. Credit cycles. Journal of Political Economy 105, 211-248] enriched with learning dynamics, where both borrowers and lenders need to form expectations about the future price of the collateral. We find that under homogeneous learning, the MSV REE for this economy is E-stable and can be learned by agents, but when heterogeneous learning is allowed and uncertainty in terms of a stochastic productivity is added, expectations of lenders and borrowers can diverge and lead to bankruptcy (default) on the part of the borrowers.
Stock returns and volatility: Pricing the short-run and long-run components of market risk
  • Adrian
Forecasting the return distribution using high-frequency volatility measures
  • Hua
Evaluating value-at-risk models with desk-level data
  • Berkowitz
Pricing credit default swaps with observable covariates
  • Doshi
A macroprudential approach to financial regulation
  • Hanson