Article

Abstract

We consider jointly modelling a finite collection of quantiles over time under a Bayesian nonparametric framework. Formal Bayesian inference on quantiles is challenging since we need access to both the quantile function and the likelihood (which is given by the derivative of the inverse quantile function). We propose a flexible Bayesian transformation model, which allows the likelihood and the quantile function to be directly calculated, and define a novel stationary process which can be “centred” over a parametric model. Markov chain Monte Carlo (MCMC) methods are employed to illustrate the usefulness of the model in fitting and forecasting on a sample of stock, index, and commodity returns.
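For readers outside this literature, the identity that makes the stated obstacle concrete is standard (not specific to this paper): if $Q(\tau) = F^{-1}(\tau)$ is the quantile function of a continuous distribution with density $f$, differentiating $F(Q(\tau)) = \tau$ gives

$$ f(Q(\tau)) = \frac{1}{Q'(\tau)}, $$

so evaluating the likelihood at an observation $y$ requires first solving $Q(\tau_y) = y$ and then computing $1/Q'(\tau_y)$. A transformation model in which both $Q$ and $Q'$ are available in closed form therefore makes formal Bayesian inference tractable.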


... However, in many applications, a mean prediction is not good enough. Full predictive distributions, also known as probabilistic forecasts, are required in applications where an assessment of the associated uncertainty is essential, for example in models of future disease progression (Küffner et al. 2015), electricity demand (Cabrera and Schulz 2017), stock asset returns (Mitrodima and Griffin 2017), and counterfactual distributions (Chernozhukov et al. 2013). In these applications, the prediction "takes the form of a predictive probability distribution over future quantities or events of interest" (Gneiting and Katzfuss 2014). ...
... In light of this approach, it seems computationally attractive to model the distribution function in the distribution regression model $F_{Y \mid X=x}(y \mid X = x) = F_Z\!\left(h_Y(y) - x^\top \beta(y)\right)$ rather than the quantile function in a quantile regression model $Q_{Y \mid X=x}(\tau \mid X = x) = \alpha(\tau) + x^\top \delta(\tau)$ of the same complexity ($\tau \in [0, 1]$; $\alpha$ and $\delta$ being the probability-varying intercept and coefficient functions, respectively). Bayesian inference for the corresponding model parameters in conditional transformation models is, however, still under development (Mitrodima and Griffin 2017). ...
Article
Full-text available
The broad class of conditional transformation models includes interpretable and simple as well as potentially very complex models for conditional distributions. This makes conditional transformation models attractive for predictive distribution modelling, especially because models featuring interpretable parameters and black-box machines can be understood as extremes in a whole cascade of models. So far, algorithms and the corresponding theory have been developed only for special forms of conditional transformation models: maximum likelihood inference is available for rather simple models, there exists a tailored boosting algorithm for the estimation of additive conditional transformation models, and a special form of random forests targets the estimation of interaction models. Here, I propose boosting algorithms capable of estimating conditional transformation models of arbitrary complexity, starting from simple shift transformation models featuring linear predictors to essentially unstructured conditional transformation models allowing complex nonlinear interaction functions. A generic form of the likelihood is maximized. Thus, the novel boosting algorithms for conditional transformation models are applicable to all types of univariate response variables, including randomly censored or truncated observations.
Article
Full-text available
We introduce a semi-parametric Bayesian framework for a simultaneous analysis of linear quantile regression models. A simultaneous analysis is essential to attain the true potential of the quantile regression framework, but is computationally challenging due to the associated monotonicity constraint on the quantile curves. For a univariate covariate, we present a simpler equivalent characterization of the monotonicity constraint through an interpolation of two monotone curves. The resulting formulation leads to a tractable likelihood function and is embedded within a Bayesian framework where the two monotone curves are modeled via logistic transformations of a smooth Gaussian process. A multivariate extension is suggested by combining the full support univariate model with a linear projection of the predictors. The resulting single-index model remains easy to fit and provides substantial and measurable improvement over the first order linear heteroscedastic model. Two illustrative applications of the proposed method are provided.
Article
Full-text available
Conditional quantile estimation is an essential ingredient in modern risk management. Although generalized autoregressive conditional heteroscedasticity (GARCH) processes have proven highly successful in modeling financial data, it is generally recognized that it would be useful to consider a broader class of processes capable of representing more flexibly both asymmetry and tail behavior of conditional returns distributions. In this article we study estimation of conditional quantiles for GARCH models using quantile regression. Quantile regression estimation of GARCH models is highly nonlinear; we propose a simple and effective two-step approach of quantile regression estimation for linear GARCH time series. In the first step, we use a quantile autoregression sieve approximation for the GARCH model by combining information over different quantiles. Then second-stage estimation for the GARCH model is carried out based on the first-stage minimum distance estimation of the scale process of the time series. Asymptotic properties of the sieve approximation, the minimum distance estimators, and the final quantile regression estimators using generated regressors are studied. These results are of independent interest and have applications in other quantile regression settings. Monte Carlo and empirical application results indicate that the proposed estimation methods outperform some existing conditional quantile estimation methods.
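A sketch of why information can be combined across quantiles in this setting (standard linear-GARCH algebra, my summary rather than a quotation): if $y_t = \sigma_t \epsilon_t$ with i.i.d. innovations $\epsilon_t$ and a linear scale process $\sigma_t = \beta_0 + \beta_1 |y_{t-1}| + \beta_2 \sigma_{t-1}$, then the conditional $\tau$-quantile is

$$ Q_{y_t}(\tau \mid \mathcal{F}_{t-1}) = \sigma_t\, F_\epsilon^{-1}(\tau), $$

so the quantile curves at all levels $\tau$ are proportional to the same scale process. A first-stage fit over several quantile levels can therefore be pooled in a second-stage minimum distance step to estimate $\sigma_t$.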
Article
Full-text available
Parallel tempering is a generic Markov chain Monte Carlo sampling method which allows good mixing with multimodal target distributions, where conventional Metropolis-Hastings algorithms often fail. The mixing properties of the sampler depend strongly on the choice of tuning parameters, such as the temperature schedule and the proposal distribution used for local exploration. We propose an adaptive algorithm which tunes both the temperature schedule and the parameters of the random-walk Metropolis kernel automatically. We prove the convergence of the adaptation and a strong law of large numbers for the algorithm. We illustrate the performance of our method with examples. Our empirical findings indicate that the algorithm can cope well with different kind of scenarios without prior tuning.
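For orientation, a recap of the untuned sampler in standard notation (my summary, not quoted from the paper): chains $x^{(1)}, \dots, x^{(K)}$ target the tempered densities $\pi^{\beta_k}$ for inverse temperatures $1 = \beta_1 > \beta_2 > \cdots > \beta_K > 0$; each chain makes local random-walk Metropolis moves, and a proposed swap of the states of chains $i$ and $j$ is accepted with probability

$$ \min\left\{1,\ \frac{\pi(x^{(j)})^{\beta_i}\,\pi(x^{(i)})^{\beta_j}}{\pi(x^{(i)})^{\beta_i}\,\pi(x^{(j)})^{\beta_j}}\right\}. $$

The adaptive algorithm of the paper tunes the $\beta_k$ and the random-walk proposal scales on the fly.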
Article
Full-text available
In this paper we consider the multivariate equation $X_{n+1} = A_{n+1} X_n + B_{n+1}$ with i.i.d. coefficients which have only a logarithmic moment. We give a necessary and sufficient condition for existence of a strictly stationary solution independent of the future. As an application we characterize the multivariate ARMA equations with general noise which have such a solution.
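The flavour of the criterion, stated here from memory of the Bougerol-Picard line of results rather than quoted (side conditions omitted): a strictly stationary solution exists roughly when the top Lyapunov exponent

$$ \gamma = \inf_{n \ge 1} \frac{1}{n}\, \mathbb{E} \log \lVert A_1 A_2 \cdots A_n \rVert $$

is negative, in which case the series $X_n = B_n + \sum_{k \ge 1} A_n \cdots A_{n-k+1} B_{n-k}$ converges almost surely and furnishes the solution.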
Article
Full-text available
Pólya trees fix partitions and use random probabilities in order to construct random probability measures. With quantile pyramids we instead fix probabilities and use random partitions. For nonparametric Bayesian inference we use a prior which supports piecewise linear quantile functions, based on the need to work with a finite set of partitions, yet we show that the limiting version of the prior exists. We also discuss and investigate an alternative model based on the so-called substitute likelihood. Both approaches factorize in a convenient way leading to relatively straightforward analysis via MCMC, since analytic summaries of posterior distributions are too complicated. We give conditions securing the existence of an absolutely continuous quantile process, and discuss consistency and approximate normality for the sequence of posterior distributions. Illustrations are included.
Article
We propose the construction of copulas through the inversion of nonlinear state space models. These copulas allow for new time series models that have the same serial dependence structure as a state space model, but with an arbitrary marginal distribution, and flexible density forecasts. We examine the time series properties of the copulas, outline serial dependence measures, and estimate the models using likelihood-based methods. Copulas constructed from three example state space models are considered: a stochastic volatility model with an unobserved component, a Markov switching autoregression, and a Gaussian linear unobserved component model. We show that all three inversion copulas with flexible margins improve the fit and density forecasts of quarterly U.S. broad inflation and electricity inflation.
Article
We characterize the Lyapunov exponent and ergodicity of nonlinear stochastic recursion models, including nonlinear AR-GARCH models, in terms of an easily defined, uniformly ergodic process. Properties of this latter process, known as the collapsed process, also determine the existence of moments for the stochastic recursion when it is stationary. As a result, both the stability of a given model and the existence of its moments may be evaluated with relative ease. The method of proof involves piggybacking a Foster-Lyapunov drift condition on certain characteristic behavior of the collapsed process.
Article
Recently, advances in time-varying quantile modeling have proven effective in financial Value-at-Risk forecasting. Some well-known dynamic conditional autoregressive quantile models are generalized to a fully nonlinear family. The Bayesian solution to the general quantile regression problem, via the Skewed-Laplace distribution, is adapted and designed for parameter estimation in this model family via an adaptive Markov chain Monte Carlo sampling scheme. A simulation study illustrates favorable precision in estimation, compared to the standard numerical optimization method. The proposed model family is clearly favored in an empirical study of 10 major stock markets. The results show that the proposed model is more accurate at Value-at-Risk forecasting over a two-year period when compared to a range of existing alternative models and methods.
Book
Recent Developments in GARCH Modeling.- An Introduction to Univariate GARCH Models.- Stationarity, Mixing, Distributional Properties and Moments of GARCH(p, q) Processes.- ARCH(∞) Models and Long Memory Properties.- A Tour in the Asymptotic Theory of GARCH Estimation.- Practical Issues in the Analysis of Univariate GARCH Models.- Semiparametric and Nonparametric ARCH Modeling.- Varying Coefficient GARCH Models.- Extreme Value Theory for GARCH Processes.- Multivariate GARCH Models.- Recent Developments in Stochastic Volatility Modeling.- Stochastic Volatility: Origins and Overview.- Probabilistic Properties of Stochastic Volatility Models.- Moment-Based Estimation of Stochastic Volatility Models.- Parameter Estimation and Practical Aspects of Modeling Stochastic Volatility.- Stochastic Volatility Models with Long Memory.- Extremes of Stochastic Volatility Models.- Multivariate Stochastic Volatility.- Topics in Continuous Time Processes.- An Overview of Asset-Price Models.- Ornstein-Uhlenbeck Processes and Extensions.- Jump-Type Lévy Processes.- Lévy-Driven Continuous-Time ARMA Processes.- Continuous Time Approximations to GARCH and Stochastic Volatility Models.- Maximum Likelihood and Gaussian Estimation of Continuous Time Models in Finance.- Parametric Inference for Discretely Sampled Stochastic Differential Equations.- Realized Volatility.- Estimating Volatility in the Presence of Market Microstructure Noise: A Review of the Theory and Practical Considerations.- Option Pricing.- An Overview of Interest Rate Theory.- Extremes of Continuous-Time Processes.- Topics in Cointegration and Unit Roots.- Cointegration: Overview and Development.- Time Series with Roots on or Near the Unit Circle.- Fractional Cointegration.- Special Topics - Risk.- Different Kinds of Risk.- Value-at-Risk Models.- Copula-Based Models for Financial Time Series.- Credit Risk Modeling.- Special Topics - Time Series Methods.- Evaluating Volatility and Correlation Forecasts.- Structural Breaks in Financial Time Series.- An Introduction to Regime Switching Time Series Models.- Model Selection.- Nonparametric Modeling in Financial Time Series.- Modelling Financial High Frequency Data Using Point Processes.- Special Topics - Simulation Based Methods.- Resampling and Subsampling for Financial Time Series.- Markov Chain Monte Carlo.- Particle Filtering.
Article
Since the pioneering work by H. Tong [Threshold models in non-linear time series analysis. New York etc.: Springer-Verlag (1983; Zbl 0527.62083)], threshold time series modelling and its applications have become increasingly important for research in economics and finance. A number of books and a vast number of research papers published in this area have been motivated by Tong’s threshold models. The goal of this paper is to give a thorough review on the development of the family of threshold time series model in finance and to provide a streamlined approach to financial time series analysis. It covers threshold modeling, nonlinearity tests, statistical inference, diagnostic checking, and model selection, as well as applications of the threshold autoregressive model and its generalizations in finance.
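For concreteness, the canonical object in this literature (standard SETAR form, not specific to the review): a self-exciting threshold autoregression with regimes $j = 1, \dots, J$ takes

$$ y_t = \phi_0^{(j)} + \sum_{i=1}^{p} \phi_i^{(j)} y_{t-i} + \sigma^{(j)} \epsilon_t \quad \text{when } y_{t-d} \in (r_{j-1}, r_j], $$

so the dynamics switch according to which interval a lagged value of the series itself falls into.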
Article
This paper proposes an infinite hidden Markov model to integrate the regime switching and the structural break dynamics in a single, coherent Bayesian framework. Two parallel hierarchical structures, one governing the transition probabilities and another governing the parameters of the conditional data density, keep the model parsimonious and improve forecasts. This flexible approach allows for regime persistence and estimates the number of states automatically. A global identification methodology for structural changes versus regime switching is presented. An application to U.S. real interest rates compares the new model to existing parametric alternatives.
Article
Empirical distributions in finance and economics might show heavy tails, volatility clustering, varying mean returns and multimodality as part of their features. However, most statistical models available in the literature assume some kind of parametric form (clearly neglecting important characteristics of the data) or focus on modeling extreme events (therefore, providing no information about the rest of the distribution). In this paper we develop a Bayesian nonparametric prior for a collection of distributions evolving in discrete time. The prior is constructed by defining the distribution at any time point as a Dirichlet process mixture of Gaussian distributions, and inducing dependence through the atoms of their stick-breaking decomposition. A general construction, which allows for trends, periodicities and regressors is described. The resulting model is applied to the estimation of the time-varying travel expense distribution of employees from a major development bank comparable to the IDB, IMF and World Bank.
Article
This paper proposes a Bayesian approach to quantile autoregressive (QAR) time series model estimation and forecasting. We establish that the joint posterior distribution of the model parameters and future values is well defined. The associated Markov chain Monte Carlo algorithm for parameter estimation and forecasting converges to the posterior distribution quickly. We also present a combining forecasts technique to produce more accurate out‐of‐sample forecasts by using a weighted sequence of fitted QAR models. A moving window method to check the quality of the estimated conditional quantiles is developed. We verify our methodology using simulation studies and then apply it to currency exchange rate data. The results obtained show that an unequally weighted combining method performs better than other forecasting methodology.
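The model class being fitted, in its usual notation (standard QAR(p) specification, stated here for reference): the conditional $\tau$-quantile of $y_t$ is modelled as

$$ Q_{y_t}(\tau \mid y_{t-1}, \dots, y_{t-p}) = \theta_0(\tau) + \theta_1(\tau)\, y_{t-1} + \cdots + \theta_p(\tau)\, y_{t-p}, $$

with a separate coefficient function $\theta_i(\tau)$ at each quantile level.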
Article
The importance of choosing a good parametrization for efficient MCMC implementation has been repeatedly emphasized in the literature. For a broad class of multi-level models, there exist two well-known competing parameterizations: the centered parametrization (CP) and the non-centered parametrization (NCP). Much literature has been devoted to the questions of when to use which and how to compromise between them via partial CP/NCP, a task that often requires data-dependent tuning. In this paper we provide both theoretical justifications and empirical demonstrations to show that there exists a surprisingly general and powerful strategy for boosting MCMC efficiency via simply interweaving (but not alternating) the two parameterizations. This strategy is so powerful that failure of both the CP chain and the NCP chain to converge geometrically does not prevent the interweaving algorithm from doing so. The interweaving strategy achieves this seemingly magical property by taking advantage of the discordance of the two parameterizations, namely, the sufficiency of CP and the ancillarity of NCP, to dramatically reduce the Markovian dependence, especially when the original CP and NCP form a "beauty and beast" pair (i.e., when one chain mixes far more rapidly than the other). The ancillarity-sufficiency reformulation of the CP-NCP dichotomy allows us to borrow insight from the well-known Basu's theorem on the independence of (complete) sufficient and ancillary statistics, albeit a Bayesian version of Basu's theorem is currently lacking. To demonstrate the competitiveness and versatility of this Ancillarity-Sufficiency Interweaving Strategy (ASIS) for real-world problems, we apply it to fit 1) a Cox process model for detecting changes in source intensity of photon counts observed by the Chandra X-ray telescope from a (candidate) neutron/quark star, which was the problem that motivated the ASIS strategy as it defeated other methods we initially tried; 2) a probit model for predicting latent membranous lupus nephritis; and 3) an interval-censored normal model for studying the lifetime of fluorescent lights. A bevy of open questions are presented, from the mysterious but exceedingly suggestive connections between ASIS and fiducial/structural inferences to nested ASIS for further boosting MCMC efficiency.
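A minimal sketch of the interweaving idea on a toy normal hierarchy (my own illustration, not one of the authors' applications): observations $y_i = \theta_i + e_i$ with $e_i \sim N(0,1)$, group means $\theta_i \sim N(\mu, V)$ with $V$ known, and a flat prior on $\mu$. The CP step updates $\mu$ given $\theta$; the NCP step reparameterizes $\eta_i = \theta_i - \mu$ (ancillary for $\mu$) and updates $\mu$ again given $\eta$ and $y$; ASIS performs both within each iteration.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy hierarchy: y_i = theta_i + e_i, e_i ~ N(0, 1),
    # theta_i ~ N(mu, V) with V known, flat prior on mu.
    n, V = 50, 0.1
    theta_true = 2.0 + rng.normal(0.0, np.sqrt(V), n)
    y = theta_true + rng.normal(0.0, 1.0, n)

    def asis_gibbs(y, V, iters=5000):
        n = len(y)
        mu = y.mean()
        draws = np.empty(iters)
        for t in range(iters):
            # CP step: theta_i | y_i, mu ~ N((mu + V*y_i)/(1+V), V/(1+V)).
            theta = ((mu + V * y) / (1.0 + V)
                     + rng.normal(0.0, np.sqrt(V / (1.0 + V)), n))
            # CP update of mu: mu | theta ~ N(mean(theta), V/n).
            mu = rng.normal(theta.mean(), np.sqrt(V / n))
            # Interweave: switch to the NCP, where eta is ancillary for mu.
            eta = theta - mu
            # NCP update of mu: y_i - eta_i = mu + e_i,
            # so mu | eta, y ~ N(mean(y - eta), 1/n).
            mu = rng.normal((y - eta).mean(), np.sqrt(1.0 / n))
            # Map back to the centered parameterization.
            theta = eta + mu
            draws[t] = mu
        return draws

    print(asis_gibbs(y, V).mean())

The two consecutive updates of $\mu$, one in each parameterization, are what break the Markovian dependence that would hobble either pure CP or pure NCP on its own.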
Article
Suppose data consist of a random sample from a distribution function F_Y, which is unknown, and that interest focuses on inferences on θ, a vector of quantiles of F_Y. When the likelihood function is not fully specified, a posterior density cannot be calculated and Bayesian inference is difficult. This article considers an approach which relies on a substitution likelihood characterized by a vector of quantiles. Properties of the substitution likelihood are investigated, strategies for prior elicitation are presented, and a general framework is proposed for quantile regression modeling. Posterior computation proceeds via a Metropolis algorithm that utilizes a normal approximation to the posterior. Results from a simulation study are presented, and the methods are illustrated through application to data from a genotoxicity experiment.
Article
GARCH volatility models with fixed parameters are too restrictive for long time series due to breaks in the volatility process. Flexible alternatives are Markov-switching GARCH and change-point GARCH models. They require estimation by MCMC methods due to the path dependence problem. An unsolved issue is the computation of their marginal likelihood, which is essential for determining the number of regimes or change-points. We solve the problem by using particle MCMC, a technique proposed by Andrieu, Doucet, and Holenstein (2010). We examine the performance of this new method on simulated data, and we illustrate its use on several return series.
Article
This paper presents a method for Bayesian nonparametric analysis of the return distribution in a stochastic volatility model. The distribution of the logarithm of the squared return is flexibly modelled using an infinite mixture of Normal distributions. This allows efficient Markov chain Monte Carlo methods to be developed. Links between the return distribution and the distribution of the logarithm of the squared returns are discussed. The method is applied to simulated data, two asset return series and two stock index return series. We find that estimates of volatility using the model can differ dramatically from those using a Normal return distribution if there is evidence of a heavy-tailed return distribution.
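The linearization underlying this approach (the standard device for stochastic volatility models, stated for context): with returns $y_t = \exp(h_t/2)\,\epsilon_t$, squaring and taking logs gives

$$ \log y_t^2 = h_t + \log \epsilon_t^2, $$

which is linear in the log-volatility $h_t$; the paper's contribution is to model the distribution of $\log \epsilon_t^2$ flexibly with an infinite mixture of normals rather than a fixed finite-mixture approximation.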
Article
We use a quantile-based measure of conditional skewness (or asymmetry) that is robust to outliers and therefore particularly suited for recalcitrant series such as emerging market returns. Our study is on the following portfolio returns: developed markets, emerging markets, the world, and separately 73 countries. We find that the conditional asymmetry of returns varies significantly over time, even after accounting for conditional volatility and unconditional skewness effects. Interestingly, the correlation of conditional asymmetry between developed and emerging markets is surprisingly low, despite the fact that their return co-movement has been historically high and increasing. We also document a strong relationship between conditional asymmetry and macroeconomic fundamentals. Moreover, the low correlation across developed and emerging markets can largely be explained by their opposite response to those fundamentals. The economic significance of conditional skewness is demonstrated in an international portfolio setting. Tilting the portfolio weights away from a value-weighted allocation and toward emerging markets produces significant portfolio gains.
Article
We propose a class of observation-driven time series models referred to as generalized autoregressive score (GAS) models. The mechanism to update the parameters over time is the scaled score of the likelihood function. This new approach provides a unified and consistent framework for introducing time-varying parameters in a wide class of nonlinear models. The GAS model encompasses other well-known models such as the generalized autoregressive conditional heteroskedasticity, autoregressive conditional duration, autoregressive conditional intensity, and Poisson count models with time-varying mean. In addition, our approach can lead to new formulations of observation-driven models. We illustrate our framework by introducing new model specifications for time-varying copula functions and for multivariate point processes with time-varying parameters. We study the models in detail and provide simulation and empirical evidence.
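The updating equation at the core of the class, in standard GAS(1,1) notation (stated for reference): with observation density $p(y_t \mid f_t)$ and time-varying parameter $f_t$,

$$ f_{t+1} = \omega + A\, s_t + B\, f_t, \qquad s_t = S_t \cdot \frac{\partial \log p(y_t \mid f_t)}{\partial f_t}, $$

where $S_t$ is a scaling matrix, typically an inverse power of the Fisher information.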
Article
This paper considers the problem of defining a time-dependent nonparametric prior for use in Bayesian nonparametric modelling of time series. A recursive construction allows the definition of priors whose marginals have a general stick-breaking form. The processes with Poisson-Dirichlet and Dirichlet process marginals are investigated in some detail. We develop a general conditional Markov Chain Monte Carlo (MCMC) method for inference in the wide subclass of these models where the parameters of the marginal stick-breaking process are nondecreasing sequences. We derive a generalised Pólya urn scheme type representation of the Dirichlet process construction, which allows us to develop a marginal MCMC method for this case. We apply the proposed methods to financial data to develop a semi-parametric stochastic volatility model with a time-varying nonparametric returns distribution. Finally, we present two examples concerning the analysis of regional GDP and its growth.
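For reference, the stick-breaking form that the marginals are required to have (standard notation): a random distribution $G = \sum_{k=1}^{\infty} w_k\, \delta_{\phi_k}$ with weights

$$ w_k = v_k \prod_{j<k} (1 - v_j), \qquad v_k \sim \mathrm{Beta}(a_k, b_k), $$

which reduces to the Dirichlet process for $a_k = 1$, $b_k = M$, and to the Poisson-Dirichlet (Pitman-Yor) process for $a_k = 1 - a$, $b_k = M + ka$.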
Article
The paper introduces the idea of Bayesian quantile regression employing a likelihood function that is based on the asymmetric Laplace distribution. It is shown that irrespective of the original distribution of the data, the use of the asymmetric Laplace distribution is a very natural and effective way for modelling Bayesian quantile regression. The paper also demonstrates that improper uniform priors for the unknown model parameters yield a proper joint posterior. The approach is illustrated via a simulated and two real data sets.
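The likelihood in question (the standard asymmetric Laplace density for quantile level $\tau$):

$$ f(y \mid \mu, \sigma, \tau) = \frac{\tau(1-\tau)}{\sigma} \exp\left\{ -\rho_\tau\!\left(\frac{y-\mu}{\sigma}\right) \right\}, \qquad \rho_\tau(u) = u\left(\tau - \mathbb{I}(u < 0)\right), $$

so maximizing this likelihood in $\mu$ is equivalent to minimizing the usual check loss of quantile regression, which is why the approach targets the $\tau$-quantile irrespective of the true data distribution.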
Article
The ultimate goal of regression analysis is to obtain information about the conditional distribution of a response given a set of explanatory variables. This goal is, however, seldom achieved because most established regression models only estimate the conditional mean as a function of the explanatory variables and assume that higher moments are not affected by the regressors. The underlying reason for such a restriction is the assumption of additivity of signal and noise. We propose to relax this common assumption in the framework of transformation models. The novel class of semiparametric regression models proposed herein allows transformation functions to depend on explanatory variables. These transformation functions are estimated by regularised optimisation of scoring rules for probabilistic forecasts, e.g. the continuous ranked probability score. The corresponding estimated conditional distribution functions are consistent. Conditional transformation models are potentially useful for describing possible heteroscedasticity, comparing spatially varying distributions, identifying extreme events, deriving prediction intervals and selecting variables beyond mean regression effects. An empirical investigation based on a heteroscedastic varying coefficient simulation model demonstrates that semiparametric estimation of conditional distribution functions can be more beneficial than kernel-based non-parametric approaches or parametric generalised additive models for location, scale and shape.
Article
Statistical volatility models rely on the assumption that the shape of the conditional distribution is fixed over time and that it is only the volatility that varies. The recently proposed conditional autoregressive value at risk (CAViaR) models require no such assumption, and allow quantiles to be modeled directly in an autoregressive framework. Although useful for risk management, CAViaR models do not provide volatility forecasts. Such forecasts are needed for several other important applications, such as option pricing and portfolio management. It has been found that, for a variety of probability distributions, there is a surprising constancy of the ratio of the standard deviation to the interval between symmetric quantiles in the tails of the distribution, such as the 0.025 and 0.975 quantiles. This result has been used in decision and risk analysis to provide an approximation of the standard deviation in terms of quantile estimates provided by experts. Drawing on the same result, we construct financial volatility forecasts as simple functions of the interval between CAViaR forecasts of symmetric quantiles. Forecast comparison, using five stock indices and 20 individual stocks, shows that the method is able to outperform generalized autoregressive conditional heteroskedasticity (GARCH) models and moving average methods.
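A minimal sketch of the construction for the 0.025/0.975 pair (my own illustration; the scaling constant 3.92 is the standard normal interquantile range, $\Phi^{-1}(0.975) - \Phi^{-1}(0.025) \approx 3.92$, and the paper's point is that this ratio is roughly constant across many distributions):

    import numpy as np

    def vol_from_quantile_forecasts(q_low, q_high, scale=3.92):
        """Volatility forecast from forecasts of symmetric tail quantiles.

        q_low, q_high: forecasts of the 0.025 and 0.975 conditional
        quantiles (e.g. from a CAViaR model).
        scale: interquantile range of a standard normal between those
        levels; approximately distribution-free per the constancy result.
        """
        return (np.asarray(q_high) - np.asarray(q_low)) / scale

    # Example with made-up quantile forecasts for three days:
    q_low = [-2.1, -2.4, -1.9]
    q_high = [2.0, 2.3, 1.8]
    print(vol_from_quantile_forecasts(q_low, q_high))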
Article
Engle and Manganelli (2004) propose CAViaR, a class of models suitable for estimating conditional quantiles in dynamic settings. Engle and Manganelli apply their approach to the estimation of Value at Risk, but this is only one of many possible applications. Here we extend CAViaR models to permit joint modeling of multiple quantiles, Multi-Quantile (MQ) CAViaR. We apply our new methods to estimate measures of conditional skewness and kurtosis defined in terms of conditional quantiles, analogous to the unconditional quantile-based measures of skewness and kurtosis studied by Kim and White (2004). We investigate the performance of our methods by simulation, and we apply MQ-CAViaR to study conditional skewness and kurtosis of S&P 500 daily returns. JEL Classification: C13, C32.
Article
R. F. Engle's autoregressive conditional heteroskedastic model is extended to permit parametric specifications for conditional dependence beyond the mean and variance. The suggestion is to model the conditional density with a small number of parameters, and then model these parameters as functions of the conditioning information. This method is applied to two data sets. The first application is to the monthly excess holding yield on U.S. Treasury securities, where the conditional density used is a Student's t distribution. The second application is to the U.S. Dollar/Swiss Franc exchange rate, using a new skewed Student t conditional distribution.
Article
Value at Risk (VaR) has become the standard measure of market risk employed by financial institutions for both internal and regulatory purposes. VaR is defined as the value that a portfolio will lose with a given probability, over a certain time horizon (usually one or ten days). Despite its conceptual simplicity, its measurement is a very challenging statistical problem and none of the methodologies developed so far give satisfactory solutions. Interpreting the VaR as the quantile of future portfolio values conditional on current information, we propose a new approach to quantile estimation which does not require any of the extreme assumptions invoked by existing methodologies (such as normality or i.i.d. returns). The Conditional Autoregressive Value-at-Risk or CAViaR model moves the focus of attention from the distribution of returns directly to the behavior of the quantile. We specify the evolution of the quantile over time using a special type of autoregressive process and use the regression quantile framework introduced by Koenker and Bassett to determine the unknown parameters. Since the objective function is not differentiable, we use a differential evolutionary genetic algorithm for the numerical optimization. Utilizing the criterion that each period the probability of exceeding the VaR must be independent of all the past information, we introduce a new test of model adequacy, the Dynamic Quantile test. Applications to simulated and real data provide empirical support to this methodology and illustrate the ability of these algorithms to adapt to new risk environments.
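Two of the concrete specifications from this line of work, recalled from the CAViaR literature rather than quoted (sign conventions for lower-tail quantiles vary): the symmetric absolute value model

$$ q_t(\tau) = \beta_0 + \beta_1\, q_{t-1}(\tau) + \beta_2\, |y_{t-1}|, $$

and the indirect GARCH(1,1) model

$$ q_t(\tau) = \left( \beta_0 + \beta_1\, q_{t-1}^2(\tau) + \beta_2\, y_{t-1}^2 \right)^{1/2}. $$

In both, yesterday's quantile feeds into today's, giving the autoregressive smoothing of the quantile path that the abstract describes.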
Article
In this paper we present a Bayesian approach to quantile self-exciting threshold autoregressive time series models. The simulation work shows that the method can deal very well with nonstationary time series with very large, but not necessarily symmetric, variations. The methodology has also been applied to the growth rate of US real GNP data and some interesting results have been obtained.
Article
The authors find support for a negative relation between conditional expected monthly return and conditional variance of monthly return using a GARCH-M model modified by allowing (1) seasonal patterns in volatility, (2) positive and negative innovations to returns having different impacts on conditional volatility, and (3) nominal interest rates to predict conditional variance. Using the modified GARCH-M model, they also show that monthly conditional volatility may not be as persistent as was thought. Positive unanticipated returns appear to result in a downward revision of the conditional volatility, whereas negative unanticipated returns result in an upward revision of conditional volatility.
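The second modification corresponds to what is now usually called a GJR-type asymmetry, sketched here in its standard form (not quoted from the article): the conditional variance follows

$$ \sigma_t^2 = \omega + \left( \alpha + \gamma\, \mathbb{I}(\epsilon_{t-1} < 0) \right) \epsilon_{t-1}^2 + \beta\, \sigma_{t-1}^2, $$

so a negative shock of a given size raises next period's variance by $\gamma \epsilon_{t-1}^2$ more than a positive shock of the same size.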
Article
For both the academic and the financial communities it is a familiar stylized fact that stock market returns have negative skewness and severe excess kurtosis. This stylized fact has been supported by a vast collection of empirical studies. Given that the conventional measures of skewness and kurtosis are computed as an average and that averages are not robust, we ask: “How useful are the measures of skewness and kurtosis used in previous empirical studies?” To answer this question, we provide a survey of robust measures of skewness and kurtosis from the statistics literature and carry out extensive Monte Carlo simulations that compare the conventional measures with the robust measures of our survey. An application of the robust measures to daily S&P500 index data indicates that the stylized facts might have been accepted too readily. We suggest that looking beyond the standard skewness and kurtosis measures can provide deeper and more accurate insight into market returns behavior.
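Representative examples from that survey (standard quantile-based measures; the first is associated with Bowley (1920), which appears in the reference list below): with $Q_p$ the $p$-th quantile, Bowley's robust skewness is

$$ SK_B = \frac{Q_{0.75} + Q_{0.25} - 2\,Q_{0.5}}{Q_{0.75} - Q_{0.25}}, $$

and Moors' octile-based robust kurtosis is

$$ KR_M = \frac{(Q_{0.875} - Q_{0.625}) + (Q_{0.375} - Q_{0.125})}{Q_{0.75} - Q_{0.25}}. $$

Both depend only on quantiles and are therefore far less sensitive to extreme observations than the conventional third- and fourth-moment measures.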
Article
This article presents a new way of modeling time-varying volatility. The authors generalize the usual stochastic volatility models to encompass regime-switching properties. The unobserved state variables are governed by a first-order Markov process. Bayesian estimators are constructed by Gibbs sampling. High-, medium-, and low-volatility states are identified for the Standard and Poor's 500 weekly return data. Persistence in volatility is explained by the persistence in the low- and the medium-volatility states. The high-volatility regime is able to capture the 1987 crash and overlap considerably with four U.S. economic recession periods.
Article
In this paper we exploit Gibbs sampling to provide a likelihood framework for the analysis of stochastic volatility models, demonstrating how to perform either maximum likelihood or Bayesian estimation. The paper includes an extensive Monte Carlo experiment which compares the efficiency of the maximum likelihood estimator with that of quasi-likelihood and Bayesian estimators proposed in the literature. We also compare the fit of the stochastic volatility model to that of ARCH models using the likelihood criterion to illustrate the flexibility of the framework presented.
Article
Volatility has been one of the most active areas of research in empirical finance and time series econometrics during the past decade. This chapter provides a unified continuous-time, frictionless, no-arbitrage framework for systematically categorizing the various volatility concepts, measurement procedures, and modeling procedures. We define three different volatility concepts: (i) the notional volatility corresponding to the ex-post sample-path return variability over a fixed time interval, (ii) the ex-ante expected volatility over a fixed time interval, and (iii) the instantaneous volatility corresponding to the strength of the volatility process at a point in time. The parametric procedures rely on explicit functional form assumptions regarding the expected and/or instantaneous volatility. In the discrete-time ARCH class of models, the expectations are formulated in terms of directly observable variables, while the discrete- and continuous-time stochastic volatility models involve latent state variable(s). The nonparametric procedures are generally free from such functional form assumptions and hence afford estimates of notional volatility that are flexible yet consistent (as the sampling frequency of the underlying returns increases). The nonparametric procedures include ARCH filters and smoothers designed to measure the volatility over infinitesimally short horizons, as well as the recently-popularized realized volatility measures for (non-trivial) fixed-length time intervals.
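The nonparametric measure that anchors this framework (standard realized variance, stated for reference): with intraday returns $r_{t,i}$, $i = 1, \dots, M$, over day $t$,

$$ \mathrm{RV}_t = \sum_{i=1}^{M} r_{t,i}^2 \;\longrightarrow\; \int_{t-1}^{t} \sigma^2(s)\, ds \quad (M \to \infty), $$

so realized variance consistently estimates the notional (integrated) variance as the sampling frequency increases, absent microstructure noise.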
Article
We present a new methodology for estimating time-varying conditional skewness. Our model allows for changing means and variances, uses a maximum likelihood framework with instruments, and assumes a non-central t distribution. We apply this method to daily, weekly, and monthly stock returns, and find that conditional skewness is important. In particular, we show that the evidence of asymmetric variance is consistent with conditional skewness. Inclusion of conditional skewness also impacts the persistence in conditional variance.
Backus, D., S. Foresi, K. Li, and L. Wu. 1997. Accounting for Biases in Black-Scholes. Tech. rep. 30. CRIF Working Paper Series.
Bowley, A. L. 1920. Elements of Statistics. Vol. 2. P. S. King & Son, Limited.
Harvey, A., and T. Chakravarty. 2008. Beta-t-EGARCH. Cambridge Working Papers in Economics. Faculty of Economics, University of Cambridge.
Hothorn, T., L. Möst, and P. Bühlmann. 2018. "Most Likely Transformations". Scandinavian Journal of Statistics 45:110-134.
Nelson, D. B. 1991. "Conditional Heteroskedasticity in Asset Returns: A New Approach". Econometrica 59:347-370.
Pearson, E. S., and J. W. Tukey. 1965. "Approximate Means and Standard Deviations Based on Distances Between Percentage Points of Frequency Curves". Biometrika 52 (3-4): 533-546.