Preprint

Edgeworth corrections for spot volatility estimator


Abstract

We develop an Edgeworth expansion theory for the spot volatility estimator under general assumptions on the log-price process that allow for drift and a leverage effect. Compared with the existing second-order asymptotic normality result, the expansion additionally requires estimating the skewness and kurtosis of the estimator, and it therefore yields a refined approximation to the finite-sample distribution of spot volatility. We also use the Edgeworth expansion to construct feasible one-sided and two-sided confidence intervals for spot volatility. A Monte Carlo simulation study shows that the intervals based on the Edgeworth expansion perform better than the conventional intervals based on the normal approximation, supporting our theoretical results.
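
As a rough illustration of how an Edgeworth-based interval of this kind can be implemented once the relevant quantities have been estimated, the Python sketch below applies a Cornish-Fisher-type quantile correction to a studentized spot volatility estimate. The inputs spot_vol_hat, se, skew_hat and ekurt_hat are placeholders for the point estimate, its standard error, and the estimated skewness and excess kurtosis of the studentized statistic; the paper's specific expansion terms and estimators are not reproduced here.

import numpy as np
from scipy.stats import norm

def cornish_fisher_quantile(alpha, skew, ekurt):
    # Standard Cornish-Fisher correction of the normal quantile using the
    # estimated skewness and excess kurtosis of the studentized statistic.
    z = norm.ppf(alpha)
    return (z
            + (z**2 - 1.0) * skew / 6.0
            + (z**3 - 3.0 * z) * ekurt / 24.0
            - (2.0 * z**3 - 5.0 * z) * skew**2 / 36.0)

def edgeworth_ci(spot_vol_hat, se, skew_hat, ekurt_hat, level=0.95):
    # Two-sided interval built by inverting the studentized statistic
    # (spot_vol_hat - sigma2) / se with corrected rather than normal quantiles.
    a = (1.0 - level) / 2.0
    q_lo = cornish_fisher_quantile(a, skew_hat, ekurt_hat)
    q_hi = cornish_fisher_quantile(1.0 - a, skew_hat, ekurt_hat)
    return spot_vol_hat - q_hi * se, spot_vol_hat - q_lo * se

# Example with placeholder numbers.
print(edgeworth_ci(spot_vol_hat=0.04, se=0.008, skew_hat=0.9, ekurt_hat=1.2))

A one-sided interval follows in the same way by correcting only the relevant tail quantile.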

References

Article
The availability of high-frequency financial data has led to substantial improvements in our understanding of financial volatility. Most existing literature focuses on estimating the integrated volatility over a fixed period. This article proposes a nonparametric threshold kernel method to estimate the time-dependent spot volatility and jumps when the underlying price process is governed by a Brownian semimartingale with finite-activity jumps. The threshold kernel estimator combines threshold estimation for integrated volatility with the kernel filtering approach used for spot volatility when the price process is driven only by diffusions without jumps. The proposed estimator is consistent and asymptotically normal and has the same rate of convergence as the estimator studied by Kristensen (2010) in a setting without jumps. The Monte Carlo simulation study shows that the proposed estimator exhibits excellent performance over a wide range of jump sizes and for different sampling frequencies. An empirical example is given to illustrate the potential applications of the proposed method.
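
A minimal sketch, under simplifying assumptions, of a threshold kernel spot volatility estimator of the kind described above: kernel-weighted squared log-returns, with increments exceeding a truncation level discarded so that finite-activity jumps are filtered out. The Gaussian kernel, the bandwidth, and the truncation rule c * dt**0.49 are illustrative choices, not those of the article.

import numpy as np

def threshold_kernel_spot_vol(times, log_prices, tau, bandwidth,
                              trunc_const=3.0, trunc_power=0.49):
    # Kernel-weighted sum of squared log-returns around time tau, keeping
    # only increments below the threshold u_n = trunc_const * dt**trunc_power.
    times = np.asarray(times, dtype=float)
    x = np.asarray(log_prices, dtype=float)
    dx = np.diff(x)                      # log-returns
    dt = np.diff(times)                  # observation spacings
    u = (times[:-1] - tau) / bandwidth   # Gaussian kernel weights (illustrative)
    w = np.exp(-0.5 * u**2) / (np.sqrt(2.0 * np.pi) * bandwidth)
    keep = np.abs(dx) <= trunc_const * dt**trunc_power
    return np.sum(w[keep] * dx[keep]**2)

# Example on a simulated constant-volatility path (sigma = 0.2, so the spot
# variance is 0.04); the kernel weights integrate to roughly one.
rng = np.random.default_rng(0)
n, sigma = 23400, 0.2
t = np.linspace(0.0, 1.0, n + 1)
p = np.cumsum(np.r_[0.0, sigma * np.sqrt(1.0 / n) * rng.standard_normal(n)])
print(threshold_kernel_spot_vol(t, p, tau=0.5, bandwidth=0.05))
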
Article
The availability of high-frequency intraday data allows us to accurately estimate stock volatility. This paper employs a bivariate diffusion to model the price and volatility of an asset and investigates kernel-type estimators of spot volatility based on high-frequency return data. We establish both pointwise and global asymptotic distributions for the estimators.
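
Ignoring the smoothing bias (i.e., assuming the bandwidth is taken small enough), the pointwise asymptotic normality of such kernel estimators suggests a simple plug-in interval with approximate variance 2 * sigma^4(tau) * (dt / h) * integral of K(x)^2 for equidistant observations. The sketch below is a generic illustration under these assumptions, not the intervals derived in the paper.

import numpy as np
from scipy.stats import norm

def kernel_spot_vol_ci(spot_var_hat, dt, bandwidth, level=0.95,
                       kernel_l2=1.0 / (2.0 * np.sqrt(np.pi))):
    # Pointwise normal-approximation interval for a kernel spot variance
    # estimate; kernel_l2 defaults to the L2 norm of the Gaussian kernel,
    # and the smoothing bias is assumed to be negligible.
    avar = 2.0 * spot_var_hat**2 * (dt / bandwidth) * kernel_l2
    z = norm.ppf(0.5 + level / 2.0)
    half = z * np.sqrt(avar)
    return spot_var_hat - half, spot_var_hat + half

# Example: estimate 0.04 from one-second data (dt = 1/23400), bandwidth 0.05.
print(kernel_spot_vol_ci(0.04, dt=1.0 / 23400, bandwidth=0.05))
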
Article
We propose bootstrap methods for a general class of nonlinear transformations of realized volatility which includes the raw version of realized volatility and its logarithmic transformation as special cases. We consider the independent and identically distributed (i.i.d.) bootstrap and the wild bootstrap (WB), and prove their first-order asymptotic validity under general assumptions on the log-price process that allow for drift and leverage effects. We derive Edgeworth expansions in a simpler model that rules out these effects. The i.i.d. bootstrap provides a second-order asymptotic refinement when volatility is constant, but not otherwise. The WB yields a second-order asymptotic refinement under stochastic volatility provided we choose the external random variable used to construct the WB data appropriately. None of these methods provides third-order asymptotic refinements. Both methods improve upon the first-order asymptotic theory in finite samples.
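
A minimal sketch of a wild-bootstrap percentile-type interval for realized volatility: each intraday return is multiplied by an i.i.d. external draw and the statistic is recomputed. The standard normal external variable used here is only a placeholder; as the abstract notes, second-order refinements depend on choosing this distribution appropriately.

import numpy as np

def wild_bootstrap_rv_ci(returns, level=0.95, n_boot=999, seed=0):
    # Basic (reverse-percentile) wild-bootstrap interval for integrated
    # volatility based on RV = sum(r_i^2). Bootstrap returns are
    # r_i* = r_i * eta_i with eta_i i.i.d. standard normal (placeholder
    # choice; E[eta^2] = 1 keeps the bootstrap RV centred at RV).
    r = np.asarray(returns, dtype=float)
    rng = np.random.default_rng(seed)
    rv = np.sum(r**2)
    eta = rng.standard_normal((n_boot, r.size))
    rv_star = np.sum((r * eta)**2, axis=1)
    a = (1.0 - level) / 2.0
    lo, hi = np.quantile(rv_star - rv, [a, 1.0 - a])
    return rv - hi, rv - lo

# Example with simulated 5-minute returns over one trading day.
rng = np.random.default_rng(1)
r = 0.2 * np.sqrt(1.0 / 78) * rng.standard_normal(78)
print(np.sum(r**2), wild_bootstrap_rv_ci(r))
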
Article
A basic result in mathematical finance, sometimes called the fundamental theorem of asset pricing (see [DR 87]), is that for a stochastic process (S_t)_{t ∈ ℝ₊}, the existence of an equivalent martingale measure is essentially equivalent to the absence of arbitrage opportunities. In finance the process (S_t)_{t ∈ ℝ₊} describes the random evolution of the discounted price of one or several financial assets. The equivalence of no-arbitrage with the existence of an equivalent probability martingale measure is at the basis of the entire theory of “pricing by arbitrage”. Starting from the economically meaningful assumption that S does not allow arbitrage profits (different variants of this concept will be defined below), the theorem allows the probability P on the underlying probability space (Ω, ℱ, P) to be replaced by an equivalent measure Q such that the process S becomes a martingale under the new measure. This makes it possible to use the rich machinery of martingale theory. In particular, the problem of fair pricing of contingent claims is reduced to taking expected values with respect to the measure Q. This method of pricing contingent claims has been known to actuaries for centuries under the name of the “equivalence principle”.
Article
The quality of the asymptotic normality of realized volatility can be poor if sampling does not occur at very high frequencies. In this article we consider an alternative approximation to the finite sample distribution of realized volatility based on Edgeworth expansions. In particular, we show how confidence intervals for integrated volatility can be constructed using these Edgeworth expansions. The Monte Carlo study we conduct shows that the intervals based on the Edgeworth corrections have improved properties relative to the conventional intervals based on the normal approximation. Contrary to the bootstrap, the Edgeworth approach is an analytical approach that is easily implemented, without requiring any resampling of one's data. A comparison between the bootstrap and the Edgeworth expansion shows that the bootstrap outperforms the Edgeworth corrected intervals. Thus, if we are willing to incur the additional computational cost involved in computing bootstrap intervals, these are preferred over the Edgeworth intervals. Nevertheless, if we are not willing to incur this additional cost, our results suggest that Edgeworth corrected intervals should replace the conventional intervals based on the first order normal approximation.
Article
We propose a kernel estimator for the spot volatility of a semi-martingale at a given time point by using high frequency data, where the underlying process accommodates a jump part of infinite variation. The estimator is based on the representation of the characteristic function of Lévy processes. The consistency of the proposed estimator is established under some mild assumptions. By assuming that the jump part of the underlying process behaves like a symmetric stable Lévy process around 0, we establish the asymptotic normality of the proposed estimator. In particular, with a specific kernel function, the estimator is variance efficient. We conduct Monte Carlo simulation studies to assess our theoretical results and compare our estimator with existing ones.
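
The basic idea can be sketched as follows: for a continuous martingale with spot variance sigma^2, the real part of the characteristic function of the scaled increment satisfies E[cos(u * dX / sqrt(dt))] ≈ exp(-u^2 * sigma^2 / 2), so minus twice the log of a local empirical average, divided by u^2, recovers sigma^2. The block size and the value of u below are arbitrary; the article's kernel weighting, choice of u, and treatment of infinite-variation jumps are not reproduced.

import numpy as np

def charfn_spot_vol(log_prices, dt, i0, block, u=2.0):
    # Local empirical characteristic function of scaled increments:
    #   L(u) = mean of cos(u * dX / sqrt(dt)) over a block around index i0,
    #   sigma2_hat = -(2 / u^2) * log(L(u)).
    x = np.asarray(log_prices, dtype=float)
    dx = np.diff(x)
    lo = max(0, i0 - block // 2)
    hi = min(dx.size, lo + block)
    L = max(np.mean(np.cos(u * dx[lo:hi] / np.sqrt(dt))), 1e-12)  # guard the log
    return -2.0 * np.log(L) / u**2

# Example: simulated diffusion with sigma = 0.3 (spot variance 0.09).
rng = np.random.default_rng(2)
n, sigma, dt = 23400, 0.3, 1.0 / 23400
p = np.cumsum(np.r_[0.0, sigma * np.sqrt(dt) * rng.standard_normal(n)])
print(charfn_spot_vol(p, dt, i0=n // 2, block=1200))
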
Article
The main contribution of this paper is to establish the formal validity of Edgeworth expansions for realized volatility estimators. First, in the context of no microstructure effects, our results rigorously justify the Edgeworth expansions for realized volatility derived in Gonçalves and Meddahi (2009). Second, we show that the validity of the Edgeworth expansions for realized volatility may not cover the optimal two-point distribution wild bootstrap proposed by Gonçalves and Meddahi (2009). We then propose a new optimal nonlattice distribution which ensures the second-order correctness of the bootstrap. Third, in the presence of microstructure noise, based on our Edgeworth expansions, we show that the new optimal choice proposed in the absence of noise is still valid in noisy data for the pre-averaged realized volatility estimator proposed by Podolskij and Vetter (2009). Finally, we show how confidence intervals for integrated volatility can be constructed using these Edgeworth expansions for noisy data. Our Monte Carlo simulations show that the intervals based on the Edgeworth corrections improve the finite-sample properties relative to the conventional intervals based on the normal approximation.
Article
We propose new nonparametric estimators of the integrated volatility of an Itô semimartingale observed at discrete times on a fixed time interval with mesh of the observation grid shrinking to zero. The proposed estimators achieve the optimal rate and variance of estimating integrated volatility even in the presence of infinite variation jumps when the latter are stochastic integrals with respect to locally "stable" Lévy processes, that is, processes whose Lévy measure around zero behaves like that of a stable process. In a first step, we estimate volatility locally from the empirical characteristic function of the increments of the process over blocks of shrinking length, and then we sum these estimates to form initial estimators of the integrated volatility. The estimators contain bias when jumps of infinite variation are present, and in a second step we estimate and remove this bias by using integrated volatility estimators formed from the empirical characteristic function of the high-frequency increments for different values of its argument. The second-step debiased estimators achieve efficiency and we derive a feasible central limit theorem for them.
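
In the same spirit as the characteristic-function sketch given earlier, the first step can be illustrated by summing local characteristic-function-based variance estimates over non-overlapping blocks; the second, debiasing step, which combines estimates at different values of the argument u to remove the jump-induced bias, is deliberately omitted here.

import numpy as np

def charfn_integrated_vol(log_prices, dt, block=1200, u=2.0):
    # First-step integrated variance estimate: on each non-overlapping block,
    # estimate the (locally roughly constant) variance from the empirical
    # characteristic function of the scaled increments, then sum
    # sigma2_hat * block * dt over blocks. Leftover increments at the end of
    # the sample and the debiasing step are ignored in this sketch.
    dx = np.diff(np.asarray(log_prices, dtype=float))
    total = 0.0
    for start in range(0, dx.size - block + 1, block):
        seg = dx[start:start + block]
        L = max(np.mean(np.cos(u * seg / np.sqrt(dt))), 1e-12)
        total += (-2.0 * np.log(L) / u**2) * block * dt
    return total

# Example: close to sigma**2 = 0.04 up to the discarded tail block.
rng = np.random.default_rng(3)
n, sigma, dt = 23400, 0.2, 1.0 / 23400
p = np.cumsum(np.r_[0.0, sigma * np.sqrt(dt) * rng.standard_normal(n)])
print(charfn_integrated_vol(p, dt))
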
Article
We construct a spot volatility estimator for high-frequency financial data which contain market microstructure noise. We prove consistency and derive the asymptotic distribution of the estimator. A data-driven method is proposed to select the scale parameter and the bandwidth parameter in the estimator. In Monte Carlo simulations, we compare the finite sample performance of our estimator with some existing estimators. Empirical examples are given to illustrate the potential applications of the estimator.
Article
We examine tests for jumps based on recent asymptotic results; we interpret the tests as Hausman-type tests. Monte Carlo evidence suggests that the daily ratio z-statistic has appropriate size, good power, and good jump detection capabilities revealed by the confusion matrix comprised of jump classification probabilities. We identify a pitfall in applying the asymptotic approximation over an entire sample. Theoretical and Monte Carlo analysis indicates that microstructure noise biases the tests against detecting jumps, and that a simple lagging strategy corrects the bias. Empirical work documents evidence for jumps that account for 7% of stock market price variance.
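
For concreteness, a hedged sketch of a daily ratio-type jump z-statistic of this kind, built from realized volatility, bipower variation, and tripower quarticity; the exact studentization, finite-sample adjustments, and the lagging strategy for microstructure noise discussed in the article may differ from this textbook-style version.

import numpy as np
from math import gamma, pi, sqrt

def ratio_jump_zstat(returns):
    # Ratio-type jump statistic: compares realized volatility (RV) with
    # bipower variation (BV), studentized with tripower quarticity (TP).
    # Approximately standard normal under the null of no jumps.
    r = np.abs(np.asarray(returns, dtype=float))
    n = r.size
    rv = np.sum(r**2)
    bv = (pi / 2.0) * (n / (n - 1.0)) * np.sum(r[1:] * r[:-1])
    mu43 = 2.0**(2.0 / 3.0) * gamma(7.0 / 6.0) / gamma(0.5)
    tp = n * mu43**-3 * (n / (n - 2.0)) * np.sum(
        r[2:]**(4.0 / 3.0) * r[1:-1]**(4.0 / 3.0) * r[:-2]**(4.0 / 3.0))
    rj = (rv - bv) / rv                       # relative jump measure
    theta = (pi**2 / 4.0) + pi - 5.0          # asymptotic variance constant
    return rj / sqrt(theta * (1.0 / n) * max(1.0, tp / bv**2))

# Example: a no-jump day of 1-minute returns; z should look standard normal.
rng = np.random.default_rng(4)
print(ratio_jump_zstat(0.2 * np.sqrt(1.0 / 390) * rng.standard_normal(390)))
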
Article
This paper shows that the asymptotic normal approximation is often insufficiently accurate for volatility estimators based on high frequency data. To remedy this, we derive Edgeworth expansions for such estimators. The expansions are developed in the framework of small-noise asymptotics. The results have application to Cornish-Fisher inversion and help set intervals more accurately than those relying on the normal distribution.
Article
In this paper, new fully nonparametric estimators of the diffusion coefficient of continuous time models are introduced. The estimators are based on Fourier analysis of the state variable trajectory observed and on the estimation of quadratic variation between observations by means of realized volatility. The estimators proposed are shown to be consistent and asymptotically normally distributed. Moreover, the Fourier estimator can be iterated to get a fully nonparametric estimate of the diffusion coefficient in a bivariate model in which one state variable is the volatility of the other. The estimators are shown to be unbiased in small samples using Monte Carlo simulations and are used to estimate univariate and bivariate models for interest rates.
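
A minimal sketch of a Fourier-type integrated variance estimate in the spirit of the approach described above, with time rescaled to [0, 2*pi] and one common normalization; the iteration that yields a fully nonparametric spot volatility estimate and the bivariate extension are not reproduced, and the normalization used in the paper may differ.

import numpy as np

def fourier_integrated_variance(times, log_prices, n_freq):
    # Fourier-type estimate of integrated variance over the sample period.
    # c_k are discrete Fourier coefficients of the price increments, and the
    # Bohr-convolution formula for the 0-th Fourier coefficient of sigma^2
    # gives IV_hat = (2*pi)^2 / (2*N + 1) * sum_{|k| <= N} c_k * c_{-k}.
    t = np.asarray(times, dtype=float)
    x = np.asarray(log_prices, dtype=float)
    s = 2.0 * np.pi * (t - t[0]) / (t[-1] - t[0])   # rescale time to [0, 2*pi]
    dx = np.diff(x)
    ks = np.arange(-n_freq, n_freq + 1)
    c = np.array([np.sum(np.exp(-1j * k * s[:-1]) * dx) for k in ks]) / (2.0 * np.pi)
    iv = (2.0 * np.pi)**2 / (2.0 * n_freq + 1.0) * np.sum(c * c[::-1])
    return float(np.real(iv))

# Example: simulated diffusion with sigma = 0.2 over one day, so IV is near 0.04.
rng = np.random.default_rng(5)
n = 23400
t = np.linspace(0.0, 1.0, n + 1)
p = np.cumsum(np.r_[0.0, 0.2 * np.sqrt(1.0 / n) * rng.standard_normal(n)])
print(fourier_integrated_variance(t, p, n_freq=200))
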
Article
This paper shows how to use realized kernels to carry out efficient feasible inference on the ex post variation of underlying equity prices in the presence of simple models of market frictions. The weights can be chosen to achieve the best possible rate of convergence and to have an asymptotic variance which equals that of the maximum likelihood estimator in the parametric version of this problem. Realized kernels can also be selected to (i) be analyzed using endogenously spaced data such as that in data bases on transactions, (ii) allow for market frictions which are endogenous, and (iii) allow for temporally dependent noise. The finite sample performance of our estimators is studied using simulation, while empirical work illustrates their use in practice.
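
A compact sketch of a realized-kernel-type estimator: realized autocovariances of returns are combined using weights from a smooth kernel function so that noise-induced autocorrelation is largely cancelled. The Parzen weight function and the bandwidth H below are common illustrative choices, and the end-point treatment and data-driven bandwidth selection discussed in the paper are omitted.

import numpy as np

def parzen_weight(x):
    # Parzen kernel weight function on [0, 1].
    x = abs(x)
    if x <= 0.5:
        return 1.0 - 6.0 * x**2 + 6.0 * x**3
    if x <= 1.0:
        return 2.0 * (1.0 - x)**3
    return 0.0

def realized_kernel(returns, H):
    # One common weighting:
    #   K = gamma_0 + sum_{h=1}^{H} k((h - 1) / H) * (gamma_h + gamma_{-h}),
    # where gamma_h = sum_j r_j * r_{j-h} is the h-th realized autocovariance
    # and gamma_{-h} = gamma_h for returns from a single series.
    r = np.asarray(returns, dtype=float)
    acc = np.sum(r * r)
    for h in range(1, H + 1):
        gamma_h = np.sum(r[h:] * r[:-h])
        acc += parzen_weight((h - 1.0) / H) * 2.0 * gamma_h
    return acc

# Example: noisy high-frequency returns; plain RV is biased upwards by the
# noise, while the realized kernel is noticeably closer to the true IV of 0.04.
rng = np.random.default_rng(6)
n, dt = 23400, 1.0 / 23400
p = np.cumsum(0.2 * np.sqrt(dt) * rng.standard_normal(n))
noisy = p + 0.0005 * rng.standard_normal(n)
r = np.diff(noisy)
print(np.sum(r**2), realized_kernel(r, H=30))
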
Article
The availability of intraday data on the prices of speculative assets means that we can use quadratic variation-like measures of activity in financial markets, called realized volatility, to study the stochastic properties of returns. Here, under the assumption of a rather general stochastic volatility model, we derive the moments and the asymptotic distribution of the realized volatility error, the difference between realized volatility and the discretized integrated volatility (which we call actual volatility). These properties can be used to allow us to estimate the parameters of stochastic volatility models without recourse to the use of simulation-intensive methods.
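
These moment and distributional results are what make feasible inference possible without simulation; a standard plug-in sketch, with the integrated quarticity estimated by the usual realized quarticity, is given below for a unit time interval and equidistant observations.

import numpy as np
from scipy.stats import norm

def rv_normal_ci(returns, level=0.95):
    # Feasible normal-approximation interval for integrated volatility:
    # RV = sum(r_i^2), with asymptotic variance estimated by (2 / n) * RQ,
    # where RQ = (n / 3) * sum(r_i^4) estimates the integrated quarticity.
    r = np.asarray(returns, dtype=float)
    n = r.size
    rv = np.sum(r**2)
    rq = (n / 3.0) * np.sum(r**4)
    se = np.sqrt(2.0 * rq / n)
    z = norm.ppf(0.5 + level / 2.0)
    return rv - z * se, rv + z * se

# Example with simulated 1-minute returns (sigma = 0.2, so IV = 0.04).
rng = np.random.default_rng(7)
print(rv_normal_ci(0.2 * np.sqrt(1.0 / 390) * rng.standard_normal(390)))
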
Article
A kernel weighted version of the standard realised integrated volatility estimator is proposed. By different choices of the kernel and bandwidth, the measure allows us to focus on specific characteristics of the volatility process. In particular, as the bandwidth vanishes, an estimator of the realised spot volatility is obtained. We denote this the filtered spot volatility. We show consistency and asymptotic normality of the kernel smoothed realised volatility and the filtered spot volatility. The choice of bandwidth is discussed and data-driven selection methods proposed. A simulation study examines the finite sample properties of the estimators.
Article
It is widely known that conditional covariances of asset returns change over time. Researchers doing empirical work have adopted many strategies for accommodating conditional heteroskedasticity. One popular strategy is performing rolling regressions in which only data from, say, the preceding five-year period is used to estimate the conditional covariance of returns at a given date. The authors develop continuous record asymptotic approximations for the measurement error in conditional variances when using these methods. They derive asymptotically optimal window lengths for the standard rolling regressions and optimal weights for weighted rolling regressions. The S&P 500 is used as an empirical example.
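
As a baseline for the rolling-window schemes analyzed above, the sketch below computes a flat-weighted rolling variance of returns; the article's point is that the window length and the weights (flat and one-sided here) can be chosen to minimize the asymptotic measurement error, which this naive version does not attempt.

import numpy as np

def rolling_variance(returns, window):
    # Flat-weighted rolling conditional-variance estimate: at each date, the
    # sample variance of the preceding `window` returns (demeaned within the
    # window). Entries before a full window exists are left as NaN.
    r = np.asarray(returns, dtype=float)
    out = np.full(r.size, np.nan)
    for i in range(window, r.size + 1):
        seg = r[i - window:i]
        out[i - 1] = np.mean((seg - seg.mean())**2)
    return out

# Example: roughly five years of daily returns with a one-year (252-day) window.
rng = np.random.default_rng(8)
print(rolling_variance(0.01 * rng.standard_normal(1260), window=252)[-1])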