
This paper proposes a dynamic framework for modeling and forecasting realized covariance matrices using vine copulas, allowing for more flexible dependencies between assets. Our model automatically guarantees positive definiteness of the forecast through the use of a Cholesky decomposition of the realized covariance matrix. We explicitly account for long-memory behavior by using ARFIMA and HAR models for the individual elements of the decomposition. Furthermore, our model incorporates non-Gaussian innovations and GARCH effects, accounting for volatility clustering and unconditional kurtosis. The dependence structure between assets is studied using vine copula constructions, which allow for nonlinearity and asymmetry without suffering from the inflexible tail behavior or symmetry restrictions of conventional multivariate models. Moreover, the copulas have a direct impact on the point forecasts of the realized covariance matrices, since these are computed as a nonlinear transformation of the forecasts for the Cholesky matrix. Besides studying in-sample properties, we assess the usefulness of our method in a one-day-ahead forecasting framework, comparing recent types of models for the realized covariance matrix based on a model confidence set approach. Additionally, we find that in Value-at-Risk (VaR) forecasting, vine models lead to lower capital requirements due to smoother and more accurate forecasts.
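The positivity guarantee from the Cholesky parametrization can be illustrated with a minimal sketch (the forecast values below are hypothetical, not the authors' model): any lower-triangular matrix L maps back to a symmetric positive semi-definite matrix L L'.

```python
# Minimal sketch: forecasting the Cholesky elements guarantees a valid
# covariance forecast, because S = L @ L.T is symmetric positive
# (semi-)definite for any lower-triangular L with nonzero diagonal.
import numpy as np

def forecast_to_covariance(chol_elements, dim):
    """Map a vector of forecasted Cholesky elements back to a covariance matrix."""
    L = np.zeros((dim, dim))
    L[np.tril_indices(dim)] = chol_elements  # fill lower triangle row by row
    return L @ L.T                           # positive semi-definite by construction

# Hypothetical forecasts for the 6 lower-triangular elements of a 3x3 matrix
forecasts = np.array([0.9, 0.2, 0.8, 0.1, 0.3, 0.7])
S = forecast_to_covariance(forecasts, 3)
```

Since the diagonal elements here are nonzero, S is in fact strictly positive definite, regardless of what the individual element forecasts are.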


... For more details, see Cooke (2001, 2002), Aas et al. (2009), and Kurowicka and Joe (2011). Vine copulas find many applications in financial econometrics, such as the estimation of Value-at-Risk, see Brechmann and Czado (2013), the modeling and forecasting of realized covariance matrices, see Brechmann et al. (2018), as well as the modeling of serial dependence in stationary time series, see, for example, Loaiza-Maya, Smith, and Maneesoonthorn (2018). ...

We analyze the properties of the Maximum Likelihood (ML) estimator when the underlying log-likelihood function is numerically maximized with the so-called zig-zag algorithm. By splitting the parameter vector into sub-vectors, the algorithm maximizes the log-likelihood function alternately with respect to one sub-vector while keeping the others constant. For situations where the algorithm is initialized with a consistent estimator and is iterated sufficiently often, we establish the asymptotic equivalence of the zig-zag estimator and the “infeasible” ML estimator being numerically approximated. This result gives guidance for practical implementations. We illustrate how to employ the algorithm in different estimation problems, such as a vine copula model and a vector autoregressive moving average model. The accuracy of the estimator is illustrated through simulations. Finally, we demonstrate the usefulness of our results in an application where the 2017 Bitcoin price surge is analyzed by a dynamic conditional correlation model.
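The zig-zag scheme itself is generic. A minimal sketch on a toy Gamma likelihood (chosen purely for illustration; the paper's applications are vine copula and VARMA models) alternates a numerical maximization over the shape parameter with the closed-form maximizer of the scale parameter:

```python
# Zig-zag sketch: alternately maximize the Gamma log-likelihood over the
# shape k (numerically, scale fixed) and the scale theta (closed form,
# shape fixed). The two conditional maximizations genuinely interact.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import gammaln

rng = np.random.default_rng(1)
x = rng.gamma(shape=3.0, scale=2.0, size=20000)
n, sx, slx = len(x), x.sum(), np.log(x).sum()

def negloglik(k, theta):
    return -((k - 1) * slx - sx / theta - n * (gammaln(k) + k * np.log(theta)))

k, theta = 1.0, x.mean()                 # crude starting values
for _ in range(50):                      # iterate "sufficiently often"
    res = minimize_scalar(lambda kk: negloglik(kk, theta),
                          bounds=(1e-3, 50.0), method="bounded")
    k = res.x                            # maximize over k, theta held fixed
    theta = x.mean() / k                 # closed-form maximizer over theta, k fixed
```

With enough iterations, (k, theta) settles at the joint ML estimate, close to the true (3.0, 2.0) here.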

... See also Simard et al. (2015). For the multivariate case, Brechmann et al. (2016) proposed a dynamic framework for modeling and forecasting realized covariance matrices using vine copulas. This vine-copula approach is still based on bivariate copulas to connect residuals of the univariate models for each element of the Cholesky factors. ...

Multivariate volatility modeling and forecasting are crucial in financial economics. This paper develops a copula-based approach to model and forecast realized volatility matrices. The proposed copula-based time series models can capture the hidden dependence structure of realized volatility matrices. The approach also automatically guarantees the positive definiteness of the forecasts through either a Cholesky decomposition or a matrix logarithm transformation. We consider both multivariate and bivariate copulas, including Student's t, Clayton, and Gumbel copulas. In an empirical application, we find that for one-day-ahead volatility matrix forecasting, these copula-based models achieve significant gains both in statistical precision and in constructing economically mean-variance efficient portfolios. Among the copulas considered, the multivariate t copula performs better in statistical precision, while the bivariate t copula has better economic performance.

... Taking logs is used widely in applied (time series) econometrics for linearizing relations or stabilizing variances. It has become a standard transformation for time series in numerous economic and financial applications; see, among others, Andersen, Bollerslev, and Huang (2011), Bauer and Vorkink (2011), Brechmann, Heiden, and Okhrin (2018), Golosnoy, Okhrin, and Schmid (2012), Hautsch (2012), Lütkepohl and Xu (2012), Mayr and Ulbricht (2015), or Gribisch (2018). Indeed, models in logs often turn out to be better suited for both estimation and forecasting. ...

In many economic applications, it is convenient to model and forecast a variable of interest in logs rather than in levels. However, the reverse transformation from log forecasts to levels introduces a bias. This paper compares different bias correction methods for such transformations of log series which follow a linear process with various types of error distributions. Based on Monte Carlo simulations and an empirical study of realized volatilities, we find no choice of correction method that is uniformly best. We recommend the use of the variance-based correction, either by itself or as part of a hybrid procedure where one first decides (using a pretest) whether the log series is highly persistent or not, and then proceeds either without bias correction (high persistence) or with bias correction (low persistence).
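The variance-based correction mentioned above can be illustrated in a few lines: if the log-scale forecast error is approximately N(0, s²), the naive back-transform exp(forecast) underestimates the level by the factor exp(s²/2). The parameters below are illustrative, simulated values:

```python
# If log(y) = log_forecast + e with e ~ N(0, s2), then
# E[y] = exp(log_forecast) * exp(s2 / 2), so exp(log_forecast) alone
# is biased downward; adding s2/2 in the exponent corrects this.
import numpy as np

rng = np.random.default_rng(2)
s2 = 0.5                                   # variance of the log-scale forecast error
log_forecast = 1.0                         # point forecast for log(y)
y = np.exp(log_forecast + rng.normal(0.0, np.sqrt(s2), size=1_000_000))

naive = np.exp(log_forecast)               # biased back-transform
corrected = np.exp(log_forecast + s2 / 2)  # variance-based correction
```

Comparing both to the simulated mean of y shows the corrected back-transform is close, while the naive one falls short by roughly exp(s2/2) - 1, about 28% here.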

... A simulation and forecasting exercise was conducted to highlight the importance of modelling both long memory and tail dependence to capture extreme events. More recently, Brechmann et al. (2018) studied a dynamic framework for forecasting realised covariances using vine copulas. The results obtained showed that for HAR models, using a vine structure significantly improves statistical loss, mean-variance efficient portfolios and VaR; while for ARFIMA-based vine models, the gains are less apparent, except for forecasting daily VaRs. ...

The main aim of this paper is to obtain a direct measure of the relation between the future and implied volatilities, in order to determine the appropriateness of using linear modelling to establish the implied–realised volatility relation. To achieve this aim, the dependence structure for implied and realised volatilities is modelled using bivariate standard copulas. Dependence parameters are estimated using a semiparametric method and by reference to three databases corresponding to different assets and frequencies. Two of these databases have been employed in previous research, and the third was constructed specifically for the present study. The first two databases span periods of major crises during the 1980s and 1990s, while the third contains data corresponding to the 2007 financial and economic crisis. The empirical evidence obtained shows that the dependence coefficient is always positive and constant over time, as expected. However, the influence of extreme-volatility events should be taken into account when the data present significant asymmetric tail dependence; models that impose symmetry underestimate the conditional expectation in extreme tail events. Therefore, it might be preferable to model nonlinear conditional expectations to forecast the realised volatility, using implied volatility as a predictor, as is the case with copula models and neural networks.

... This method has been widely applied in finance and economics. For example, Riccetti (2013) applies the vine copula method to the macro-asset allocation of portfolios containing a commodity component, and Arreola Hernandez (2014) fits vine copula models and portfolio optimization methods with respect to five risk measures to investigate the dependence risk and resource allocation characteristics of two 20-stock coal-uranium and oil-gas sector portfolios from the Australian market. See also Brechmann et al. (2015) and Huang et al. (2016), who analyze the real interest rate-stock market link using vine copula models. However, multivariate dependence among commodity prices, the real value of the US dollar, and the US real interest rate has not yet been addressed in the literature; it is explored in the current paper using vine copula methods. ...

Theoretical models suggest monetary policy is transmitted to commodity prices. We quantify this channel using several empirical methods under daily data. In early 2009, the US real interest rate became negative, with sample mean varying from 1.75 % (in the mid-1997 to January 28, 2009, subsample) to -1.50 % (in the January 29, 2009, to mid-September 2013 subsample). Gold displays higher risk-adjusted returns earlier, while copper and oil have higher risk-adjusted returns more recently. Shocks to the exchange rate and the real interest rate in VARs explain almost 30 % for oil and 32 % for copper more recently, when impulse responses are more significant. The time-varying correlation of oil with the real interest rate in the more recent period is -0.462, and its correlation with the exchange rate is -0.460, compared to -0.089 and -0.120, respectively, in the earlier period. Vine copula methods identify a dependence pattern of a C-vine copula with t-copulas in almost every pair among commodity prices, the real value of the US dollar, and the US real interest rate.

... However, it is possible to first apply a DCC decomposition approach and a Cholesky decomposition on the correlation matrix thereafter. In general, the nonlinear dependence of the elements in the decomposition can also be an advantage, as the dependency structure between the Cholesky elements can be studied and used for forecasting; see, e.g., Brechmann et al. (2015). ...

This paper studies the pitfalls of applying the Cholesky decomposition for forecasting multivariate volatility. We analyze the impact of one of the main issues in empirical application of using the decomposition: The sensitivity of the forecasts to the order of the variables in the covariance matrix. We find that despite being frequently used to guarantee positive semi-definiteness and symmetry of the forecasts, the Cholesky decomposition has to be used with caution, as the ordering of the variables leads to significant differences in forecast performance. A possible solution is provided by studying an alternative, the matrix exponential transformation. We show that in combination with empirical bias correction, forecasting accuracy of both decompositions does not significantly differ. This makes the matrix exponential a valuable option, especially in larger dimensions.
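The matrix exponential alternative can be sketched as follows (numbers are illustrative, and scipy's expm/logm stand in for whatever implementation one prefers): modeling the elements of log(S) leaves the forecast unconstrained, and mapping back with the matrix exponential always yields a symmetric positive definite matrix, regardless of how the variables are ordered.

```python
# Sketch of the matrix-log transformation: forecast the (unconstrained,
# symmetric) log-matrix, then map back with expm; the result is symmetric
# positive definite and invariant to the ordering of the assets.
import numpy as np
from scipy.linalg import expm, logm

S = np.array([[1.0, 0.3, 0.1],
              [0.3, 0.8, 0.2],
              [0.1, 0.2, 1.2]])            # a realized covariance matrix

A = logm(S).real                           # unconstrained symmetric log-matrix
A_forecast = A + 0.05 * np.eye(3)          # hypothetical forecast of the log-matrix
S_forecast = expm(A_forecast)              # back-transform: symmetric PD by construction
```

Note that, as the paper emphasizes, the back-transform is nonlinear, so an unbiased forecast of the log-matrix does not give an unbiased covariance forecast; this is where the empirical bias correction comes in.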

Models for realized volatility that take the specific form of temporal dependence into account are proposed. Current popular methods use the idea of mixed frequencies for forecasting realized volatility, but neglect the potential non-linear and non-monotonic temporal dependence. The proposed approach utilizes vine copulas to mimic different memory properties. HAR, MIDAS and bivariate copulas, which can be seen as special cases of the suggested modeling framework, are chosen as benchmarks. All models are evaluated within an extensive empirical study both in- and out-of-sample and their forecasting ability is compared statistically. The results suggest that one specific vine copula construction is significantly superior over the considered benchmarks in modeling time dependencies of realized volatilities.

A novel approach for dynamic modeling and forecasting of realized covariance matrices is proposed. Realized variances and realized correlation matrices are jointly estimated. The one-to-one relationship between a positive definite correlation matrix and its associated set of partial correlations corresponding to any vine specification is used. A method to select a vine structure, which allows for parsimonious time-series modeling, is introduced. The predicted partial correlations have a clear practical interpretation. Being algebraically independent, they are not subject to any algebraic constraints. The forecasting performance is evaluated on six-dimensional real data and compared to Cholesky decomposition based benchmark models.

This cumulative dissertation studies various approaches to improve stock market volatility forecasts based on nonlinearity and asymmetric dependence modeling as well as new innovative data sources. Studying multivariate dependence patterns using a vine copula approach and incorporating Google search data as a measure of investor attention in a framework of empirical similarity significantly improves volatility forecasts based on different statistical and economic measures. The importance of accurate volatility forecasts in portfolio and risk management is highlighted in several economic applications and empirical studies.

Multivariate copulas are commonly used in economics, finance and risk management. They allow for very flexible dependency structures, even though they are applied to transformed financial data after marginal time dependencies are removed. This is necessary to facilitate statistical parameter estimation. In this paper we consider a very flexible class of mixed C-vines, which allows the variables to be ordered according to their influence. Vines are built from bivariate copulas only, and the term 'mixed' refers to allowing the pair-copula family to be chosen individually for each term. In addition, many C-vine structure specifications are possible, and therefore we propose a novel data-driven sequential selection procedure, which selects both the C-vine structure and its attached pair-copula families with parameters. After model selection, maximum likelihood (ML) estimation of the parameters is facilitated using the sequential estimates as starting values. An extensive simulation study shows a satisfactory performance of ML estimates in small samples. Finally, an application involving US exchange rates demonstrates the need for mixed C-vine models.

We consider a Bayesian analysis of linear regression models that can account for skewed error distributions with fat tails. The latter two features are often observed characteristics of empirical datasets, and we formally incorporate them in the inferential process. A general procedure for introducing skewness into symmetric distributions is first proposed. Even though this allows for a great deal of flexibility in distributional shape, tail behavior is not affected. Applying this skewness procedure to a Student t distribution, we generate a “skewed Student” distribution, which displays both flexible tails and possible skewness, each entirely controlled by a separate scalar parameter. The linear regression model with a skewed Student error term is the main focus of the article. We first characterize existence of the posterior distribution and its moments, using standard improper priors and allowing for inference on skewness and tail parameters. For posterior inference with this model, we suggest a numerical procedure using Gibbs sampling. The latter proves very easy to implement and renders the analysis of quite challenging problems a practical possibility. Some examples illustrate the use of this model in empirical data analysis.

The paper proposes an additive cascade model of volatility components defined over different time periods. This volatility cascade leads to a simple AR-type model in the realized volatility with the feature of considering different volatility components realized over different time horizons, and is thus termed the Heterogeneous Autoregressive model of Realized Volatility (HAR-RV). In spite of the simplicity of its structure and the absence of true long-memory properties, simulation results show that the HAR-RV model successfully achieves the purpose of reproducing the main empirical features of financial returns (long memory, fat tails, and self-similarity) in a very tractable and parsimonious way. Moreover, empirical results show remarkably good forecasting performance.
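The HAR-RV regression reduces to OLS of tomorrow's realized volatility on daily, weekly, and monthly averages of past realized volatility. A minimal sketch on simulated data (the series and the 5/22-day horizons are the conventional choices; the data here are purely illustrative):

```python
# HAR-RV sketch: regress RV_{t} on yesterday's RV, the past 5-day mean,
# and the past 22-day mean, then form a one-step-ahead forecast.
import numpy as np

rng = np.random.default_rng(3)
rv = np.abs(rng.normal(0.0, 1.0, size=1000)) + 0.1   # stand-in RV series

def har_design(rv, t):
    daily = rv[t - 1]
    weekly = rv[t - 5:t].mean()
    monthly = rv[t - 22:t].mean()
    return [1.0, daily, weekly, monthly]

X = np.array([har_design(rv, t) for t in range(22, len(rv))])
y = rv[22:]
beta, *_ = np.linalg.lstsq(X, y, rcond=None)  # (const, b_daily, b_weekly, b_monthly)
forecast_next = np.dot(har_design(rv, len(rv)), beta)
```

The three components mimic the heterogeneous horizons of market participants; their sum of AR terms approximates long-memory decay with only four parameters.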

The Wishart Autoregressive (WAR) process is a dynamic model for time series of multivariate stochastic volatility. The WAR naturally accommodates the positivity and symmetry of volatility matrices and provides closed-form non-linear forecasts. The estimation of the WAR is straightforward, as it relies on standard methods such as the Method of Moments and Maximum Likelihood. For illustration, the WAR is applied to a sequence of intraday realized volatility-covolatility matrices from the Toronto Stock Exchange (TSX).

A vine is a new graphical model for dependent random variables. Vines generalize the Markov trees often used in modeling multivariate distributions. They differ from Markov trees and Bayesian belief nets in that the concept of conditional independence is weakened to allow for various forms of conditional dependence. A general formula for the density of a vine dependent distribution is derived. This generalizes the well-known density formula for belief nets based on the decomposition of belief nets into cliques. Furthermore, the formula allows a simple proof of the Information Decomposition Theorem for a regular vine. The problem of (conditional) sampling is discussed, and Gibbs sampling is proposed to carry out sampling from conditional vine dependent distributions. The so-called ‘canonical vines’ built on highest degree trees offer the most efficient structure for Gibbs sampling.
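In three dimensions, the canonical-vine density formula specializes to a product of three pair-copula terms, two in the first tree and one conditional term in the second. A minimal sketch with Gaussian pair copulas and uniform margins (the parameter values are illustrative, not from any fitted model):

```python
# 3-dimensional canonical vine with Gaussian pair copulas:
# c(u1,u2,u3) = c12(u1,u2) * c13(u1,u3) * c23|1(h(u2|u1), h(u3|u1)).
import numpy as np
from scipy.stats import norm

def gauss_copula_pdf(u, v, rho):
    """Density of the bivariate Gaussian copula."""
    z, w = norm.ppf(u), norm.ppf(v)
    q = rho * rho * (z * z + w * w) - 2 * rho * z * w
    return np.exp(-q / (2 * (1 - rho * rho))) / np.sqrt(1 - rho * rho)

def h(u, v, rho):
    """Conditional CDF h(u|v) of the Gaussian pair copula."""
    return norm.cdf((norm.ppf(u) - rho * norm.ppf(v)) / np.sqrt(1 - rho * rho))

def cvine3_pdf(u1, u2, u3, r12, r13, r23_1):
    tree1 = gauss_copula_pdf(u1, u2, r12) * gauss_copula_pdf(u1, u3, r13)
    tree2 = gauss_copula_pdf(h(u2, u1, r12), h(u3, u1, r13), r23_1)
    return tree1 * tree2

density = cvine3_pdf(0.3, 0.6, 0.7, r12=0.5, r13=0.4, r23_1=0.2)
```

Variable 1 is the root of the canonical vine; in a general vine, each pair-copula family and parameter can be chosen separately, which is the source of the flexibility discussed above.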

The book is a collection of essays in honour of Clive Granger. The chapters are by some of the world's leading econometricians, all of whom have collaborated with or studied with (or both) Clive Granger. Central themes of Granger's work are reflected in the book, with attention to tests for unit roots and cointegration, tests of misspecification, forecasting models and forecast evaluation, non-linear and non-parametric econometric techniques, and, overall, a careful blend of practical empirical work and strong theory. The book shows the scope of Granger's research and the range of the profession that has been influenced by his work.

Copulas are functions that join multivariate distribution functions to their one-dimensional margins. The study of copulas and their role in statistics is a new but vigorously growing field. In this book the student or practitioner of statistics and probability will find discussions of the fundamental properties of copulas and some of their primary applications. The applications include the study of dependence and measures of association, and the construction of families of bivariate distributions.
With 116 examples, 54 figures, and 167 exercises, this book is suitable as a text or for self-study. The only prerequisite is an upper level undergraduate course in probability and mathematical statistics, although some familiarity with nonparametric statistics would be useful. Knowledge of measure-theoretic probability is not required. The revised second edition includes new sections on extreme value copulas, tail dependence, and quasi-copulas.
Roger B. Nelsen is Professor of Mathematics at Lewis & Clark College in Portland, Oregon. He is also the author of Proofs Without Words: Exercises in Visual Thinking and Proofs Without Words II: More Exercises in Visual Thinking, published by the Mathematical Association of America.

This book is a collaborative effort from three workshops held over the last three years, all involving principal contributors to the vine-copula methodology. Research and applications in vines have been growing rapidly and there is now a growing need to collate basic results, and standardize terminology and methods. Specifically, this handbook will (1) trace historical developments, standardizing notation and terminology, (2) summarize results on bivariate copulae, (3) summarize results for regular vines, and (4) give an overview of its applications. In addition, many of these results are new and not readily available in any existing journals. New research directions are also discussed. © 2011 by World Scientific Publishing Co. Pte. Ltd. All rights reserved.

In this article, we introduce a new method of forecasting large-dimensional covariance matrices by exploiting the theoretical and empirical potential of mixing forecasts derived from different information sets. The main theoretical contribution of the article is to find the conditions under which a mixed approach (MA) provides a smaller mean squared forecast error than a standard one. The conditions are general and do not rely on distributional assumptions of the forecasting errors or on any particular model specification. The empirical contribution of the article regards a comprehensive comparative exercise of the new approach against standard ones when forecasting the covariance matrix of a portfolio of thirty stocks. The implemented MA uses volatility forecasts computed from high-frequency-based models and correlation forecasts using realized-volatility-adjusted dynamic conditional correlation models. The MA always outperforms the standard methods computed from daily returns and performs equally well to the ones using high-frequency-based specifications, however at a lower computational cost. © The Author, 2014. Published by Oxford University Press. All rights reserved.

Time varying correlations are often estimated with multivariate generalized autoregressive conditional heteroskedasticity (GARCH) models that are linear in squares and cross products of the data. A new class of multivariate models called dynamic conditional correlation models is proposed. These have the flexibility of univariate GARCH models coupled with parsimonious parametric models for the correlations. They are not linear but can often be estimated very simply with univariate or two-step methods based on the likelihood function. It is shown that they perform well in a variety of situations and provide sensible empirical results.

It is a common practice in finance to estimate volatility from the sum of frequently sampled squared returns. However, market microstructure poses challenges to this estimation approach, as evidenced by recent empirical studies in finance. The present work attempts to lay out theoretical grounds that reconcile continuous-time modeling and discrete-time samples. We propose an estimation approach that takes advantage of the rich sources in tick-by-tick data while preserving the continuous-time assumption on the underlying returns. Under our framework, it becomes clear why and where the “usual” volatility estimator fails when the returns are sampled at the highest frequencies. If the noise is asymptotically small, our work provides a way of finding the optimal sampling frequency. A better approach, the “two-scales estimator,” works for any size of the noise.
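The two-scales construction can be sketched on simulated noisy prices (the noise level, tick count, and the 300-tick slow scale are illustrative choices): the full-frequency realized variance is dominated by microstructure noise, while averaging sparse subsampled realized variances and subtracting a noise-bias estimate recovers the true variance.

```python
# Two-scales sketch: RV over all ticks grows with the noise, so combine an
# averaged sparse-scale RV with a bias correction from the fast scale.
import numpy as np

rng = np.random.default_rng(4)
n = 23400                                    # one tick per second, 6.5-hour day
true_var = 1e-4                              # daily integrated variance
price = np.cumsum(rng.normal(0, np.sqrt(true_var / n), size=n))
noisy = price + rng.normal(0, 5e-4, size=n)  # i.i.d. microstructure noise

def rv(p, step, offset=0):
    r = np.diff(p[offset::step])
    return np.sum(r * r)

K = 300                                      # slow scale: ~5-minute subsampling
rv_all = rv(noisy, 1)                        # fast scale, noise-dominated
rv_avg = np.mean([rv(noisy, K, k) for k in range(K)])
n_bar = (n - K + 1) / K                      # average subsample size
tsrv = rv_avg - (n_bar / n) * rv_all         # two-scales estimator
```

Here rv_all is two orders of magnitude above the true daily variance, while tsrv lands near 1e-4, illustrating why the "usual" estimator fails at the highest frequencies.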

So-called pair copula constructions (PCCs), specifying multivariate distributions only in terms of bivariate building blocks (pair copulas), constitute a flexible class of dependence models. To keep them tractable for inference and model selection, the simplifying assumption that copulas of conditional distributions do not depend on the values of the variables on which they are conditioned is popular. We show that the only Archimedean copulas in dimension d ≥ 3 which are of the simplified type are those based on the Gamma Laplace transform or its extension, while the Student-t copulas are the only ones arising from a scale mixture of Normals. Further, we illustrate how PCCs can be adapted for situations where conditional copulas depend on the values which are conditioned on, and demonstrate a technique to assess the distance of a multivariate distribution from a nearby distribution that satisfies the simplifying assumption.

Vine copula models have proven themselves as a very flexible class of multivariate copula models with regard to symmetry and tail dependence for pairs of variables. The full specification of a vine model requires the choice of a vine tree structure, the copula families for each pair copula term and their corresponding parameters. In this survey we discuss the different approaches, both frequentist and Bayesian, for these model choices so far and point to open problems.

The demand for accurate financial risk management involving larger numbers of assets is strong, and not only in view of the financial crisis of 2007-2009. In particular, dependencies among assets have not been captured adequately. While standard multivariate copulas have added some flexibility, this flexibility is insufficient in higher dimensional applications. Vine copulas can fill this gap by benefiting from the rich class of existing bivariate parametric copula families. Exploiting this in combination with GARCH models for margins, we develop a regular vine copula based factor model for asset returns, the Regular Vine Market Sector model, that is motivated by the classical CAPM and shown to be superior to the CAVA model proposed by Heinen and Valdesogo (2009). While the model can also be used to separate the systematic and idiosyncratic risk of specific stocks, we explicitly discuss how vine copula models can be employed for active and passive portfolio management. In particular, Value-at-Risk forecasting and asset allocation are treated in detail. All developed models and methods are used to analyze the Euro Stoxx 50 index, a major market indicator for the Eurozone. Relevant benchmark models such as the popular DCC model and the common Student-t copula are taken into account.

In this paper we provide a method for estimating multivariate distributions defined through hierarchical Archimedean copulas. In general, the true structure of the hierarchy is unknown, but we develop a computationally efficient technique to determine it from the data. For this purpose we introduce a hierarchical estimation procedure for the parameters and provide an asymptotic analysis. We consider both parametric and nonparametric estimation of the marginal distributions. A simulation study and an empirical application show the effectiveness of the grouping procedure in the sense of structure selection.

Pair-copula constructions (PCCs) offer great flexibility in modeling multivariate dependence. For inference purposes, however, conditional pair-copulas are often assumed to depend on the conditioning variables only indirectly through the conditional margins. The authors show here that this assumption can be misleading. To assess its validity in trivariate PCCs, they propose a visual tool based on a local likelihood estimator of the conditional copula parameter which does not rely on the simplifying assumption. They establish the consistency of the estimator and assess its performance in finite samples via Monte Carlo simulations. They also provide a real data application.

Volatility plays an important role when managing risks, composing portfolios, and pricing financial instruments. However, it is not directly observable and is usually estimated through parametric models such as those in the GARCH family. A more natural empirical measure of daily return variability is the so-called realized volatility, computed from high-frequency intraday returns, an unbiased and highly efficient estimator of the return volatility. At this point, with globalization effects driving market volatilities all over the world, it becomes of great interest to assess volatilities' co-movements and contagion. To this end we use pair-copulas, a powerful and flexible statistical model which allows for linear and nonlinear, possibly asymmetric forms of dependence without the restrictions posed by existing multivariate models. Given the importance of the Brazilian stock market in Latin America, in this paper we characterize the dependence structure linking the realized volatilities of seven Brazilian stocks. The realized volatilities are computed using an 8-year sample of 5-minute returns from 2001 through 2009. We include a more comprehensive study involving seven emerging markets, addressing the issue of contagion in a more general scenario.

A class of m-variate distributions with given margins and m(m-1)/2 dependence parameters, which is based on iteratively mixing conditional distributions, is derived. The family of multivariate normal distributions is a special case. The motivation for the class is to get parametric families that have m(m-1)/2 dependence parameters and properties that the family of multivariate normal distributions does not have. Properties of the class are studied, with details for (i) conditions for bivariate tail dependence and non-trivial limiting multivariate extreme value distributions and (ii) the range of dependence for a bivariate measure of association such as Kendall's tau.

We consider a Brownian semimartingale $X$ (the sum of a stochastic integral with respect to a Brownian motion and an integral with respect to Lebesgue measure), and for each $n$ an increasing sequence $T(n,i)$ of stopping times and a sequence of positive $\mathcal{F}_{T(n,i)}$-measurable variables $\Delta(n,i)$ such that $S(n,i) := T(n,i) + \Delta(n,i) \le T(n,i+1)$. We are interested in the limiting behavior of processes of the form $U_t^n(g) = \sqrt{\delta_n}\, \sum_{i:\, S(n,i) \le t} \big[ g(T(n,i), \xi_i^n) - \alpha_i^n(g) \big]$, where $\delta_n$ is a normalizing sequence tending to 0, $\xi_i^n = \Delta(n,i)^{-1/2}\,(X_{S(n,i)} - X_{T(n,i)})$, the $\alpha_i^n(g)$ are suitable centering terms, and $g$ is some predictable function of $(\omega, t, x)$. Under rather weak assumptions on the sequences $T(n,i)$ as $n$ goes to infinity, we prove that these processes converge (stably) in law to the stochastic integral of $g$ with respect to a random measure $B$ which is, conditionally on the path of $X$, a Gaussian random measure. We give some applications to rates of convergence in discrete approximations for the $p$-variation processes and local times. © The Author, 2017. Published by Oxford University Press. All rights reserved.

In practical applications of Box-Jenkins autoregressive integrated moving average (ARIMA) models, the number of times that the observed time series must be differenced to achieve approximate stationarity is usually determined by careful, but mostly informal, analysis of the differenced series. For many time series, some differencing seems appropriate, but taking the first or the second difference may be too strong. As an alternative, J. R. M. Hosking [Biometrika 68, 165-176 (1981; Zbl 0464.62088)], and C. W. J. Granger and R. Joyeux [J. Time Ser. Anal. 1, 15-29 (1980; Zbl 0503.62079)] proposed the use of fractional differences. For -1/2<d<1/2, the resulting fractional ARIMA processes are stationary. For 0<d<1/2, the correlations are not summable. The parameter d can be estimated, for instance by maximum likelihood. Unfortunately, estimation methods known so far have been restricted to the stationary range -1/2<d<1/2. We show how any real d>-1/2 can be estimated by an approximate maximum likelihood method. We thus obtain a unified approach to fitting traditional Box-Jenkins ARIMA processes as well as stationary and non-stationary fractional ARIMA processes. A confidence interval for d can be given. Tests, such as for unit roots in the autoregressive parameter or for stationarity, follow immediately. The resulting confidence intervals for the ARMA parameters take into account the additional uncertainty due to estimation of d. A simple algorithm for calculating the estimate of d and the ARMA parameters is given. Simulations and two data examples illustrate the result.
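The fractional difference operator (1 - L)^d underlying these ARFIMA models has a simple coefficient recursion, which a short sketch makes concrete (truncating the filter at the sample length is an illustrative choice):

```python
# Coefficients of (1 - L)^d follow pi_0 = 1, pi_k = pi_{k-1} * (k - 1 - d) / k,
# so fractional differencing is a truncated convolution with these weights.
import numpy as np

def frac_diff_weights(d, n):
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 1 - d) / k
    return w

def frac_diff(x, d):
    w = frac_diff_weights(d, len(x))
    # y_t = sum_{k=0}^{t} w_k * x_{t-k}
    return np.array([w[: t + 1] @ x[t::-1] for t in range(len(x))])

# d = 1 reproduces the ordinary first difference: weights (1, -1, 0, 0, ...)
assert np.allclose(frac_diff_weights(1.0, 5), [1, -1, 0, 0, 0])
```

For 0 < d < 1/2 the weights decay hyperbolically rather than geometrically, which is exactly the slow decay that makes the correlations non-summable.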

It has long been customary to measure the adequacy of an estimator by the smallness of its mean squared error. The least squares estimators were studied by Gauss and by other authors later in the nineteenth century. A proof that the best unbiased estimator of a linear function of the means of a set of observed random variables is the least squares estimator was given by Markov [12], a modified version of whose proof is given by David and Neyman [4]. A slightly more general theorem is given by Aitken [1]. Fisher [5] indicated that for large samples the maximum likelihood estimator approximately minimizes the mean squared error when compared with other reasonable estimators. This paper will be concerned with optimum properties or failure of optimum properties of the natural estimator in certain special problems with the risk usually measured by the mean squared error or, in the case of several parameters, by a quadratic function of the estimators. We shall first mention some recent papers on this subject and then give some results, mostly unpublished, in greater detail.

A large number of parameterizations have been proposed to model conditional variance dynamics in a multivariate framework. However, little is known about the ranking of multivariate volatility models in terms of their forecasting ability. The ranking of multivariate volatility models is inherently problematic because it requires the use of a proxy for the unobservable volatility matrix, and this substitution may severely affect the ranking. We address this issue by investigating the properties of the ranking with respect to alternative statistical loss functions used to evaluate model performance. We provide conditions on the functional form of the loss function that ensure that the proxy-based ranking is consistent for the true one - i.e., the ranking that would be obtained if the true variance matrix were observable. We identify a large set of loss functions that yield a consistent ranking. In a simulation study, we sample data from a continuous time multivariate diffusion process and compare the ordering delivered by both consistent and inconsistent loss functions. We further discuss the sensitivity of the ranking to the quality of the proxy and the degree of similarity between models. An application to three foreign exchange rates, where we compare the forecasting performance of 16 multivariate GARCH specifications, is provided.
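
The Frobenius (squared Euclidean) distance is one loss function commonly shown to deliver a consistent ranking when the proxy is conditionally unbiased; a minimal sketch (illustrative, not the authors' code):

```python
import numpy as np

def frobenius_loss(forecast, proxy):
    """Squared Frobenius distance between a covariance forecast and a
    (noisy but unbiased) realized-covariance proxy. Rankings based on
    this loss are robust to the proxy substitution discussed above."""
    diff = np.asarray(forecast) - np.asarray(proxy)
    return float(np.sum(diff ** 2))
```

Averaging this loss over the out-of-sample period for each model then yields a proxy-based ordering that is consistent for the true one.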

This note solves the puzzle of estimating degenerate Wishart Autoregressive processes, introduced by Gourieroux, Jasiak and Sufana (2009) to model multivariate stochastic volatility. It derives the asymptotic and empirical properties of the Method of Moment estimator of the Wishart degrees of freedom subject to different stationarity assumptions and specific distributional settings of the underlying processes.

Using only bivariate copulas as building blocks, regular vine copulas constitute a flexible class of high-dimensional dependency models. However, the flexibility comes along with an exponentially increasing complexity in larger dimensions. In order to counteract this problem, we propose using statistical model selection techniques to either truncate or simplify a regular vine copula. As a special case, we consider the simplification of a canonical vine copula using a multivariate copula as previously treated by Heinen & Valdesogo (2009) and Valdesogo (2009). We validate the proposed approaches by extensive simulation studies and use them to investigate a 19-dimensional financial data set of Norwegian and international market variables. The Canadian Journal of Statistics 40: 68–85; 2012 © 2012 Statistical Society of Canada

This paper proposes a methodology for dynamic modelling and forecasting of realized covariance matrices based on fractionally integrated processes. The approach allows for flexible dependence patterns and automatically guarantees positive definiteness of the forecast. We provide an empirical application of the model, which shows that it outperforms other approaches in the extant literature, both in terms of statistical precision as well as in terms of providing a superior mean-variance trade-off in a classical investment decision setting. Copyright © 2010 John Wiley & Sons, Ltd.
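
The positive-definiteness guarantee in such Cholesky-based approaches is mechanical: the elements of the Cholesky factor are forecast individually and mapped back via C C'. A sketch of that back-transformation, assuming the element-wise forecasts are produced elsewhere (helper names are illustrative):

```python
import numpy as np

def chol_elements(S):
    """Flatten the lower-triangular Cholesky factor of a covariance matrix;
    these elements can be modeled freely (e.g., by ARFIMA or HAR dynamics)."""
    C = np.linalg.cholesky(S)
    idx = np.tril_indices_from(C)
    return C[idx], idx

def reconstruct(elems, idx, dim):
    """Map (forecast) Cholesky elements back to a covariance matrix.
    C @ C.T is positive semi-definite by construction, for any real inputs."""
    C = np.zeros((dim, dim))
    C[idx] = elems
    return C @ C.T
```

Because the back-transformation is nonlinear, unbiased forecasts of the Cholesky elements do not translate into unbiased covariance forecasts, which is the bias issue taken up in the Chiriac and Voev follow-up work discussed below.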

In recent years analyses of dependence structures using copulas have become more popular than the standard correlation analysis. Starting from Aas et al. (2009) regular vine pair-copula constructions (PCCs) are considered the most flexible class of multivariate copulas. PCCs are involved objects but (conditional) independence present in data can simplify and reduce them significantly. In this paper the authors detect (conditional) independence in a particular vine PCC model based on bivariate t copulas by deriving and implementing a reversible jump Markov chain Monte Carlo algorithm. However, the methodology is general and can be extended to any regular vine PCC and to all known bivariate copula families. The proposed approach considers model selection and estimation problems for PCCs simultaneously. The effectiveness of the developed algorithm is shown in simulations and its usefulness is illustrated in two real data applications. The Canadian Journal of Statistics 39: 239–258; 2011 © 2011 Statistical Society of Canada

We provide a Bayesian analysis of pair-copula constructions (PCCs) (Aas et al., 2009), which outperform many other multivariate copula constructions in modeling dependencies in financial data. We use bivariate t-copulas as building blocks in a PCC to allow extreme events in bivariate margins individually. While parameters may be estimated by maximum likelihood, confidence intervals are difficult to obtain. Consequently, we develop a Markov chain Monte Carlo (MCMC) algorithm and compute credible intervals. Standard errors obtained from MCMC output are compared to those obtained from a numerical Hessian matrix and bootstrapping. As applications, we consider Norwegian financial returns and Euro swap rates. Finally, we apply the Bayesian model selection approach of Congdon (2006) to identify conditional independence, thus constructing more parsimonious PCCs. Copyright The Author 2010. Published by Oxford University Press.

This paper concerns itself with applications of pair-copulas in finance, and bridges the gap between theory and application. We provide a broad view of the problem of modeling multivariate financial log-returns using pair-copulas, gathering together for this purpose theoretical and computational results from the literature on canonical vines. From the practitioner’s viewpoint, the paper shows the advantages of modeling through pair-copulas and makes clear that it is possible to implement this methodology on a daily basis. All the necessary steps (model selection, estimation, validation, simulations, and applications) are discussed at a level easily understood by all data analysts.

This paper analyzes the forecast accuracy of the multivariate realized volatility model introduced by Chiriac and Voev (2010), subject to different degrees of model parametrization and economic evaluation criteria. By modelling the Cholesky factors of the covariance matrices, the model generates positive definite, but biased covariance forecasts. In this paper, we provide empirical evidence that parsimonious versions of the model generate the best covariance forecasts in the absence of bias correction. Moreover, we show by means of stochastic dominance tests that any risk-averse investor, regardless of the type of utility function or return distribution, would be better off using this model than using some standard approaches.

This paper introduces the model confidence set (MCS) and applies it to the selection of models. An MCS is a set of models that is constructed so that it will contain the best model with a given level of confidence. The MCS is in this sense analogous to a confidence interval for a parameter. The MCS acknowledges the limitations of the data; uninformative data yield an MCS with many models whereas informative data yield an MCS with only a few models. The MCS procedure does not assume that a particular model is the true model; in fact, the MCS procedure can be used to compare more general objects, beyond the comparison of models. We apply the MCS procedure to two empirical problems. First, we revisit the inflation forecasting problem posed by Stock and Watson (1999) and compute the MCS for their set of inflation forecasts. Second, we compare a number of Taylor rule regressions and determine the MCS of the best in terms of in-sample likelihood criteria.

C. Doléans-Dade and P. A. Meyer
Séminaire de Probabilités, Université de Strasbourg, Strasbourg, France
Intégrales Stochastiques par Rapport aux Martingales Locales (Stochastic Integrals with Respect to Local Martingales)

Parametric families of continuous bivariate distributions with given margins that include independence and perfect positive dependence are compared on the basis of some important properties. Since many such families exist, the comparisons are helpful for deciding on suitable models for multivariate data. The study of the properties is motivated by applications in extreme value inference. One property considered for bivariate families is whether they extend to multivariate families, and extensions are given when possible. Several new bivariate and multivariate families are included and some open research problems in the area of multivariate families are mentioned.

We present a new matrix-logarithm model of the realized covariance matrix of stock returns. The model uses latent factors which are functions of lagged volatility, lagged returns and other forecasting variables. The model has several advantages: it is parsimonious; it does not require imposing parameter restrictions; and, it results in a positive-definite estimated covariance matrix. We apply the model to the covariance matrix of size-sorted stock returns and find that two factors are sufficient to capture most of the dynamics.
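
The matrix-logarithm device works because the matrix exponential of any symmetric matrix is positive definite, so the dynamics can be modeled without constraints in log space and mapped back afterwards. A sketch of the transform pair via the eigendecomposition (helper names are illustrative):

```python
import numpy as np

def sym_logm(S):
    """Matrix logarithm of a symmetric positive-definite matrix.
    The result is symmetric but otherwise unconstrained."""
    vals, vecs = np.linalg.eigh(S)
    return vecs @ np.diag(np.log(vals)) @ vecs.T

def sym_expm(A):
    """Matrix exponential of a symmetric matrix. Its eigenvalues are
    exp(eigenvalues of A) > 0, so the result is always positive definite."""
    vals, vecs = np.linalg.eigh(A)
    return vecs @ np.diag(np.exp(vals)) @ vecs.T
```

Latent-factor dynamics can thus be specified freely on the entries of sym_logm of the realized covariance matrix, with positive definiteness restored by sym_expm.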

Building on the work of Bedford, Cooke and Joe, we show how multivariate data, which exhibit complex patterns of dependence in the tails, can be modelled using a cascade of pair-copulae, acting on two variables at a time. We use the pair-copula decomposition of a general multivariate distribution and propose a method for performing inference. The model construction is hierarchical in nature, the various levels corresponding to the incorporation of more variables in the conditioning sets, using pair-copulae as simple building blocks. Pair-copula decomposed models also represent a very flexible way to construct higher-dimensional copulae. We apply the methodology to a financial data set. Our approach represents the first step towards the development of an unsupervised algorithm that explores the space of possible pair-copula models, that also can be applied to huge data sets automatically.
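
For Gaussian building blocks, the three-dimensional pair-copula cascade can be written out explicitly and checked against the full trivariate Gaussian copula, using the partial correlation rho_{23;1} = (rho_23 - rho_12 rho_13) / sqrt((1 - rho_12^2)(1 - rho_13^2)). A sketch (function names are illustrative):

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

def gauss_pair_density(u, v, rho):
    """Bivariate Gaussian copula density c(u, v; rho)."""
    x, y = norm.ppf(u), norm.ppf(v)
    cov = [[1.0, rho], [rho, 1.0]]
    return multivariate_normal.pdf([x, y], cov=cov) / (norm.pdf(x) * norm.pdf(y))

def h_func(u, v, rho):
    """Conditional CDF h(u | v; rho) of the Gaussian pair-copula."""
    x, y = norm.ppf(u), norm.ppf(v)
    return norm.cdf((x - rho * y) / np.sqrt(1.0 - rho ** 2))

def cvine_density(u1, u2, u3, r12, r13, r23):
    """3-dimensional canonical-vine density from Gaussian pair-copulas:
    c = c12 * c13 * c23|1, with the conditional pair evaluated at
    h-transformed arguments and the partial correlation r23;1."""
    r23_1 = (r23 - r12 * r13) / np.sqrt((1 - r12 ** 2) * (1 - r13 ** 2))
    return (gauss_pair_density(u1, u2, r12)
            * gauss_pair_density(u1, u3, r13)
            * gauss_pair_density(h_func(u2, u1, r12), h_func(u3, u1, r13), r23_1))
```

With Gaussian pair-copulas the cascade reproduces the trivariate Gaussian copula exactly; the flexibility of the construction comes from replacing individual pairs with asymmetric or tail-dependent families.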

This paper is about how to estimate the integrated covariance 〈X,Y〉T of two assets over a fixed time horizon [0,T], when the observations of X and Y are “contaminated” and when such noisy observations are at discrete, but not synchronized, times. We show that the usual previous-tick covariance estimator is biased, and the size of the bias is more pronounced for less liquid assets. This is an analytic characterization of the Epps effect. We also provide the optimal sampling frequency which balances the tradeoff between the bias and various sources of stochastic error terms, including nonsynchronous trading, microstructure noise, and time discretization. Finally, a two scales covariance estimator is provided which simultaneously cancels (to first order) the Epps effect and the effect of microstructure noise. The gain is demonstrated in data.

Both volatility clustering and conditional non-normality can induce the leptokurtosis typically observed in financial data. In this paper, the exact representation of kurtosis is derived for both GARCH and stochastic volatility models when innovations may be conditionally non-normal. We find that, for both models, the volatility clustering and non-normality contribute interactively and symmetrically to the overall kurtosis of the series.
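
For the Gaussian-innovation GARCH(1,1) special case, the exact unconditional kurtosis is 3(1 - (alpha + beta)^2) / (1 - (alpha + beta)^2 - 2 alpha^2), finite only when the denominator is positive; the excess over 3 then comes purely from volatility clustering. A sketch:

```python
def garch11_kurtosis(alpha, beta):
    """Unconditional kurtosis of a GARCH(1,1) process with Gaussian
    innovations; the fourth moment exists only when
    1 - (alpha + beta)**2 - 2*alpha**2 > 0."""
    s = alpha + beta
    denom = 1.0 - s ** 2 - 2.0 * alpha ** 2
    if denom <= 0:
        raise ValueError("fourth moment does not exist")
    return 3.0 * (1.0 - s ** 2) / denom
```

With non-normal innovations the innovation kurtosis enters multiplicatively and interacts with the clustering term, which is the symmetric interaction the paper makes exact.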

Applied researchers often test for the difference of the Sharpe ratios of two investment strategies. A very popular tool to this end is the test of Jobson and Korkie (1981), which has been corrected by Memmel (2003). Unfortunately, this test is not valid when returns have tails heavier than the normal distribution or are of time series nature. Instead, we propose the use of robust inference methods. In particular, we suggest constructing a studentized time series bootstrap confidence interval for the difference of the Sharpe ratios and declaring the two ratios different if zero is not contained in the obtained interval. This approach has the advantage that one can simply resample from the observed data as opposed to some null-restricted data. A simulation study demonstrates the improved finite-sample performance compared to existing methods. In addition, two applications to real data are provided.
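
A simplified percentile variant of this idea (the paper uses a studentized interval, which has better properties) can be sketched with a circular block bootstrap that resamples both return series jointly to preserve their cross-dependence; all names and tuning parameters below are illustrative:

```python
import numpy as np

def sharpe_diff(r1, r2):
    """Difference of sample Sharpe ratios of two return series."""
    return r1.mean() / r1.std(ddof=1) - r2.mean() / r2.std(ddof=1)

def block_bootstrap_ci(r1, r2, block=10, reps=2000, alpha=0.05, seed=0):
    """Percentile confidence interval for the Sharpe-ratio difference from a
    circular block bootstrap. The same time indices are applied to both
    series, so their contemporaneous dependence is preserved."""
    rng = np.random.default_rng(seed)
    n = len(r1)
    stats = np.empty(reps)
    for b in range(reps):
        starts = rng.integers(0, n, size=int(np.ceil(n / block)))
        idx = np.concatenate([(s + np.arange(block)) % n for s in starts])[:n]
        stats[b] = sharpe_diff(r1[idx], r2[idx])
    return np.quantile(stats, [alpha / 2, 1 - alpha / 2])
```

The two ratios are declared different at level alpha if zero lies outside the interval.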

We show how pre-averaging can be applied to the problem of measuring the ex-post covariance of financial asset returns under microstructure noise and non-synchronous trading. A pre-averaged realised covariance is proposed, and we present an asymptotic theory for this new estimator, which can be configured to possess an optimal convergence rate or to ensure positive semi-definite covariance matrix estimates. We also derive a noise-robust Hayashi–Yoshida estimator that can be implemented on the original data without prior alignment of prices. We uncover the finite sample properties of our estimators with simulations and illustrate their practical use on high-frequency equity data.
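
The plain Hayashi-Yoshida estimator, on which the noise-robust version above builds, sums cross-products of returns over all pairs of overlapping observation intervals and needs no prior alignment of prices; a minimal sketch without the pre-averaging step:

```python
import numpy as np

def hayashi_yoshida(tx, x, ty, y):
    """Hayashi-Yoshida covariance for asynchronously observed log-prices:
    sum of dX_i * dY_j over all pairs of observation intervals
    (tx[i], tx[i+1]] and (ty[j], ty[j+1]] that overlap."""
    dx, dy = np.diff(x), np.diff(y)
    cov = 0.0
    for i in range(len(dx)):
        for j in range(len(dy)):
            if max(tx[i], ty[j]) < min(tx[i + 1], ty[j + 1]):
                cov += dx[i] * dy[j]
    return cov
```

On a synchronized grid only the diagonal pairs overlap, so the estimator collapses to the usual realized covariance.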

To estimate the parameters of a stationary univariate fractionally integrated time series, the unconditional exact likelihood function is derived. This allows the simultaneous estimation of all the parameters of the model by exact maximum likelihood. Issues involved in obtaining maximum likelihood estimates are discussed. Particular attention is given to efficient procedures to evaluate the likelihood function, obtaining starting values, and the small sample properties of the estimators. Limitations of previous estimation procedures are also discussed.

Volatility has been one of the most active and successful areas of research in time series econometrics and economic forecasting in recent decades. This chapter provides a selective survey of the most important theoretical developments and empirical insights to emerge from this burgeoning literature, with a distinct focus on forecasting applications. Volatility is inherently latent, and Section 1 begins with a brief intuitive account of various key volatility concepts. Section 2 then discusses a series of different economic situations in which volatility plays a crucial role, ranging from the use of volatility forecasts in portfolio allocation to density forecasting in risk management. Sections 3–5 present a variety of alternative procedures for univariate volatility modeling and forecasting based on the GARCH, stochastic volatility and realized volatility paradigms, respectively. Section 6 extends the discussion to the multivariate problem of forecasting conditional covariances and correlations, and Section 7 discusses volatility forecast evaluation methods in both univariate and multivariate cases. Section 8 concludes briefly.

Misperceptions about extreme dependencies between different financial assets have been an important element of the recent financial crisis. This paper studies inhomogeneity in dependence structures using Markov switching regular vine copulas. These account for asymmetric dependencies and tail dependencies in high dimensional data. We develop methods for fast maximum likelihood as well as Bayesian inference. Our algorithms are validated in simulations and applied to financial data. We find that regime switches are present in the dependence structure of various data sets and show that regime switching models could provide tools for the accurate description of inhomogeneity during times of crisis.

Regular vine distributions, which constitute a flexible class of multivariate dependence models, are discussed. Since multivariate copulae constructed through pair-copula decompositions were introduced to the statistical community, interest in these models has been growing steadily and they are finding successful applications in various fields. Research so far has, however, concentrated on so-called canonical and D-vine copulae, which are more restrictive cases of regular vine copulae. It is shown how to evaluate the density of arbitrary regular vine specifications. This opens the vine copula methodology to the flexible modeling of complex dependencies even in larger dimensions. In this regard, a new automated model selection and estimation technique based on graph theoretical considerations is presented. This comprehensive search strategy is evaluated in a large simulation study and applied to a 16-dimensional financial data set of international equity, fixed income and commodity indices which were observed over the last decade, in particular during the recent financial crisis. The analysis provides economically well interpretable results and interesting insights into the dependence structure among these indices.
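
A common graph-theoretic selection step of this kind (as in sequential approaches such as Dissmann et al.) chooses the first vine tree as a maximum spanning tree on pairwise absolute Kendall's tau, so the strongest dependencies are modeled unconditionally; a sketch using Prim's algorithm (illustrative, not the authors' implementation):

```python
import numpy as np
from scipy.stats import kendalltau

def first_vine_tree(data):
    """Select the first vine tree as a maximum spanning tree on pairwise
    |Kendall's tau| using Prim's algorithm; data is an (n_obs, d) array.
    Returns d - 1 edges (i, j) with i already in the tree."""
    d = data.shape[1]
    tau = np.zeros((d, d))
    for i in range(d):
        for j in range(i + 1, d):
            tau[i, j] = tau[j, i] = abs(kendalltau(data[:, i], data[:, j])[0])
    in_tree, edges = {0}, []
    while len(in_tree) < d:
        # pick the strongest edge connecting the tree to a new node
        best = max(((i, j) for i in in_tree for j in range(d) if j not in in_tree),
                   key=lambda e: tau[e])
        edges.append(best)
        in_tree.add(best[1])
    return edges
```

Pair-copulas are then fitted to the selected edges, and the procedure is repeated on the h-transformed data for each subsequent tree.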