Valor en riesgo: modelos econométricos contra metodologías tradicionales (Value at Risk: Econometric Models versus Traditional Methodologies)

Análisis Económico 01/2007
Source: DOAJ


This article evaluates the performance of different methods (parametric and simulation-based) for estimating the value at risk of portfolios composed of equity instruments. It also incorporates several econometric models that introduce conditionality in the variance and compares them against the traditional methods of value-at-risk estimation. The estimation was carried out for two periods: one of financial crisis and high volatility, and another of economic stability and lower volatility. In general, it was found that the estimated value at risk is greater in periods of economic crisis than in periods of stability and that, according to the classification of the Bank for International Settlements, all of the estimation methodologies used fall within the acceptance zone at 99% confidence. The results show that each model yields different value-at-risk measures; nevertheless, the historical-simulation methodology consistently produced the largest estimates, from which it follows that the methods incorporating conditionality in the variance (with the exception of EGARCH) would allow the risk manager to obtain lower estimates than the traditional techniques.
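The two families of methods compared in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the 99% standard-normal quantile of 2.326, and the synthetic return series in the test are assumptions.

```python
import math
import statistics

def historical_var(returns, confidence=0.99):
    """Historical-simulation VaR: the empirical loss quantile.

    Sorts observed returns and picks the one at the (1 - confidence)
    tail, so no distributional assumption is made.
    """
    ordered = sorted(returns)                         # worst return first
    index = max(0, math.ceil((1 - confidence) * len(ordered)) - 1)
    return -ordered[index]                            # VaR reported as a positive loss

def parametric_var(returns, confidence=0.99, z=2.326):
    """Variance-covariance (parametric) VaR under a normal assumption.

    z is the standard-normal quantile for the chosen confidence
    level (2.326 for 99%); VaR = z * sigma - mu, in return units.
    """
    mu = statistics.fmean(returns)
    sigma = statistics.pstdev(returns)
    return z * sigma - mu
```

Historical simulation reacts only to losses actually present in the sample window, which is one hedged reading of why it tends to produce the largest estimates in a window that contains a crisis, while parametric methods compress the tail into a single volatility number.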

  • ABSTRACT: ARCH and GARCH models have become important tools in the analysis of time series data, particularly in financial applications. These models are especially useful when the goal of the study is to analyze and forecast volatility. This paper gives the motivation behind the simplest GARCH model and illustrates its usefulness in examining portfolio risk. Extensions are briefly discussed.
    Article · Feb 2001 · Journal of Economic Perspectives
  • ABSTRACT: A 'long memory' property of stock market returns is investigated in this paper. It is found that not only is there substantially more correlation between absolute returns than between the returns themselves, but the power transformation of the absolute return |r_t|^d also has quite high autocorrelation for long lags. It is possible to characterize |r_t|^d as 'long memory', and this property is strongest when d is around 1. This result appears to argue against ARCH-type specifications based upon squared returns. But our Monte Carlo study shows that both ARCH-type models based on squared returns and those based on absolute returns can produce this property. A new general class of models is proposed which allows the power δ of the heteroskedasticity equation to be estimated from the data.
    Article · Feb 1993 · Journal of Empirical Finance
  • ABSTRACT: In this paper we investigate the ability of different models to produce useful VaR-estimates for exchange rate positions. Our analysis shows that it is important to take into account parameter uncertainty, since this leads to uncertainty in the predicted VaR. We make this uncertainty in the VaR explicit by means of simulation. Our empirical results suggest that more sophisticated tail-modeling approaches come at the cost of more uncertainty about the VaR-estimate itself. We show how to adjust VaR calculations in order to take the parameter uncertainty into account. This is accomplished through a data-driven method to deliver not just a point estimate of the VaR, but a region.
    Article · Oct 2005 · SSRN Electronic Journal
  • Article · Feb 1977 · The Journal of Business
  • ABSTRACT: This article compares econometric model specifications that have been proposed to explain the commonly observed characteristics of the unconditional distribution of daily stock returns. The empirical results indicate that the most likely ranking is (1) intertemporal dependence models, (2) Student t, (3) generalized mixture-of-normal distributions, (4) Poisson jump, and (5) the stationary normal. Among the intertemporal dependence models for conditional heteroscedasticity, those with a leverage (or asymmetry) effect are superior. The Glosten, Jagannathan, and Runkle specification is the most descriptive for individual stocks, while Nelson's exponential model is the most likely for stock indexes. Copyright 1994 by University of Chicago Press.
    Article · Feb 1994 · The Journal of Business
  • ABSTRACT: The abundance of high-frequency financial data and the rapid development of computer hardware have combined to transform financial economics into, arguably, the most empirically oriented field within the social sciences. At the same time, as a result of the difficulty of conducting genuine market experiments, empirical finance remains firmly grounded in the tradition of model-driven statistical inference that is characteristic of economics. Even so, the richness of data has often spurred a practical orientation that is more familiar in the natural sciences. The combination has proved fertile, leading to the classification of a set of loosely connected empirical topics as a distinct entity, financial econometrics.
    Article · Oct 1998 · Econometric Theory
  • ABSTRACT: Risk exposures are typically quantified in terms of a "value at risk" (VaR) estimate. A VaR estimate corresponds to a specific critical value of a portfolio's potential one-day profit and loss distribution. Given their functions both as internal risk management tools and as potential regulatory measures of risk exposure, it is important to assess and quantify the accuracy of an institution's VaR estimates. This study considers the formal statistical procedures that could be used to assess the accuracy of VaR estimates. The analysis demonstrates that verification of the accuracy of tail probability estimates becomes substantially more difficult as the cumulative probability being verified becomes smaller. In the extreme, it becomes virtually impossible to verify with any accuracy the potential losses associated with extremely rare events. Moreover, the economic importance of not being able to reliably detect an inaccurate model or an under-reporting institution potentially becomes much more pronounced as the cumulative probability being verified becomes smaller. It does not appear possible for a bank or its supervisor to reliably verify the accuracy of an institution's internal model loss exposure estimates using standard statistical techniques. The results have implications both for banks that wish to assess the accuracy of their internal risk measurement models and for supervisors who must verify the accuracy of an institution's risk exposure estimate reported under an internal models approach to model risk.
    Article · Feb 1995 · The Journal of Derivatives
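Several of the abstracts above revolve around conditional-variance models of the GARCH family. A minimal sketch of the GARCH(1,1) recursion feeding a one-day parametric VaR follows; the function name, the parameter values in the test, and the 99% quantile of 2.326 are assumptions, not estimates taken from any of the papers.

```python
import math

def garch11_var(returns, omega, alpha, beta, z=2.326):
    """One-day 99% parametric VaR from a GARCH(1,1) volatility forecast.

    Conditional variance recursion:
        sigma2_t = omega + alpha * r_{t-1}**2 + beta * sigma2_{t-1}
    The recursion is seeded with the unconditional variance
    omega / (1 - alpha - beta), which requires alpha + beta < 1
    (covariance stationarity).
    """
    sigma2 = omega / (1.0 - alpha - beta)   # long-run (unconditional) variance
    for r in returns:
        sigma2 = omega + alpha * r * r + beta * sigma2
    return z * math.sqrt(sigma2)            # VaR as a positive fraction of portfolio value
```

Because alpha weights the most recent squared return, a volatile stretch of data raises the forecast immediately, which is one hedged reading of why conditional-variance VaR tracks crisis and calm periods more closely than an equally weighted variance estimate.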