
Chapter 15 Bayesian Model Averaging in the Presence of Structural Breaks


Abstract

This chapter develops a return forecasting methodology that allows for instability in the relationship between stock returns and predictor variables, model uncertainty, and parameter estimation uncertainty. The predictive regression specification that is put forward allows for occasional structural breaks of random magnitude in the regression parameters, uncertainty about the inclusion of forecasting variables, and uncertainty about parameter values by employing Bayesian model averaging. The implications of these three sources of uncertainty and their relative importance are investigated from an active investment management perspective. It is found that the economic value of incorporating all three sources of uncertainty is considerable. A typical investor would be willing to pay up to several hundreds of basis points annually to switch from a passive buy-and-hold strategy to an active strategy based on a return forecasting model that allows for model and parameter uncertainty as well as structural breaks in the regression parameters.
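The forecast-combination step at the heart of this approach can be sketched in a few lines. The sketch below is illustrative rather than the chapter's actual method: it uses simulated data, enumerates all predictor subsets, and approximates the posterior model probabilities with BIC weights (all variable names and parameter values are made up):

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
T, K = 200, 3
X = rng.standard_normal((T, K))             # candidate predictors (simulated)
y = 0.5 * X[:, 0] + rng.standard_normal(T)  # simulated excess returns

def bic_and_forecast(cols, x_new):
    """OLS on a predictor subset; return its BIC and one-step-ahead forecast."""
    Z = np.column_stack([np.ones(T)] + [X[:, c] for c in cols])
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    resid = y - Z @ beta
    k = Z.shape[1]
    bic = T * np.log(resid @ resid / T) + k * np.log(T)
    z_new = np.concatenate([[1.0], x_new[list(cols)]])
    return bic, float(z_new @ beta)

x_new = rng.standard_normal(K)              # predictor values at forecast origin
models = [c for r in range(K + 1) for c in itertools.combinations(range(K), r)]
bics, fcasts = zip(*(bic_and_forecast(m, x_new) for m in models))
w = np.exp(-0.5 * (np.array(bics) - min(bics)))
w /= w.sum()                                # approximate posterior model weights
bma_forecast = float(w @ np.array(fcasts))  # weighted (model-averaged) forecast
```

With three candidate predictors there are eight subsets, including the constant-only model; the chapter additionally lets the regression parameters break at random times, which this sketch omits.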


... Chien [9] used a Lagrange Multiplier unit root test [26] to examine whether regime changes had broken down the stability of the ripple effect in Taiwan's housing market. Ravazzolo et al. [36] utilized Bayesian model averaging, an ensemble method, to accommodate structural breaks in the regression parameters, uncertainty about the inclusion of forecasting variables, and uncertainty about parameter values, with an application to the stock market. Vizek and Posedel [47] used threshold autoregressive (TAR) and momentum TAR (M-TAR) models, which define thresholds in terms of changes in the error term, to test whether housing prices in the United States, United Kingdom, Spain and Ireland were characterized by threshold effects. ...
... There are methods in business and economics, such as the Lagrange Multiplier unit root test [9], Bayesian model averaging [36], and the GARCH model [3], that can detect thresholds in the relationships between variables. These models, however, detect thresholds in the distribution of the time series variables and are not very useful for interpreting the relationship. ...
Article
Statistical thresholds occur when changes in the relationship between a response and predictor variables are not linear but abrupt at certain values of the predictors. In this paper, we define a piecewise-linear regression model which can detect two thresholds in the relationship via changes in slopes. We develop the corresponding Bayesian methodology for model estimation and inference by proposing prior distributions, deriving posterior distributions, and generating posterior values using the Metropolis and Gibbs sampling algorithms. The parameters in our model are easy to understand, highly interpretable, and flexible for inference. The methodology has been applied to estimate threshold effects in housing market pricing data in two cities, Kamloops and Chilliwack, in British Columbia, Canada. Our findings revealed that the implementation of changes in the government property tax policies had threshold effects in the market price trend. The proposed model will be useful for detecting threshold effects in other correlated time series data as well.
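A minimal sketch of the two-threshold piecewise-linear (hinge) regression idea, using a profile grid search over the threshold locations rather than the paper's Metropolis/Gibbs sampler, on simulated data with thresholds at 3 and 7:

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0.0, 10.0, 300))
# True mean function: slope changes at x = 3 and x = 7 (hinge terms)
y = (1.0 + 0.5 * x + 1.5 * np.maximum(x - 3.0, 0.0)
     - 2.0 * np.maximum(x - 7.0, 0.0) + 0.3 * rng.standard_normal(x.size))

def sse(t1, t2):
    """Residual sum of squares of the two-threshold piecewise-linear fit."""
    Z = np.column_stack([np.ones_like(x), x,
                         np.maximum(x - t1, 0.0), np.maximum(x - t2, 0.0)])
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    r = y - Z @ beta
    return float(r @ r)

grid = np.linspace(1.0, 9.0, 81)            # candidate threshold locations
t1_hat, t2_hat = min(((a, b) for a in grid for b in grid if a < b),
                     key=lambda p: sse(*p))
```

The hinge parameterization makes the slope changes directly interpretable, which is the interpretability advantage the abstract emphasizes over distribution-based threshold detectors.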
Article
This paper documents the fact that the factors extracted from a large set of macroeconomic variables contain information that can be useful for predicting monthly US excess stock returns over the period 1975-2014. Factor-augmented predictive regression models improve upon benchmark models that include only valuation ratios and interest rate related variables, and possibly individual macro variables, as well as the historical average excess return. The improvements in out-of-sample forecast accuracy are significant, both statistically and economically. The factor-augmented predictive regressions have superior market timing abilities, such that a mean variance investor would be willing to pay an annual performance fee of several hundreds of basis points to switch from the predictions offered by the benchmark models to those of the factor-augmented models. One important reason for the superior performance of the factor-augmented predictive regressions is the stability of their forecast accuracy, whereas the benchmark models suffer from a forecast breakdown during the 1990s. © 2016 Published by Elsevier B.V. on behalf of the International Institute of Forecasters.
Article
There is still some doubt about which economic variables really matter for the Fed’s decisions. In comparison with other estimations, this study uses the approach of Bayesian model averaging (BMA). The estimations show that over the long run, inflation, unemployment rates and long-term interest rates are the crucial variables in explaining the Federal Funds Rate. In the other two estimation samples, the fiscal deficit and monetary aggregates were also relevant. There is also evidence for interest rate smoothing. In addition, we account for parameter instability by combining BMA with time-varying coefficient (TVC) modelling. We find strong evidence for structural breaks. Finally, a model average is constructed via a TVC-BMA approach.
Article
Model uncertainty and recurrent or cyclical structural changes in macroeconomic time series dynamics are substantial challenges to macroeconomic forecasting. This paper discusses a macro variable forecasting methodology that combines model uncertainty and regime switching simultaneously. The proposed predictive regression specification permits both regime switching of the regression parameters and uncertainty about the inclusion of forecasting variables by employing Bayesian model averaging. In an empirical exercise involving quarterly US inflation, we observed that our Bayesian model averaging with regime switching leads to substantial improvements in forecast performance, particularly in the medium horizon (two to four quarters).
Article
A crucial problem in building a multiple regression model is the selection of predictors to include. The main thrust of this article is to propose and develop a procedure that uses probabilistic considerations for selecting promising subsets. This procedure entails embedding the regression setup in a hierarchical normal mixture model where latent variables are used to identify subset choices. In this framework the promising subsets of predictors can be identified as those with higher posterior probability. The computational burden is then alleviated by using the Gibbs sampler to indirectly sample from this multinomial posterior distribution on the set of possible subset choices. Those subsets with higher probability—the promising ones—can then be identified by their more frequent appearance in the Gibbs sample.
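The indicator-variable idea can be sketched with a small Gibbs sampler that flips each inclusion indicator given the others. This is an illustrative simplification of the approach, not the authors' exact hierarchical normal mixture: it integrates the coefficients out under a Zellner-style g-prior and uses a uniform prior over subsets, on simulated data where only the first and third regressors matter:

```python
from collections import Counter
import numpy as np

rng = np.random.default_rng(2)
n, K, g = 150, 4, 100.0
X = rng.standard_normal((n, K))          # candidate regressors
y = 1.2 * X[:, 0] - 0.8 * X[:, 2] + rng.standard_normal(n)
y = y - y.mean()

def log_marginal(gamma):
    """Log marginal likelihood of y with coefficients integrated out (g-prior)."""
    cols = [j for j in range(K) if gamma[j]]
    yy = y @ y
    if not cols:
        return -0.5 * n * np.log(yy)
    Z = X[:, cols]
    bhat, *_ = np.linalg.lstsq(Z, y, rcond=None)
    fit = y @ (Z @ bhat)                 # explained sum of squares
    return (-0.5 * len(cols) * np.log(1 + g)
            - 0.5 * n * np.log(yy - g / (1 + g) * fit))

gamma, draws = [0] * K, []
for _ in range(500):                     # Gibbs sweeps over the indicator vector
    for j in range(K):
        lp = []
        for v in (0, 1):
            gamma[j] = v
            lp.append(log_marginal(gamma))
        p_incl = 1.0 / (1.0 + np.exp(np.clip(lp[0] - lp[1], -50, 50)))
        gamma[j] = int(rng.random() < p_incl)
    draws.append(tuple(gamma))

best = Counter(draws).most_common(1)[0][0]   # most frequently visited subset
```

As the abstract describes, promising subsets are identified by their frequency of appearance in the Gibbs sample; here the sampler concentrates on the subset containing the two true predictors.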
Article
We estimate a forward-looking monetary policy reaction function for the postwar United States economy, before and after Volcker's appointment as Fed Chairman in 1979. Our results point to substantial differences in the estimated rule across periods. In particular, interest rate policy in the Volcker-Greenspan period appears to have been much more sensitive to changes in expected inflation than in the pre-Volcker period. We then compare some of the implications of the estimated rules for the equilibrium properties of inflation and output, using a simple macroeconomic model, and show that the Volcker-Greenspan rule is stabilizing.
Article
This article presents new evidence about the time-series behavior of stock prices. Daily return series exhibit significant levels of second-order dependence, and they cannot be modeled as linear white-noise processes. A reasonable return-generating process is empirically shown to be a first-order autoregressive process with conditionally heteroskedastic innovations. In particular, generalized autoregressive conditionally heteroskedastic, GARCH(1,1), processes fit the data very satisfactorily. Various out-of-sample forecasts of monthly return variances are generated and compared statistically. Forecasts based on the GARCH model are found to be superior. Copyright 1989 by the University of Chicago.
Article
In this article we examine the structural stability of predictive regression models of U.S. quarterly aggregate real stock returns over the postwar era. We consider predictive regression models of S&P 500 and CRSP equal-weighted real stock returns based on eight financial variables that display predictive ability in the extant literature. We test for structural stability using the popular Andrews SupF statistic and the Bai subsample procedure in conjunction with the Hansen heteroskedastic fixed-regressor bootstrap. We also test for structural stability using the recently developed methodologies of Elliott and Müller, and Bai and Perron. We find strong evidence of structural breaks in five of eight bivariate predictive regression models of S&P 500 returns and some evidence of structural breaks in the three other models. There is less evidence of structural instability in bivariate predictive regression models of CRSP equal-weighted returns, with four of eight models displaying some evidence of structural breaks. We also obtain evidence of structural instability in a multivariate predictive regression model of S&P 500 returns. When we estimate the predictive regression models over the different regimes defined by structural breaks, we find that the predictive ability of financial variables can vary markedly over time. Copyright 2006, Oxford University Press.
Article
In contrast to a posterior analysis given a particular sampling model, posterior model probabilities in the context of model uncertainty are typically rather sensitive to the specification of the prior. In particular, ‘diffuse’ priors on model-specific parameters can lead to quite unexpected consequences. Here we focus on the practically relevant situation where we need to entertain a (large) number of sampling models and we have (or wish to use) little or no subjective prior information. We aim at providing an ‘automatic’ or ‘benchmark’ prior structure that can be used in such cases. We focus on the normal linear regression model with uncertainty in the choice of regressors. We propose a partly non-informative prior structure related to a natural conjugate g-prior specification, where the amount of subjective information requested from the user is limited to the choice of a single scalar hyperparameter g0j. The consequences of different choices for g0j are examined. We investigate theoretical properties, such as consistency of the implied Bayesian procedure. Links with classical information criteria are provided. More importantly, we examine the finite sample implications of several choices of g0j in a simulation study. The use of the MC3 algorithm of Madigan and York (Int. Stat. Rev. 63 (1995) 215), combined with efficient coding in Fortran, makes it feasible to conduct large simulations. In addition to posterior criteria, we shall also compare the predictive performance of different priors. A classic example concerning the economics of crime will also be provided and contrasted with results in the literature. The main findings of the paper will lead us to propose a ‘benchmark’ prior specification in a linear regression context with model uncertainty.
Article
A simple method for subset selection of independent variables in regression models is proposed. We expand the usual regression equation to an equation that incorporates all possible subsets of predictors by adding indicator variables as parameters. The vector of indicator variables dictates which predictors to include. Several choices of priors can be employed for the unknown regression coefficients and the unknown indicator parameters. The posterior distribution of the indicator vector is approximated by means of the Markov Chain Monte Carlo algorithm. We select subsets with high posterior probabilities. In addition to linear models, we consider generalized linear models.
Article
The power of dividend yields to forecast stock returns, measured by regression R2, increases with the return horizon. We offer a two-part explanation. (1) High autocorrelation causes the variance of expected returns to grow faster than the return horizon. (2) The growth of the variance of unexpected returns with the return horizon is attenuated by a discount-rate effect: shocks to expected returns generate opposite shocks to current prices. We estimate that, on average, the future price increases implied by higher expected returns are just offset by the decline in the current price. Thus, time-varying expected returns generate ‘temporary’ components of prices.
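Part (1) of the explanation, that persistent expected returns make the regression R-squared grow with the return horizon, can be illustrated by simulation. Below, an AR(1) expected-return process stands in for the dividend yield; all parameter values are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)
T, phi = 50_000, 0.97                        # highly persistent expected return
mu = np.empty(T)
mu[0] = 0.0
eps = 0.02 * rng.standard_normal(T)          # shocks to expected returns
for t in range(1, T):
    mu[t] = phi * mu[t - 1] + eps[t]         # AR(1) expected return
r = mu[:-1] + 0.15 * rng.standard_normal(T - 1)  # realized = expected + noise

def r_squared(h):
    """R^2 from regressing the h-period cumulative return on the predictor mu."""
    cum = np.convolve(r, np.ones(h), "valid")    # rolling h-period return sums
    x = mu[:cum.size]
    b = np.cov(x, cum)[0, 1] / x.var()
    resid = cum - b * x
    return 1.0 - resid.var() / cum.var()

r2_short, r2_long = r_squared(1), r_squared(12)  # R^2 rises with the horizon
```

The one-period R-squared is modest, while the twelve-period R-squared is substantially larger, exactly the horizon pattern the abstract documents for dividend yields.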
Article
There is an ongoing debate in the literature about the apparent weak or negative relation between risk (conditional variance) and return (expected returns) in the aggregate stock market. We develop and estimate an empirical model based on the ICAPM to investigate this relation. Our primary innovation is to model and identify empirically the two components of expected returns: the risk component and the component due to the desire to hedge changes in investment opportunities. We also explicitly model the effect of shocks to expected returns on ex post returns and use implied volatility from traded options to increase estimation efficiency. As a result, the coefficient of relative risk aversion is estimated more precisely, and we find it to be positive and reasonable in magnitude. Although volatility risk is priced, as theory dictates, it contributes only a small amount to the time-variation in expected returns. Expected returns are driven primarily by the desire to hedge changes in investment opportunities. It is the omission of this hedge component that is responsible for the contradictory and counter-intuitive results in the existing literature.
Article
The idea of data augmentation arises naturally in missing value problems, as exemplified by the standard ways of filling in missing cells in balanced two-way tables. Thus data augmentation refers to a scheme of augmenting the observed data so as to make it more easy to analyze. This device is used to great advantage by the EM algorithm (Dempster, Laird, and Rubin 1977) in solving maximum likelihood problems. In situations when the likelihood cannot be approximated closely by the normal likelihood, maximum likelihood estimates and the associated standard errors cannot be relied upon to make valid inferential statements. From the Bayesian point of view, one must now calculate the posterior distribution of parameters of interest. If data augmentation can be used in the calculation of the maximum likelihood estimate, then in the same cases one ought to be able to use it in the computation of the posterior distribution. It is the purpose of this article to explain how this can be done.
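A classic small instance of data augmentation is a right-censored Gaussian mean: iterate between imputing the censored observations given the current parameter (the I-step) and drawing the parameter given the completed data (the P-step). The sketch below assumes known unit variance, a flat prior on the mean, and simple rejection sampling for the truncated normal; it is illustrative, not the article's own example:

```python
import numpy as np

rng = np.random.default_rng(5)
true_mu, c = 2.0, 2.5
z = true_mu + rng.standard_normal(500)
obs = np.minimum(z, c)                   # right-censored sample
cens = z >= c                            # censoring indicators (assumed known)
n = obs.size

def sample_tail(mu, lo, size):
    """Draw `size` values from N(mu, 1) truncated to (lo, inf) by rejection."""
    out = np.empty(0)
    while out.size < size:
        cand = mu + rng.standard_normal(20 * size)
        out = np.concatenate([out, cand[cand > lo]])
    return out[:size]

mu, mu_draws = obs.mean(), []
for it in range(1000):
    z_imp = obs.copy()
    z_imp[cens] = sample_tail(mu, c, int(cens.sum()))       # I-step: impute
    mu = z_imp.mean() + rng.standard_normal() / np.sqrt(n)  # P-step: draw mu
    if it >= 200:                                           # discard burn-in
        mu_draws.append(mu)

post_mean = float(np.mean(mu_draws))
```

The two conditional draws mirror the EM algorithm's E- and M-steps, but produce the full posterior of the mean rather than a point estimate; the posterior mean lands near the true value of 2.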
Article
A Bayesian approach is presented for estimating a mixture of linear Gaussian state-space models. Such models are used to model interventions in time series and nonparametric regression. Markov chain Monte Carlo sampling is usually necessary to obtain the posterior distributions of such mixture models, because it is difficult to obtain them analytically. The methodological contribution of the article is to derive a set of recursions for dynamic mixture models that efficiently implement a Markov chain Monte Carlo sampling scheme that converges rapidly to the posterior distribution. The methodology is illustrated by fitting an autoregressive model subject to interventions to zinc concentration in sludge.
Article
We make an analogy between images and statistical mechanics systems. Pixel gray levels and the presence and orientation of edges are viewed as states of atoms or molecules in a lattice-like physical system. The assignment of an energy function in the physical system determines its Gibbs distribution. Because of the equivalence between the Gibbs distribution and Markov random fields (MRFs), this assignment also determines an MRF image model. The energy function is a more convenient and natural mechanism for embodying picture attributes than are the local characteristics of the MRF. For a range of degradation mechanisms, including blurring, nonlinear deformations, and multiplicative or additive noise, the posterior distribution is an MRF with a structure akin to the image model. By the analogy, the posterior distribution defines another (imaginary) physical system. Gradual temperature reduction in the physical system isolates low energy states ("annealing"), or what is the same thing, the most probable states under the Gibbs distribution. The analogous operation under the posterior distribution yields the maximum a posteriori (MAP) estimate of the image given the degraded observations. The result is a highly parallel "relaxation" algorithm for MAP estimation. We establish convergence properties of the algorithm and we experiment with some simple pictures, for which good restorations are obtained at low signal-to-noise ratios.
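A toy version of the annealing scheme: binary image denoising under an Ising-style smoothness prior, with Gibbs updates at a gradually lowered temperature. The interaction strengths, noise level, and geometric cooling schedule below are all made-up illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 16
truth = np.zeros((n, n), dtype=int)
truth[4:12, 4:12] = 1                              # true image: a white square
noisy = np.where(rng.random((n, n)) < 0.15, 1 - truth, truth)  # 15% flip noise

beta_data, beta_smooth = 1.5, 1.0                  # illustrative energy weights
x = noisy.copy()

def local_energy(v, i, j):
    """Energy of assigning value v to pixel (i, j): data term + Ising smoothness."""
    e = beta_data * (v != noisy[i, j])
    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        a, b = i + di, j + dj
        if 0 <= a < n and 0 <= b < n:
            e += beta_smooth * (v != x[a, b])
    return e

T = 4.0                                            # initial temperature
for sweep in range(60):
    for i in range(n):
        for j in range(n):
            d = local_energy(1, i, j) - local_energy(0, i, j)
            p1 = 1.0 / (1.0 + np.exp(np.clip(d / T, -50, 50)))
            x[i, j] = int(rng.random() < p1)       # Gibbs update at temperature T
    T *= 0.9                                       # geometric cooling

error_rate = float((x != truth).mean())
```

As the temperature falls, the Gibbs updates concentrate on low-energy (high posterior) configurations, and the restored image has far fewer errors than the noisy observation.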
Article
In this paper, we analyze the economic value of predicting index returns as well as volatility. On the basis of fairly simple linear models, estimated recursively, we produce genuine out-of-sample forecasts for the return on the S&P 500 index and its volatility. Using monthly data from 1954 to 2001, we test the statistical significance of return and volatility predictability and examine the economic value of a number of alternative trading strategies. While we find strong evidence for market timing in both returns and volatility, the success of market timing and volatility timing varies considerably over the sample period. Further, it appears easier to forecast returns at times when volatility is high. For a mean-variance investor, this predictability is economically profitable, even if short sales are not allowed and transaction costs are quite large. The economic value of trading strategies that employ market timing in returns and volatility typically exceeds that of strategies that only employ timing in returns.
Article
We examine the evidence that expected security returns can be forecasted by the term premium, default premium, and dividend yield, in light of recent findings that similar security return patterns are associated with Federal Reserve monetary policy developments. We extend Fama and French's (1989) analysis by suggesting that the monetary environment influences investors' required returns, and hence the robustness of the models they propose. Our findings indicate that Fama and French's results vary dramatically across monetary environments; that is, the behavior of the business-conditions proxies and their influence on expected security returns is significantly affected by the monetary sector.
Article
Expected returns on common stocks and long-term bonds contain a term or maturity premium that has a clear business-cycle pattern (low near peaks, high near troughs). Expected returns also contain a risk premium that is related to longer-term aspects of business conditions. The variation through time in this premium is stronger for low-grade bonds than for high-grade bonds and stronger for stocks than for bonds. The general message is that expected returns are lower when economic conditions are strong and higher when conditions are weak.
Article
We use Bayesian model averaging to analyze the sample evidence on return predictability in the presence of model uncertainty. The analysis reveals in-sample and out-of-sample predictability, and shows that the out-of-sample performance of the Bayesian approach is superior to that of model selection criteria. We find that term and market premia are robust predictors. Moreover, small-cap value stocks appear more predictable than large-cap growth stocks. We also investigate the implications of model uncertainty from investment management perspectives. We show that model uncertainty is more important than estimation risk, and investors who discard model uncertainty face large utility losses.
Article
This study examines evidence of instability in models of ex post predictable components in stock returns related to structural breaks in the coefficients of state variables such as the lagged dividend yield, short interest rate, term spread and default premium. We estimate linear models of excess returns for a set of international equity indices and test for stability of the estimated regression parameters. There is evidence of instability for the vast majority of countries. Breaks do not generally appear to be uniform in time: different countries experience breaks at different times. For the majority of international indices, the predictable component in stock returns appears to have diminished following the most recent break. We assess the adequacy of the break tests and model selection procedures in a set of Monte Carlo experiments.
Article
This paper studies the intertemporal relation between the conditional mean and the conditional variance of the aggregate stock market return. We introduce a new estimator that forecasts monthly variance with past daily squared returns, the mixed data sampling (or MIDAS) approach. Using MIDAS, we find a significantly positive relation between risk and return in the stock market. This finding is robust in subsamples, to asymmetric specifications of the variance process and to controlling for variables associated with the business cycle. We compare the MIDAS results with tests of the intertemporal capital asset pricing model based on alternative conditional variance specifications and explain the conflicting results in the literature. Finally, we offer new insights about the dynamics of conditional variance.
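The MIDAS idea of forecasting monthly variance from weighted past daily squared returns can be sketched in a few lines. Note that the paper uses a flexible (beta-polynomial) lag weighting estimated from data; the fixed exponential weights and the 22-trading-day scaling below are simplifying assumptions:

```python
import numpy as np

def midas_variance(daily_r2, kappa=0.05, lags=250):
    """Forecast next month's variance from weighted past daily squared returns.

    Exponentially decaying weights (most recent day weighted most), scaled to
    a 22-trading-day month; both choices are illustrative assumptions.
    """
    w = np.exp(-kappa * np.arange(lags))
    w /= w.sum()                              # normalize the lag weights
    recent_first = daily_r2[-lags:][::-1]     # most recent observation first
    return 22.0 * float(w @ recent_first)

# With constant daily squared returns, the forecast is 22 times the daily level
forecast = midas_variance(np.full(250, 1e-4))
```

Because the weights sum to one, the forecast reduces to 22 times a weighted average of daily squared returns, which is the mixed-frequency aggregation the MIDAS estimator formalizes.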
Article
This paper demonstrates that the class of conditionally linear and Gaussian state-space models offers a general and convenient framework for simultaneously handling nonlinearity, structural change and outliers in time series. Many popular nonlinear time series models, including threshold, smooth transition and Markov-switching models, can be written in state-space form. It is then straightforward to add components that capture parameter instability and intervention effects. We advocate a Bayesian approach to estimation and inference, using an efficient implementation of Markov Chain Monte Carlo sampling schemes for such linear dynamic mixture models. The general modelling framework and the Bayesian methodology are illustrated by means of several examples. An application to quarterly industrial production growth rates for the G7 countries demonstrates the empirical usefulness of the approach.
Article
Several predetermined variables that reflect levels of bond and stock prices appear to predict returns on common stocks of firms of various sizes, long-term bonds of various default risks, and default-free bonds of various maturities. The returns on small-firm stocks and low-grade bonds are more highly correlated in January than in the rest of the year with previous levels of asset prices, especially prices of small-firm stocks. Seasonality is found in several conditional risk measures, but such seasonality is unlikely to explain, and in some cases is opposite to, the seasonal found in mean returns.
Article
We show how to use the Gibbs sampler to carry out Bayesian inference on a linear state space model with errors that are a mixture of normals and coefficients that can switch over time. Our approach simultaneously generates the whole of the state vector given the mixture and coefficient indicator variables and simultaneously generates all the indicator variables conditional on the state vectors. The states are generated efficiently using the Kalman filter. We illustrate our approach by several examples and empirically compare its performance to another Gibbs sampler where the states are generated one at a time. The empirical results suggest that our approach is both practical to implement and dominates the Gibbs sampler that generates the states one at a time.
Article
Attempts to characterize stock return predictability have resulted in little consensus on the important conditioning variables, giving rise to model uncertainty and data snooping fears. We introduce a new methodology that explicitly incorporates model uncertainty by comparing all possible models simultaneously and in which the priors are calibrated to reflect economically meaningful information. Our approach minimizes data snooping given the information set and the priors. We compare the prior views of a skeptic and a confident investor. The data imply posterior probabilities that are in general more supportive of stock return predictability than the priors for both types of investors.
Article
Correlations between international equity market returns tend to increase in highly volatile bear markets, which has led some to doubt the benefits of international diversification. This article solves the dynamic portfolio choice problem of a U.S. investor faced with a time-varying investment opportunity set modeled using a regime-switching process which may be characterized by correlations and volatilities that increase in bad times. International diversification is still valuable with regime changes and currency hedging imparts further benefit. The costs of ignoring the regimes are small for all-equity portfolios but increase when a conditionally risk-free asset can be held.
Article
This paper develops a new approach to change-point modeling that allows the number of change-points in the observed sample to be unknown. The model we develop assumes regime durations have a Poisson distribution. It approximately nests the two most common approaches: the time varying parameter model with a change-point every period and the change-point model with a small number of regimes. We focus considerable attention on the construction of reasonable hierarchical priors both for regime durations and for the parameters which characterize each regime. A Markov Chain Monte Carlo posterior sampler is constructed to estimate a change-point model for conditional means and variances. Our techniques are found to work well in an empirical exercise involving US GDP growth and inflation. Empirical results suggest that the number of change-points is larger than previously estimated in these series and the implied model is similar to a time varying parameter (with stochastic volatility) model.
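The change-point machinery is easiest to see in a stripped-down case: a single break in a Gaussian mean with known variance, scored by a profile likelihood over candidate break dates. The paper's model, with Poisson regime durations, hierarchical priors, and an unknown number of breaks, is far richer; this sketch uses simulated data:

```python
import numpy as np

rng = np.random.default_rng(7)
y = np.concatenate([rng.normal(0.0, 1.0, 60),    # regime 1
                    rng.normal(1.5, 1.0, 60)])   # regime 2: mean shifts by 1.5
T = y.size

def profile_loglik(tau):
    """Profile log likelihood of a mean break after observation tau (sigma = 1)."""
    a, b = y[:tau], y[tau:]
    return -0.5 * (((a - a.mean()) ** 2).sum() + ((b - b.mean()) ** 2).sum())

taus = np.arange(5, T - 5)                       # candidate break dates
ll = np.array([profile_loglik(int(t)) for t in taus])
post = np.exp(ll - ll.max())
post /= post.sum()                               # flat prior over break dates
tau_hat = int(taus[post.argmax()])
```

Evaluating every candidate date is feasible here only because there is one break; with an unknown number of breaks the space explodes, which is why the paper resorts to a Markov Chain Monte Carlo posterior sampler.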
Article
In October 1982 the FOMC deemphasized M1 and moved to what is commonly referred to as a borrowed reserves operating procedure. Sometime thereafter the FOMC switched to a funds rate targeting procedure but never formally announced the change. Given the close correspondence between a borrowed reserves operating procedure and a funds rate targeting procedure, Thornton (1988) suggested that the FOMC went immediately to a funds rate targeting procedure. Others date the switch to the funds rate procedure later. Meulendyke (1998) suggests the switch came in late 1987, while others suggest the change occurred later. This paper reviews the verbatim transcripts of the FOMC meetings to establish the timing of the switch. The verbatim transcripts suggest that the FOMC effectively switched to a funds rate targeting procedure in 1982. The documentary evidence is supported by an analysis of the spread between the funds rate and the funds rate target, which suggests that the differences in the behavior of the spread before October 1979 and after October 1982 are relatively small and economically unimportant.
Article
This paper analyzes the impact of changes in monetary policy on equity prices, with the objectives of both measuring the average reaction of the stock market and understanding the economic sources of that reaction. We find that, on average, a hypothetical unanticipated 25-basis-point cut in the Federal funds rate target is associated with about a 1% increase in broad stock indexes. Adapting a methodology due to Campbell and Ammer, we find that the effects of unanticipated monetary policy actions on expected excess returns account for the largest part of the response of stock prices. Copyright 2005 by The American Finance Association.
Article
We study asset allocation when the conditional moments of returns are partly predictable. Rather than first model the return distribution and subsequently characterize the portfolio choice, we determine directly the dependence of the optimal portfolio weights on the predictive variables. We combine the predictors into a single index that best captures time variations in investment opportunities. This index helps investors determine which economic variables they should track and, more importantly, in what combination. We consider investors with both expected utility (mean variance and CRRA) and nonexpected utility (ambiguity aversion and prospect theory) objectives and characterize their market timing, horizon effects, and hedging demands. Copyright The American Finance Association 2001.
Article
Sample evidence about the predictability of monthly stock returns is considered from the perspective of a risk-averse Bayesian investor who must allocate funds between stocks and cash. The investor uses the sample evidence to update prior beliefs about the parameters in a regression of stock returns on a set of predictive variables. The regression relation can seem weak when described by usual statistical measures but the current values of the predictive variables can exert a substantial influence on the investor's portfolio decision, even when the investor's prior beliefs are weighted against predictability. Copyright 1996 by American Finance Association.
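The mechanics described in this abstract can be sketched with a conjugate normal regression update and a one-period mean-variance allocation. Everything here is illustrative: the simulated data, the prior variances, the known noise variance, and the risk aversion `gamma` are assumptions for the sketch, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: monthly excess returns regressed on one predictor (e.g. dividend yield).
T = 240
x = rng.normal(size=T)                                   # standardized predictor
r = 0.005 + 0.002 * x + rng.normal(scale=0.04, size=T)   # weak predictability

X = np.column_stack([np.ones(T), x])
sigma2 = 0.04 ** 2                 # noise variance treated as known, for simplicity

# Prior weighted against predictability: the slope is shrunk tightly toward zero.
b0 = np.array([0.005, 0.0])        # prior mean: constant expected return, no predictability
V0 = np.diag([0.01 ** 2, 0.001 ** 2])

# Conjugate normal update: posterior precision = prior precision + data precision.
V_post = np.linalg.inv(np.linalg.inv(V0) + X.T @ X / sigma2)
b_post = V_post @ (np.linalg.inv(V0) @ b0 + X.T @ r / sigma2)

# Even with a skeptical prior, the current predictor value moves the stock/cash split.
x_now = 1.5                        # predictor currently 1.5 sd above its mean
mu = b_post @ np.array([1.0, x_now])
gamma = 5.0                        # relative risk aversion
w_stock = mu / (gamma * sigma2)    # mean-variance weight on stocks vs cash
print(round(float(w_stock), 2))
```

The point of the sketch matches the abstract's: the posterior slope can be statistically "weak" and yet shift `w_stock` noticeably as `x_now` varies.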
Article
This paper examines the relation between stock returns and stock market volatility. We find evidence that the expected market risk premium (the expected return on a stock portfolio minus the Treasury bill yield) is positively related to the predictable volatility of stock returns. There is also evidence that unexpected stock market returns are negatively related to the unexpected change in the volatility of stock returns. This negative relation provides indirect evidence of a positive relation between expected risk premiums and volatility.
Article
The primary aim of the paper is to place current methodological discussions in macroeconometric modeling contrasting the ‘theory first’ versus the ‘data first’ perspectives in the context of a broader methodological framework with a view to constructively appraise them. In particular, the paper focuses on Colander’s argument in his paper “Economists, Incentives, Judgement, and the European CVAR Approach to Macroeconometrics” contrasting two different perspectives in Europe and the US that are currently dominating empirical macroeconometric modeling and delves deeper into their methodological/philosophical underpinnings. It is argued that the key to establishing a constructive dialogue between them is provided by a better understanding of the role of data in modern statistical inference, and how that relates to the centuries old issue of the realisticness of economic theories.
Article
This paper presents evidence of persistent 'bull' and 'bear' regimes in UK stock and bond returns and considers their economic implications from the perspective of an investor's portfolio allocation. We find that the perceived state probability has a large effect on the optimal asset allocation, particularly at short investment horizons. If ignored, the presence of such regimes gives rise to substantial welfare costs. Parameter estimation uncertainty, while clearly important, does not overturn the conclusion that predictability in the return distribution linked to the presence of bull and bear states has a significant effect on investors' strategic asset allocation. Copyright 2005 Royal Economic Society.
Article
This paper provides a survey on studies that analyze the macroeconomic effects of intellectual property rights (IPR). The first part of this paper introduces different patent policy instruments and reviews their effects on R&D and economic growth. This part also discusses the distortionary effects and distributional consequences of IPR protection as well as empirical evidence on the effects of patent rights. Then, the second part considers the international aspects of IPR protection. In summary, this paper draws the following conclusions from the literature. Firstly, different patent policy instruments have different effects on R&D and growth. Secondly, there is empirical evidence supporting a positive relationship between IPR protection and innovation, but the evidence is stronger for developed countries than for developing countries. Thirdly, the optimal level of IPR protection should tradeoff the social benefits of enhanced innovation against the social costs of multiple distortions and income inequality. Finally, in an open economy, achieving the globally optimal level of protection requires an international coordination (rather than the harmonization) of IPR protection.
Article
This paper provides a new approach to forecasting time series that are subject to discrete structural breaks. We propose a Bayesian estimation and prediction procedure that allows for the possibility of new breaks occurring over the forecast horizon, taking account of the size and duration of past breaks (if any) by means of a hierarchical hidden Markov chain model. Predictions are formed by integrating over the parameters from the meta-distribution that characterizes the stochastic break-point process. In an application to U.S. Treasury bill rates, we find that the method leads to better out-of-sample forecasts than a range of alternative methods.
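The key idea in this abstract, allowing new breaks over the forecast horizon and integrating over a meta-distribution for post-break parameters, can be illustrated with a small Monte Carlo. The break probability, meta-distribution, and regime parameters below are all hypothetical placeholders, not the paper's estimates or its full hierarchical hidden Markov chain.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical meta-distribution: post-break levels drawn from N(mu0, tau^2),
# with a per-period break probability p (all values illustrative).
mu0, tau, p = 0.05, 0.02, 0.05
sigma = 0.01                # within-regime noise sd
mu_current = 0.04           # level of the regime prevailing at the forecast origin
H, n_sims = 12, 10_000      # forecast horizon and Monte Carlo draws

paths = np.empty((n_sims, H))
for i in range(n_sims):
    mu = mu_current
    for h in range(H):
        if rng.random() < p:             # a new break may occur out of sample...
            mu = rng.normal(mu0, tau)    # ...redrawing the level from the meta-distribution
        paths[i, h] = rng.normal(mu, sigma)

# The predictive mean integrates over both future break dates and post-break parameters.
forecast = paths.mean(axis=0)
```

Because breaks pull the level toward the meta-distribution mean, the forecast drifts from `mu_current` toward `mu0` as the horizon lengthens, which is the qualitative behavior the hierarchical approach delivers.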
Article
Green and Hollifield (1992) argue that the presence of a dominant factor would result in extreme negative weights in mean-variance efficient portfolios even in the absence of estimation errors. In that case, imposing no-short-sale constraints should hurt, whereas empirical evidence is often to the contrary. We reconcile this apparent contradiction. We explain why constraining portfolio weights to be nonnegative can reduce the risk in estimated optimal portfolios even when the constraints are wrong. Surprisingly, with no-short-sale constraints in place, the sample covariance matrix performs as well as covariance matrix estimates based on factor models, shrinkage estimators, and daily data. Copyright (c) 2003 by the American Finance Association.
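The contrast this abstract draws, extreme negative unconstrained weights versus better-behaved no-short-sale weights, is easy to reproduce numerically. The sketch below uses a simulated one-factor market and the sample covariance matrix; the asset count, factor loadings, and noise scale are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)

# Simulated returns for 8 assets driven by a strong common factor -- the setting
# in which unconstrained efficient weights tend to go extremely negative.
n, T = 8, 120
factor = rng.normal(scale=0.05, size=T)
returns = np.outer(factor, rng.uniform(0.5, 1.5, n)) + rng.normal(scale=0.02, size=(T, n))
S = np.cov(returns, rowvar=False)      # sample covariance matrix

def port_var(w, cov):
    return w @ cov @ w

cons = ({"type": "eq", "fun": lambda w: w.sum() - 1.0},)
w0 = np.full(n, 1.0 / n)

# Unconstrained minimum-variance weights (can take large short positions).
unc = minimize(port_var, w0, args=(S,), constraints=cons).x

# No-short-sale version: the nonnegativity bounds act like shrinkage on the
# covariance estimate, which is the paper's reconciliation of the puzzle.
lng = minimize(port_var, w0, args=(S,), constraints=cons,
               bounds=[(0.0, None)] * n).x
```

Comparing `unc` and `lng` out of sample (on fresh simulated returns) typically shows the constrained portfolio is less sensitive to estimation error in `S`, in line with the abstract's finding.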
Article
This article examines whether shifts in the stance of monetary policy can account for the observed predictability in excess stock returns. Using long-horizon regressions and short-horizon vector autoregressions, the article concludes that monetary policy variables are significant predictors of future returns, although they cannot fully account for observed stock return predictability. The author undertakes variance decompositions to investigate how monetary policy affects the individual components of excess returns (risk-free discount rates, risk premia, or cash flows). Copyright 1997 by American Finance Association.
Article
This paper presents estimates indicating that, for aggregate U.S. stock market data 1871-1986, a long historical average of real earnings is a good predictor of the present value of future real dividends. This is true even when the information contained in stock prices is taken into account. We estimate that for each year the optimal forecast of the present value of future real dividends is roughly a weighted average of moving average earnings and current real price, with between 2/3 and 3/4 of the weight on the earnings measure. This means that simple present value models of stock prices can be strongly rejected. We use a vector autoregressive approach which enables us to compute the implications of this for the behavior of stock prices and returns. We estimate that log dividend-price ratios are more variable than, and virtually uncorrelated with, their theoretical counterparts given the present value models. Annual returns on stocks are quite highly correlated with their theoretical counterparts, but are two to four times as variable. Our approach also reveals the connection between recent papers showing forecastability of long-horizon returns on corporate stocks, and earlier literature claiming that stock prices are too volatile to be accounted for in terms of simple present value models. We show that excess volatility directly implies the forecastability of long-horizon returns.
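The headline forecast rule in this abstract, a weighted average of moving-average earnings and current price with roughly 2/3 to 3/4 weight on earnings, reduces to a one-line computation. The series below are simulated stand-ins, not the 1871-1986 data, and the weight 0.7 is simply a point inside the reported range.

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative real earnings and real price index levels (not the paper's data).
years = 30
earnings = 10 + np.cumsum(rng.normal(0.02, 0.1, years))
price = 15 + np.cumsum(rng.normal(0.02, 0.15, years))

# Long moving average of real earnings, the predictor emphasized in the paper.
window = 10
ma_earnings = np.convolve(earnings, np.ones(window) / window, mode="valid")[-1]

# Optimal forecast of the present value of future real dividends:
# a weighted average with ~2/3 to 3/4 of the weight on the earnings measure.
w = 0.7
pv_forecast = w * ma_earnings + (1 - w) * price[-1]
```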
Article
This paper gives a comprehensive review of the literature on the interaction between real stock returns, inflation, and money growth, with a special emphasis on the role of monetary policy. This is an area of research that has interested monetary and financial economists for a long time. Monetary economists have been interested in the question whether money has any effect on real stock prices, while financial economists have investigated whether equity is a good hedge against inflation. Empirical studies show that money can be helpful in predicting future stock returns. Empirical evidence also suggests that equity is not a good hedge against inflation in the short run but may be so in the long run. The short-run negative relation between stock returns and inflation can easily be explained by theoretical models. If the central bank conducts a countercyclical monetary policy this will result in a negative relation between inflation and stock returns, while if it conducts a procyclical policy we could observe a positive relation. According to both theoretical and empirical studies investors receive an inflation risk premium for holding equity. Copyright 2001 by Blackwell Publishers Ltd

Article
Numerous studies report that standard volatility models have low explanatory power, leading some researchers to question whether these models have economic value. We examine this question by using conditional mean-variance analysis to assess the value of volatility timing to short-horizon investors. We find that the volatility timing strategies outperform the unconditionally efficient static portfolios that have the same target expected return and volatility. This finding is robust to estimation risk and transaction costs.
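A minimal version of the volatility timing strategy this abstract evaluates scales the risky-asset weight inversely with a one-step variance forecast, holding the conditional mean fixed. The rolling-window variance below is a crude stand-in for the conditional volatility models the paper uses; the regime structure, mean, and risk aversion are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

# Daily returns with a shift in volatility halfway through (two regimes, for illustration).
T = 500
vol = np.where(np.arange(T) < 250, 0.008, 0.020)
r = rng.normal(0.0003, vol)

# One-step variance forecast from a rolling window (stand-in for a volatility model).
window = 60
var_hat = np.array([r[t - window:t].var() for t in range(window, T)])

mu, gamma = 0.0003, 5.0
# Volatility timing: the risky weight moves inversely with forecast variance.
w = np.clip(mu / (gamma * var_hat), 0.0, 1.5)
```

In this sketch the strategy holds much more of the risky asset in the low-volatility regime than in the high-volatility one, which is the source of the economic value the paper documents for short-horizon investors.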
Article
We examine how the evidence of predictability in asset returns affects optimal portfolio choice for investors with long horizons. Particular attention is paid to estimation risk, or uncertainty about the true values of model parameters. We find that even after incorporating parameter uncertainty, there is enough predictability in returns to make investors allocate substantially more to stocks, the longer their horizon. Moreover, the weak statistical significance of the evidence for predictability makes it important to take estimation risk into account; a long-horizon investor who ignores it may over-allocate to stocks by a sizeable amount.
French, K., G. Schwert, and R. Stambaugh (1987), Expected Stock Returns and Volatility, Journal of Financial Economics, 19, 3-29.
Gerlach, R., C. Carter, and R. Kohn (2000), Efficient Bayesian Inference for Dynamic Mixture Models, Journal of the American Statistical Association, 95, 819-828.
Giordani, P., R. Kohn, and D. van Dijk (2006), A Unified Approach to Nonlinearity, Outliers and Structural Breaks, Journal of Econometrics, to appear.
Note: The figure presents the posterior means (solid line) of κ_jt on the left side and β_jt on the right side, conditional upon inclusion of the jth variable (s_j = 1). The dashed lines are the 25th and 75th percentiles of the posterior densities.
Figure 3: Out-of-sample results.