Journal of Forecasting

Published by Wiley

Online ISSN: 1099-131X · Print ISSN: 0277-6693

Articles


Analysis and prediction of the population in Spain: 1910-2000
August 1991 · 63 Reads · J Del Hoyo
"The starting hypothesis of this paper was the actual occurrence of important interactions between demographic and socio-economic factors when trying to reach population forecasts that may be more efficient than those obtained by mere extrapolative methods. In order to be able to implement this approach to the Spanish case it has been necessary to reconstruct first the Spanish population series by age and sex groups from 1910 to 1980. Later, we proceed to obtain population forecasts using alternative modeling strategies and comment on the potential problems that the new demographic situation may have for future public policy."

The impact of forecasting methodology on the accuracy of national population forecasts: evidence from the Netherlands and Czechoslovakia

August 1991 · 39 Reads

"This study considers the accuracy of national population forecasts of the Netherlands and the Czechoslovak Socialist Republic.... We look at the demographic components employed in each forecast, the procedure to extrapolate fertility and the level at which assumptions for each component are formulated. Errors in total population size, fertility, mortality and foreign migration, and age structure are considered. We discuss trends in errors and methodology since 1950 and compare the situations in the two countries. The findings suggest that methodology has only a very limited impact on the accuracy of national population forecasts."

Forecasting with growth curves: The effect of error structure

October 1988 · 24 Reads

"The main theme of this paper is an investigation into the importance of error structure as a determinant of the forecasting accuracy of the logistic model. The relationship between the variance of the disturbance term and forecasting accuracy is examined empirically. A general local logistic model is developed as a vehicle to be used in this investigation. Some brief comments are made on the assumptions about error structure, implicit or explicit, in the literature." The results suggest that "the variance of the disturbance term, when using the logistic to forecast human populations, is proportional to at least the square of population size."

Household projection methods

October 1987 · 24 Reads

"The role of household projections as a basis for forecasts of households at [the] national and sub-national level is discussed and a number of criteria for such projections are outlined. The projection method used by the Department of the Environment [in the United Kingdom] is examined in the context of these criteria and it is concluded that it is both practical and robust. However, it is open to criticism, first because of its failure to make the best use of the available data and of theoretical knowledge, and secondly because of its 'black box' nature. An alternative two-stage strategy is developed. The first stage involves constructing projections using a new curve-fitting method which takes account of within cohort life-cycle headship rate changes. The second is a method of analysing the resulting projections by modelling transition rates between different household states. Worked examples of both methods are presented."

P/E changes: Some new results

July 2009 · 58 Reads

The P/E ratio is often used as a metric to compare individual stocks and the market as a whole relative to historical valuations. We examine the factors that affect changes in the inverse of the P/E ratio (E/P) over time in the broad market (S&P 500 Index). Our model includes variables that measure investor beliefs and changes in tax rates and shows that these variables are important factors affecting the P/E ratio. We extend prior work by correcting for the presence of a long-run relation between variables included in the model. As frequently conjectured, changes in the P/E ratio have predictive power. Our model explains a large portion of the variation in E/P and accurately predicts the future direction of E/P, particularly when predicted changes in E/P are large or provide a consistent signal over more than one quarter. Copyright © 2008 John Wiley & Sons, Ltd.

Are 16-Month-Ahead Forecasts Useful? A Directional Analysis of Japanese GDP Forecasts
April 2006 · 49 Reads

Past literature casts doubt on the ability of long-term macroeconomic forecasts to predict the direction of change. We re-examine this issue using the Japanese GDP forecast data of 37 institutions, and find that their 16-month-ahead forecasts contain valuable information on whether the growth rate accelerates or not. Copyright © 2006 John Wiley & Sons, Ltd.

Monetary policy, composite leading economic indicators and predicting the 2001 recession

November 2004 · 45 Reads

On 26 November 2001, the National Bureau of Economic Research announced that the US economy had officially entered a recession in March 2001. This decision was a surprise and did not end the conflicting opinions expressed by economists. The matter was finally settled in July 2002, after a revision to 2001 real gross domestic product showed negative growth rates for its first three quarters. A series of political and economic events in 2000-01 increased the uncertainty about the state of the economy, which in turn produced less reliable economic indicators and forecasts. This paper evaluates the performance of two very reliable methodologies for predicting a downturn in the US economy using composite leading economic indicators (CLI) for the years 2000-01. It explores the impact of monetary policy on the CLI and on the overall economy, and shows how the gradualness and uncertainty of this impact affected the forecasts of these methodologies. It suggests that the overexposure of the CLI to monetary policy tools, together with a strong but less effective expansionary monetary policy, were the major factors behind the deterioration in these methodologies' predictions. To improve the forecasts, the paper explores including the CLI diffusion index as a prior in the Bayesian methodology. Copyright © 2004 John Wiley & Sons, Ltd.


Comparing density forecast models

Note: previous versions of this paper have been circulated with the title 'A Test for Density Forecast Comparison with Applications to Risk Management' since October 2003; see Bao et al. (2004).

January 2007 · 23 Reads

In this paper we discuss how to compare various (possibly misspecified) density forecast models using the Kullback-Leibler information criterion (KLIC) of a candidate density forecast model with respect to the true density. The KLIC differential between a pair of competing models is the (predictive) log-likelihood ratio (LR) between the two models. Even though the true density is unknown, using the LR statistic amounts to comparing models with the KLIC as a loss function and thus enables us to assess which density forecast model can approximate the true density more closely. We also discuss how this KLIC is related to the KLIC based on the probability integral transform (PIT) in the framework of Diebold et al. (1998). While they are asymptotically equivalent, the PIT-based KLIC is best suited for evaluating the adequacy of each density forecast model and the original KLIC is best suited for comparing competing models. In an empirical study with the S&P500 and NASDAQ daily return series, we find strong evidence for rejecting the normal-GARCH benchmark model, in favor of the models that can capture skewness in the conditional distribution and asymmetry and long memory in the conditional variance.  Copyright © 2007 John Wiley & Sons, Ltd.
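
As an illustrative sketch of the core idea (not the authors' code), the KLIC differential between two density forecast models reduces to the average predictive log-likelihood ratio, which can be tested with a Diebold-Mariano-type statistic; the two densities below are stand-ins chosen for the example:

```python
# Sketch: KLIC-based comparison of two density forecast models via the
# average predictive log-likelihood ratio. Densities are example choices.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
y = rng.standard_t(df=5, size=1000)           # realized (fat-tailed) returns

logscore_a = stats.norm.logpdf(y)             # model A: normal density forecast
logscore_b = stats.t.logpdf(y, df=5)          # model B: Student-t density forecast

d = logscore_b - logscore_a                   # per-period log-likelihood ratio
t_stat = d.mean() / (d.std(ddof=1) / np.sqrt(len(d)))
print(f"mean LR = {d.mean():.4f}, t = {t_stat:.2f}")  # positive favours model B
```

A significantly positive mean log-likelihood ratio indicates that model B approximates the true density more closely in the KLIC sense, matching the paper's use of the LR statistic as a loss-based model comparison.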

Did Unexpectedly Strong Economic Growth Cause the Oil Price Shock of 2003-2008?

August 2013 · 197 Reads

Forecasts are an inherent part of economic science, and the quest for perfect foresight occupies economists and researchers in multiple fields. The release of economic forecasts (and their revisions) is a popular and often publicized event, with a multitude of institutions and think-tanks devoted almost exclusively to that task. The European Central Bank (ECB) also publishes its forecasts for the euro area; however, the ECB's forecast accuracy is not a deeply researched theme. That accuracy is the main subject of this paper, which tries to contribute to understanding the nature of the errors in the ECB's forecasts and how they differ from other projections. We ask whether the ECB is accurate in its projections, making fewer errors than others, perhaps owing to some informational advantage. We conclude that the ECB seems to consistently underestimate the HICP inflation rate and overestimate GDP growth. Compared with the others, the ECB shows superior performance, almost always committing fewer errors, which signals a possible informational advantage. Since forecasting errors could jeopardize the ECB's credibility, public criticism could be avoided if the ECB simply left forecasting to others; naturally, this change should be weighed against the benefits of publishing forecasts.

Twelve lessons from 'Key technologies 2005': the French Technology Foresight Exercise

March 2003 · 1,268 Reads

The paper draws lessons from the French technology foresight exercise 'Key Technologies 2005'. It first describes the exercise as it took place: its context and objectives as well as the methodology that was adopted to identify, select and characterize 120 key technologies. Specifically, the paper describes the criteria used to select among the candidate key technologies, and then presents a specific tool which was developed to describe each technology (a characterization grid relating functional market needs and technological solutions to fulfil the generic need). Finally, twelve lessons are discussed. These deal with both the content of the foresight results and the methodology of running a technology foresight at national level. Copyright © 2003 John Wiley & Sons, Ltd.

A new production function estimate of the euro area output gap

Note: this paper is based on a report for Eurostat, 'Real time estimation of potential output, output gap, NAIRU and Phillips curve for the Euro-zone', part of the 'Advanced statistical and econometric techniques for the analysis of PEEIs' Eurostat project, December 2007.

January 2010 · 49 Reads

We develop a new version of the production function (PF) approach for estimating the output gap of the euro area. Assuming a CES (constant elasticity of substitution) technology, our model does not call for any (often imprecise) measure of the capital stock and improves the estimation of the trend total factor productivity using a multivariate unobserved components model. With real-time data, we assess this approach by comparing it with the Hodrick-Prescott (HP) filter and with a Cobb-Douglas PF approach with common cycle and implemented with a multivariate unobserved components model. Our new PF estimate appears highly concordant with the reference chronology of turning points and has better real-time properties than the univariate HP filter for sufficiently long time horizons. Its inflation forecasting power appears, like the other multivariate approach, less favourable than the statistical univariate method. Copyright © 2009 John Wiley & Sons, Ltd.
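
For context, a minimal sketch of the univariate HP-filter benchmark the paper compares against, using the statsmodels implementation; the series here is simulated, and lamb=1600 is the conventional quarterly smoothing value:

```python
# Sketch: univariate HP-filter output gap benchmark (simulated series).
import numpy as np
from statsmodels.tsa.filters.hp_filter import hpfilter

rng = np.random.default_rng(3)
log_gdp = np.cumsum(0.005 + 0.01 * rng.standard_normal(120))   # toy quarterly log GDP
cycle, trend = hpfilter(log_gdp, lamb=1600)                    # standard quarterly lambda
output_gap = 100.0 * cycle                                     # % deviation from trend
print(output_gap[-4:])                                         # most recent quarters
```

The end-of-sample revisions of this filter are a well-known weakness, which is why the paper's real-time comparison against it is a meaningful test.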

A semiparametric method for predicting bankruptcy

August 2007 · 78 Reads

Bankruptcy prediction methods based on a semiparametric logit model are proposed for simple random (prospective) and case-control (choice-based; retrospective) data. The unknown parameters and prediction probabilities in the model are estimated by the local likelihood approach, and the resulting estimators are analyzed through their asymptotic biases and variances. The semiparametric bankruptcy prediction methods using these two types of data are shown to be essentially equivalent. Thus our proposed prediction model can be directly applied to data sampled from the two important designs. One real data example and simulations confirm that our prediction method is more powerful than alternatives, in the sense of yielding smaller out-of-sample error rates. Copyright © 2007 John Wiley & Sons, Ltd.

Introduction to Special Issue Commemorating the 50th Anniversary of the Kalman Filter and 40th Anniversary of Box and Jenkins

January 2011 · 32 Reads

This special issue of the Journal of Forecasting jointly celebrates the 40th anniversary of the publication of George Box and Gwilym Jenkins' highly influential book Time Series Analysis: Forecasting and Control, which introduced a robust and easily implementable strategy for modelling time series, and the 50th anniversary of the appearance of Rudolf Kalman's article 'A new approach to linear filtering and prediction problems' in the Journal of Basic Engineering, which has had an extraordinary impact in many diverse fields, has led to major advances in recursive estimation, and has introduced the term Kalman filter into the lexicon of time series analysis and forecasting. The huge number of papers published in the Journal of Forecasting that reference these two publications bears testament to their seminal status and long-lasting influence, making them a natural choice to base a special issue around. Copyright © 2010 John Wiley & Sons, Ltd.

Evaluation of Extrapolative Forecasting Methods: Results of a Survey of Academicians and Practitioners

April 1982 · 39 Reads

There exists a large number of quantitative extrapolative forecasting methods which may be applied in research work or implemented in an organizational setting. For instance, the lead article of this issue of the Journal of Forecasting compares the forecasting ability of over twenty univariate methods. Forecasting researchers in various academic disciplines, as well as practitioners in private or public organizations, are commonly faced with the problem of evaluating forecasting methods and ultimately selecting one; thereafter, most become advocates of the method they have selected. On what basis are choices made? More specifically, what are the criteria used, or the dimensions judged important? If a survey were taken among academicians and practitioners, would the same criteria arise? Would they be weighted equally? Before you continue reading this note, write your criteria on a piece of paper in order of importance and answer the last two questions. This will enable you to see whether or not you share the same values as your colleagues, and to test the accuracy of your perception.

Traditional versus unobserved components methods to forecast quarterly national account aggregates

March 2007 · 7 Reads

We aim to assess the ability of two alternative forecasting procedures to predict quarterly national account (QNA) aggregates. The application of Box-Jenkins techniques to observed data constitutes the basis of traditional ARIMA and transfer function methods (BJ methods). The alternative procedure exploits the information of unobserved high- and low-frequency components of time series (UC methods). An informal examination of empirical evidence suggests that the relationships between QNA aggregates and coincident indicators are often clearly different for diverse frequencies. Under these circumstances, a Monte Carlo experiment shows that UC methods significantly improve the forecasting accuracy of BJ procedures if coincident indicators play an important role in such predictions. Otherwise (i.e., under univariate procedures), BJ methods tend to be more accurate than the UC alternative, although the differences are small. We illustrate these findings with several applications from the Spanish economy with regard to industrial production, private consumption, business investment and exports.  Copyright © 2007 John Wiley & Sons, Ltd.

Forecast Accuracy after Pretesting with an Application to the Stock Market

July 2004 · 58 Reads

We investigate the salary returns to the ability to play football with both feet. The majority of footballers are predominantly right footed. Using two data sets, a cross-section of footballers in the five main European leagues and a panel of players in the German Bundesliga, we find robust evidence of a substantial salary premium for two-footed ability, even after controlling for available player performance measures. We assess how this premium varies across the salary distribution and by player position.

Forecast Accuracy and Economic Gains from Bayesian Model Averaging Using Time Varying Weights

August 2010 · 94 Reads · Richard Kleijn · [...]

Several Bayesian model combination schemes, including some novel approaches that simultaneously allow for parameter uncertainty, model uncertainty and robust time-varying model weights, are compared in terms of forecast accuracy and economic gains using financial and macroeconomic time series. The results indicate that the proposed time-varying model weight schemes outperform other combination schemes in terms of predictive and economic gains. In an empirical application using returns on the S&P 500 index, time-varying model weights provide improved forecasts with substantial economic gains in an investment strategy including transaction costs. Another empirical example refers to forecasting US economic growth over the business cycle. It suggests that time-varying combination schemes may be very useful in business cycle analysis and forecasting, as these may provide an early indicator for recessions. Copyright © 2009 John Wiley & Sons, Ltd.
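
A minimal sketch of one way such time-varying weights can be formed, here from exponentially discounted predictive log-likelihoods; this is an illustrative assumption, not the paper's exact scheme:

```python
# Sketch: time-varying combination weights from exponentially discounted
# predictive log-likelihoods. Illustrative, not the paper's exact scheme.
import numpy as np

def time_varying_weights(loglik, delta=0.95):
    """loglik: (T, M) array of per-period predictive log-likelihoods for
    M models; returns (T, M) weights, row t using data up to time t."""
    T, M = loglik.shape
    score = np.zeros(M)
    weights = np.empty((T, M))
    for t in range(T):
        score = delta * score + loglik[t]     # discount older evidence
        w = np.exp(score - score.max())       # numerically stable softmax
        weights[t] = w / w.sum()
    return weights
```

The discount factor delta controls how quickly the combination reallocates weight toward recently well-performing models, which is what allows such schemes to act as an early indicator around turning points.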

The Accuracy of Individual and Group Forecasts from Business Outlook Surveys

January 1984 · 24 Reads

This paper reports on a comprehensive study of the distributions of summary measures of error for a large collection of quarterly multiperiod predictions of six variables representing inflation, real growth, unemployment, and percentage changes in nominal GNP and two of its more volatile components. The data come from surveys conducted since 1968 by the National Bureau of Economic Research and the American Statistical Association and cover more than 70 individuals professionally engaged in forecasting the course of the U.S. economy (mostly economists, analysts, and executives from the world of corporate business and finance). There is considerable differentiation among these forecasts, across the individuals, variables, and predictive horizons covered. Combining corresponding predictions from different sources can result in significant gains; thus the group mean forecasts are on average over time more accurate than most of the corresponding sets of individual forecasts. But there is also a moderate degree of consistency in the relative performance of a sufficient number of the survey members, as evidenced in positive rank correlations among ratios of individual to group root mean square errors.
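
A toy illustration of the group-mean result (simulated data, not the survey data): averaging individual forecasts typically yields a lower root mean square error than most individuals achieve, because idiosyncratic errors partially cancel.

```python
# Sketch: group-mean forecast vs individual forecasters (simulated data).
import numpy as np

rng = np.random.default_rng(1)
T, N = 80, 20                                   # 80 quarters, 20 forecasters
truth = rng.normal(size=T)
forecasts = truth[:, None] \
    + rng.normal(scale=1.0, size=(T, N)) \
    + rng.normal(scale=0.5, size=N)             # idiosyncratic noise + fixed bias

rmse_ind = np.sqrt(((forecasts - truth[:, None]) ** 2).mean(axis=0))
rmse_group = np.sqrt(((forecasts.mean(axis=1) - truth) ** 2).mean())
print(f"group-mean RMSE {rmse_group:.2f} beats "
      f"{(rmse_ind > rmse_group).mean():.0%} of individuals")
```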

Assessing the Forecasting Accuracy of Alternative Nominal Exchange Rate Models: The Case of Long Memory

August 2006 · 29 Reads

This paper presents an autoregressive fractionally integrated moving-average (ARFIMA) model of nominal exchange rates and compares its forecasting capability with the monetary structural models and the random walk model. Monthly observations are used for Canada, France, Germany, Italy, Japan and the United Kingdom for the period of April 1973 through December 1998. The estimation method is Sowell's (1992) exact maximum likelihood estimation. The forecasting accuracy of the long-memory model is formally compared to the random walk and the monetary models, using the recently developed Harvey, Leybourne and Newbold (1997) test statistics. The results show that the long-memory model is more efficient than the random walk model in steps-ahead forecasts beyond 1 month for most currencies and more efficient than the monetary models in multi-step-ahead forecasts. This new finding strongly suggests that the long-memory model of nominal exchange rates be studied as a viable alternative to the conventional models.  Copyright © 2006 John Wiley & Sons, Ltd.
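
For intuition, a brief sketch of the fractional differencing filter underlying an ARFIMA(p, d, q) model; the recursion for the (1 - L)^d weights is standard, and the code is illustrative rather than the authors' implementation:

```python
# Sketch: weights of the fractional differencing operator (1 - L)^d,
# via the standard recursion w_0 = 1, w_k = w_{k-1} * (k - 1 - d) / k.
import numpy as np

def frac_diff_weights(d, n):
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 1 - d) / k
    return w

# For 0 < d < 0.5 the weights decay hyperbolically rather than
# geometrically, so distant observations retain influence -- the
# long-memory property exploited for the exchange rate series.
print(frac_diff_weights(0.4, 6))
```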

Regional econometric income forecast accuracy

August 2005 · 13 Reads

Econometric prediction accuracy for personal income forecasts is examined for a region of the United States. Previously published regional structural equation model (RSEM) forecasts exist ex ante for the state of New Mexico and its three largest metropolitan statistical areas: Albuquerque, Las Cruces and Santa Fe. Quarterly data between 1983 and 2000 are utilized at the state level. For Albuquerque, annual data from 1983 through 1999 are used. For Las Cruces and Santa Fe, annual data from 1990 through 1999 are employed. Univariate time series, vector autoregressions and random walks are used as the comparison criteria against structural equation simulations. Results indicate that ex ante RSEM forecasts achieved higher accuracy than those simulations associated with univariate ARIMA and random walk benchmarks for the state of New Mexico. The track records of the structural econometric models for Albuquerque, Las Cruces and Santa Fe are less impressive. In some cases, VAR benchmarks prove more reliable than RSEM income forecasts. In other cases, the RSEM forecasts are less accurate than random walk alternatives. Copyright © 2005 John Wiley & Sons, Ltd.

Evaluating the Predictive Accuracy of Volatility Models

February 2001 · 190 Reads

Standard statistical loss functions, such as mean-squared error, are commonly used for evaluating financial volatility forecasts. In this paper, an alternative evaluation framework, based on probability scoring rules that can be more closely tailored to a forecast user's decision problem, is proposed. According to the decision at hand, the user specifies the economic events to be forecast, the scoring rule with which to evaluate these probability forecasts, and the subsets of the forecasts of particular interest. The volatility forecasts from a model are then transformed into probability forecasts of the relevant events and evaluated using the selected scoring rule and calibration tests. An empirical example using exchange rate data illustrates the framework and confirms that the choice of loss function directly affects the forecast evaluation results. Copyright © 2001 by John Wiley & Sons, Ltd.
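
An illustrative sketch of the framework's first two steps under simple assumptions (a zero-mean normal return density and the Brier score as the chosen scoring rule; both are example choices, not prescribed by the paper):

```python
# Sketch: volatility forecasts -> event-probability forecasts -> Brier score.
# The normal density and the event definition are example choices.
import numpy as np
from scipy import stats

def event_prob(sigma, threshold):
    """P(|return| > threshold) under a zero-mean normal with s.d. sigma."""
    return 2.0 * (1.0 - stats.norm.cdf(threshold / sigma))

def brier_score(p, outcome):
    """Mean squared error of probability forecasts against 0/1 outcomes."""
    return np.mean((p - outcome) ** 2)

rng = np.random.default_rng(2)
sigma = np.full(500, 0.6)                    # hypothetical volatility forecasts
r = rng.normal(scale=0.6, size=500)          # realized returns
p = event_prob(sigma, threshold=1.0)         # forecast P(|r| > 1)
print(brier_score(p, (np.abs(r) > 1.0).astype(float)))
```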

Comparing the accuracy of density forecasts from competing models

December 2004 · 54 Reads

A rapidly growing literature emphasizes the importance of evaluating the forecast accuracy of empirical models on the basis of density (as opposed to point) forecasting performance. We propose a test statistic for the null hypothesis that two competing models have equal density forecast accuracy. Monte Carlo simulations suggest that the test, which has a known limiting distribution, displays satisfactory size and power properties. The use of the test is illustrated with an application to exchange rate forecasting. Copyright © 2004 John Wiley & Sons, Ltd.

Market risk management of banks: implications from the accuracy of Value-at-Risk forecasts

January 2003 · 98 Reads

This paper adopts the backtesting criteria of the Basle Committee to compare the performance of a number of simple Value-at-Risk (VaR) models. These criteria provide a new standard of forecasting accuracy. Currently, central banks in major money centres, under the auspices of the Basle Committee at the Bank for International Settlements, adopt the VaR system to evaluate the market risk of their supervised banks. Banks are required to report the VaRs from their internal models to bank regulators, and these models must comply with the Basle backtesting criteria. If a bank fails the VaR backtesting, higher capital requirements will be imposed. VaR is a function of volatility forecasts. Past studies mostly conclude that ARCH and GARCH models provide better volatility forecasts. However, this paper finds that ARCH- and GARCH-based VaR models consistently fail to meet the Basle backtesting criteria. These findings suggest that using ARCH- and GARCH-based models to forecast VaR is not a reliable way to manage a bank's market risk. Copyright © 2002 John Wiley & Sons, Ltd.
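
For reference, a minimal sketch of the Basle 'traffic light' backtest for a 99% VaR model over the standard 250-day window; the zone boundaries (green 0-4 exceptions, yellow 5-9, red 10 or more) follow the Basle Committee's published criteria, while the function name and interface are hypothetical:

```python
# Sketch: Basle 'traffic light' backtest for a 99% VaR model (250 days).
# Function name and interface are hypothetical.
import numpy as np

def basle_zone(returns, var_forecasts):
    """returns and var_forecasts: length-250 arrays, with VaR reported as
    a positive loss threshold. Counts days on which the loss exceeds VaR."""
    exceptions = int(np.sum(-returns > var_forecasts))
    if exceptions <= 4:
        return exceptions, "green"          # model accepted as accurate
    if exceptions <= 9:
        return exceptions, "yellow"         # capital multiplier add-on applies
    return exceptions, "red"                # model rejected; higher capital
```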

Evaluating forecasts: A look at aggregate bias and accuracy measures

September 2005 · 204 Reads

In this paper an investigation is made of the properties and use of two aggregate measures of forecast bias and accuracy. These are metrics used in business to calculate aggregate forecasting performance for a family (group) of products. We find that the aggregate measures are not particularly informative if some of the one-step-ahead forecasts are biased, as is likely in practice when frequently employed forecasting methods are used to generate a large number of individual forecasts. Examples are constructed to illustrate some potential problems in the use of the metrics. We propose a simple graphical display of forecast bias and accuracy, including boxplots of measures of individual forecasting success, to supplement the information yielded by the aggregate measures. This tool is simple but helpful, as the graphical display can reveal forecast deterioration that is masked by one or both of the aggregate metrics. The procedures are illustrated with data representing sales of food items. Copyright © 2005 John Wiley & Sons, Ltd.
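
A brief sketch of family-level metrics of this kind (exact business definitions vary; the formulas below are common choices assumed for illustration, not necessarily the paper's), including the masking effect the paper warns about:

```python
# Sketch: family-level bias and accuracy metrics of the kind discussed.
# Exact business definitions vary; these are common choices, assumed here.
import numpy as np

def aggregate_bias(forecast, actual):
    """Signed total error relative to total actuals for a product family."""
    return (forecast - actual).sum() / actual.sum()

def aggregate_accuracy(forecast, actual):
    """Weighted MAPE: total absolute error relative to total actuals."""
    return np.abs(forecast - actual).sum() / actual.sum()

# Offsetting item-level biases can hide problems at the aggregate level:
f = np.array([120.0, 80.0])       # one item over-forecast, one under-forecast
a = np.array([100.0, 100.0])
print(aggregate_bias(f, a))       # 0.0 -- looks unbiased in aggregate
print(aggregate_accuracy(f, a))   # 0.2 -- yet 20% absolute error remains
```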
