Article
PDF available

Abstract

This article examines whether decomposing time series data into two parts - level and change - produces forecasts that are more accurate than those from forecasting the aggregate directly. Prior research found that, in general, decomposition reduced forecasting errors by 35%. An earlier study on decomposition into level and change found a forecast error reduction of 23%. The current study found that nowcasts, consisting of a simple average of estimates from preliminary surveys and econometric models of the U.S. lodging market, improved the accuracy of final estimates of levels. Forecasts of change from an econometric model, combined with the improved nowcasts, reduced forecast errors by 29% compared to direct forecasts of the aggregate. Forecasts of change from an extrapolation model, combined with the improved nowcasts, reduced forecast errors by 45%. On average, then, the error reduction for this study was 37%.
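To make the procedure concrete, here is a minimal Python sketch of the level-and-change decomposition described above: the current level is nowcast as a simple average of a preliminary survey estimate and an econometric estimate, and a separately forecast change is then added. The function names and figures are illustrative assumptions, not values from the study.

```python
# Minimal sketch of forecasting by decomposing into level and change.
# All names and numbers here are illustrative, not data from the study.

def nowcast_level(survey_estimate: float, econometric_estimate: float) -> float:
    """Nowcast the current level as a simple (equal-weights) average of a
    preliminary survey estimate and an econometric model estimate."""
    return (survey_estimate + econometric_estimate) / 2.0

def decomposition_forecast(survey_estimate: float,
                           econometric_estimate: float,
                           forecast_change: float) -> float:
    """Final forecast = nowcast of the level + separately forecast change."""
    return nowcast_level(survey_estimate, econometric_estimate) + forecast_change

# Hypothetical lodging-market figures (e.g., millions of room-nights).
survey, econometric = 102.0, 98.0   # preliminary estimates of the current level
change_from_model = 3.5             # change forecast from an econometric or extrapolation model
print(decomposition_forecast(survey, econometric, change_from_model))  # 103.5
```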
... According to a survey carried out by Armstrong, Green, and Graefe (2015), decomposition leads to more accurate results in time series predictions. Many individual empirical studies affirm the results of this survey (Tessier and Armstrong 2015; Tol, Pacala, and Socolow 2009; Prema and Rao 2015; Tang et al. 2015; Ang and Zhang 2000). The rest of this study is organised as follows: Section 2 describes the empirical model and methodology. ...
Article
This study re-tests the environmental Kuznets curve (EKC) hypothesis for the US, based on a methodology that differentiates this study from previous empirical studies. To this aim, the per-capita income series (variable) is decomposed into its increases and decreases as two new time series, and only the series that contains income increases is used. The rationale of this decomposition method is that the EKC hypothesis was originally postulated based on the impacts of income increases on environmental degradation. Therefore, this decomposition may allow us to test the EKC hypothesis more accurately through only income increases, in accordance with its original postulation. Following decomposition, the ARDL approach to cointegration is applied between 1990M1 and 2019M7. The empirical findings of the decomposed and undecomposed models are exactly opposite to each other. While the undecomposed model does not detect evidence of the EKC hypothesis for the US, the decomposed model strongly does. This can lead to the interpretation that the decomposed model detects the existing but concealed validity of the EKC hypothesis, which the undecomposed model is not capable of detecting. Based on this result, this study proposes this decomposition as an alternative technique for testing the EKC hypothesis.
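The abstract does not spell out how the two new series are constructed, but a common way to decompose a series into its increases and decreases is to accumulate the positive and negative first differences separately (as in partial-sum, nonlinear-ARDL style decompositions). The sketch below illustrates that construction; whether it matches the paper's exact procedure is an assumption.

```python
# Sketch of decomposing a series into cumulative increases and decreases,
# using partial sums of positive and negative changes. Whether this matches
# the paper's exact procedure is an assumption.
from typing import List, Tuple

def decompose_increases_decreases(y: List[float]) -> Tuple[List[float], List[float]]:
    pos, neg = [0.0], [0.0]
    for prev, curr in zip(y, y[1:]):
        d = curr - prev
        pos.append(pos[-1] + max(d, 0.0))   # accumulates only increases
        neg.append(neg[-1] + min(d, 0.0))   # accumulates only decreases
    return pos, neg

income = [100.0, 103.0, 101.0, 105.0, 104.0]
increases, decreases = decompose_increases_decreases(income)
print(increases)  # [0.0, 3.0, 3.0, 7.0, 7.0]
print(decreases)  # [0.0, 0.0, -2.0, -2.0, -3.0]
```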
... The errors of six-year-ahead ex-ante forecasts from rule-based forecasting were 42% smaller than those from equal-weights combinations. Other sources of prior knowledge can be used, such as decomposition of time-series by level and change (Tessier and Armstrong, 2015) and by causal forces (Armstrong and Collopy, 1993). ...
Article
Purpose: Commentary on M4-Competition and findings to assess the contribution of data models—such as from machine learning methods—to improving forecast accuracy. Methods: (1) Use prior knowledge on the relative accuracy of forecasts from validated forecasting methods to assess the M4 findings. (2) Use prior knowledge on forecasting principles and the scientific method to assess whether data models can be expected to improve accuracy relative to forecasts from previously validated methods under any conditions. Findings: Prior knowledge from experimental research is supported by the M4 findings that simple validated methods provided forecasts that are: (1) typically more accurate than those from complex and costly methods; (2) considerably more accurate than those from data models. Limitations: Conclusions were limited by incomplete hypotheses from prior knowledge such as would have permitted experimental tests of which methods, and which individual models, would be most accurate under which conditions. Implications: Data models should not be used for forecasting under any conditions. Forecasters interested in situations where much relevant data are available should use knowledge models.
... For six-year-ahead forecasts, the ex ante forecasts provided a 42% error reduction compared to those from equal-weights combinations. Other sources of prior knowledge that can be used include decomposition of time-series by level and change (Tessier and Armstrong, 2015), and by causal forces (Armstrong & Collopy, 1993). ...
Preprint
Full-text available
In the mid-1900s, there were two streams of thought about forecasting methods. One stream, led by econometricians, was concerned with developing causal models by using prior knowledge and evidence from experiments. The other was led by statisticians, who were concerned with identifying idealized "data generating processes" and with developing models from statistical relationships in data, both in the expectation that the resulting models would provide accurate forecasts. At that time, regression analysis was a costly process. In more recent times, regression analysis and related techniques have become simple and inexpensive to use. That development led to automated procedures such as stepwise regression, which selects "predictor variables" on the basis of statistical significance. An early response to the development was titled "Alchemy in the behavioral sciences" (Einhorn, 1972). We refer to the product of data-driven approaches to forecasting as "data models." The M4-Competition (Makridakis, Spiliotis, and Assimakopoulos, 2018) has provided extensive tests of whether data models, which they refer to as "ML methods," can provide accurate extrapolation forecasts of time series. The Competition findings revealed that data models failed to beat naïve models and established simple methods with sufficient reliability to be of any practical interest to forecasters. In particular, the authors concluded from their analysis, "The six pure ML methods that were submitted in the M4 all performed poorly, with none of them being more accurate than Comb and only one being more accurate than Naïve2" (p. 803). Over the past half-century, much has been learned about how to improve forecasting by conducting experiments to compare the performance of reasonable alternative methods. On the other hand, despite billions of dollars of expenditure, the various data modeling methods have not contributed to improving forecast accuracy. Nor can they do so, as we explain below.
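For readers unfamiliar with the benchmarks named in the quotation, Naïve2 is essentially a naïve (last-value) forecast applied to seasonally adjusted data and then reseasonalized. The sketch below illustrates the idea with simple multiplicative seasonal indices; the M-competition implementations differ in details (such as a seasonality test), so treat this as an illustration only.

```python
# Rough sketch of a Naive2-style benchmark: deseasonalize, carry the last
# value forward, then reseasonalize. Not the exact M4 implementation.
from typing import List

def seasonal_indices(y: List[float], m: int) -> List[float]:
    # crude multiplicative indices: mean of each season / overall mean
    overall = sum(y) / len(y)
    return [(sum(y[s::m]) / len(y[s::m])) / overall for s in range(m)]

def naive2_forecast(y: List[float], m: int, h: int) -> List[float]:
    idx = seasonal_indices(y, m)
    deseasonalized_last = y[-1] / idx[(len(y) - 1) % m]   # last observation, seasonally adjusted
    return [deseasonalized_last * idx[(len(y) + k) % m] for k in range(h)]

quarterly = [10.0, 14.0, 9.0, 12.0, 11.0, 15.0, 10.0, 13.0]
print(naive2_forecast(quarterly, m=4, h=4))
```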
... One approach that is useful when the most recent data are uncertain or liable to subsequent revision is to forecast the starting level and trend separately, and then add them, a procedure called "nowcasting." Three comparative studies found that, on average, nowcasting reduced errors for short-range forecasts by 37% (Tessier and Armstrong 2015). ...
Article
Full-text available
Problem: How to help practitioners, academics, and decision makers use experimental research findings to substantially reduce forecast errors for all types of forecasting problems. Methods: Findings from our review of forecasting experiments were used to identify methods and principles that lead to accurate forecasts. Cited authors were contacted to verify that summaries of their research were correct. Checklists to help forecasters and their clients undertake and commission studies that adhere to principles and use valid methods were developed. Leading researchers were asked to identify errors of omission or commission in the analyses and summaries of research findings. Findings: Forecast accuracy can be improved by using one of 15 relatively simple evidence-based forecasting methods. One of those methods, knowledge models, provides substantial improvements in accuracy when causal knowledge is good. On the other hand, data models – developed using multiple regression, data mining, neural nets, and “big data analytics” – are unsuited for forecasting. Originality: Three new checklists for choosing validated methods, developing knowledge models, and assessing uncertainty are presented. A fourth checklist, based on the Golden Rule of Forecasting, was improved. Usefulness: Combining forecasts within individual methods and across different methods can reduce forecast errors by as much as 50%. Forecast errors from currently used methods can be reduced by increasing their compliance with the principles of conservatism (Golden Rule of Forecasting) and simplicity (Occam’s Razor). Clients and other interested parties can use the checklists to determine whether forecasts were derived using evidence-based procedures and can, therefore, be trusted for making decisions. Scientists can use the checklists to devise tests of the predictive validity of their findings.
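As an illustration of the combining result mentioned above, an equal-weights combination is simply the average of forecasts of the same quantity produced by different methods; the function name and figures below are hypothetical.

```python
# Equal-weights combining: average forecasts of the same quantity
# produced by different methods. Numbers are hypothetical.
from typing import List

def combine_equal_weights(forecasts: List[float]) -> float:
    return sum(forecasts) / len(forecasts)

# e.g., forecasts from an extrapolation model, a knowledge model, and expert judgment
print(combine_equal_weights([120.0, 132.0, 126.0]))  # 126.0
```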
... One approach that is useful when the most recent data are uncertain or liable to subsequent revision is to forecast the starting level and trend separately, and then add them, a procedure called "nowcasting." Three comparative studies found that, on average, nowcasting reduced errors for short-range forecasts by 37% (Tessier and Armstrong 2015). ...
Working Paper
Full-text available
Problem: How to help practitioners, academics, and decision makers use experimental research findings to substantially reduce forecast errors for all types of forecasting problems. Methods: Findings from our review of forecasting experiments were used to identify methods and principles that lead to accurate forecasts. Cited authors were contacted to verify that summaries of their research were correct. Checklists to help forecasters and their clients undertake and commission studies that adhere to principles and use valid methods were developed. Leading researchers were asked to identify errors of omission or commission in the analyses and summaries of research findings. Findings: Forecast accuracy can be improved by using one of 15 relatively simple evidence-based forecasting methods. One of those methods, knowledge models, provides substantial improvements in accuracy when causal knowledge is good. On the other hand, data models—developed using multiple regression, data mining, neural nets, and “big data analytics”—are unsuited for forecasting. Originality: Three new checklists for choosing validated methods, developing knowledge models, and assessing uncertainty are presented. A fourth checklist, based on the Golden Rule of Forecasting, was improved. Usefulness: Combining forecasts within individual methods and across different methods can reduce forecast errors by as much as 50%. Forecast errors from currently used methods can be reduced by increasing their compliance with the principles of conservatism (Golden Rule of Forecasting) and simplicity (Occam’s Razor). Clients and other interested parties can use the checklists to determine whether forecasts were derived using evidence-based procedures and can, therefore, be trusted for making decisions. Scientists can use the checklists to devise tests of the predictive validity of their findings. Key words: combining forecasts, data models, decomposition, equalizing, expectations, extrapolation, knowledge models, intentions, Occam’s razor, prediction intervals, predictive validity, regression analysis, uncertainty
... Their model generates accurate predictions for these three variables. Tessier and Armstrong (2015) examined whether decomposing time series data into two parts - level and change - produces forecasts that are more accurate than those from forecasting the aggregate directly. Tessier and Armstrong found that on average the error reduction in their study was 37%. ...
Article
Full-text available
A common phenomenon that decreases the accuracy of time series forecasting is the existence of change points in the data. This paper presents a method for time series forecasting with the possibility of a change point in the distribution of observations. The proposed method uses change point techniques to detect and estimate change points, and to improve the forecasting process by taking change points into account. The method can be applied to both stationary series and linear trend series. Change point analysis prevents the omission of relevant data as well as the forecasting that may be based on irrelevant data. The study concludes that change point techniques may increase the accuracy of forecasts, as is demonstrated in the real case study presented in this paper.
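The sketch below illustrates the general idea: detect a change point, then base the forecast only on post-change observations. The detection rule used here (choose the split that maximizes the difference in segment means) is a simple stand-in rather than the estimator used in the paper, and the series is made up.

```python
# Sketch: detect a single change in mean, then forecast from the
# post-change segment only. Simple stand-in detection rule.
from typing import List

def estimate_change_point(y: List[float]) -> int:
    best_k, best_score = 1, -1.0
    for k in range(1, len(y) - 1):
        left, right = y[:k], y[k:]
        score = abs(sum(left) / len(left) - sum(right) / len(right))
        if score > best_score:
            best_k, best_score = k, score
    return best_k

def mean_forecast_after_change(y: List[float]) -> float:
    k = estimate_change_point(y)
    post = y[k:]
    return sum(post) / len(post)   # forecast = mean of the post-change segment

series = [5.0, 5.2, 4.9, 5.1, 8.0, 8.2, 7.9, 8.1]
print(estimate_change_point(series))       # 4
print(mean_forecast_after_change(series))  # 8.05
```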
... The MAPE was reduced by 29% when the current level was based on a combination of survey data and the econometric forecast. Another test, done with forecasts from an extrapolation model, found that the MAPE was reduced by 45% (Tessier & Armstrong, 2015, in this issue). Multiplicative decomposition involves dividing the problem into elements that can be forecast and then multiplied. ...
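As a small illustration of multiplicative decomposition, a revenue forecast might be built by forecasting market size, market share, and price separately and multiplying them; the element names and values below are hypothetical.

```python
# Multiplicative decomposition: forecast the elements separately,
# then multiply. Element names and figures are hypothetical.
def revenue_forecast(market_size: float, market_share: float, price: float) -> float:
    return market_size * market_share * price  # units sold in market * our share * price per unit

print(revenue_forecast(market_size=1_000_000, market_share=0.12, price=80.0))  # 9600000.0
```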
Article
Full-text available
This article proposes a unifying theory, or Golden Rule, of forecasting. The Golden Rule of Forecasting is to be conservative. A conservative forecast is consistent with cumulative knowledge about the present and the past. To be conservative, forecasters must seek out and use all knowledge relevant to the problem, including knowledge of methods validated for the situation. Twenty-eight guidelines are logically deduced from the Golden Rule. A review of evidence identified 105 papers with experimental comparisons; 102 support the guidelines. Ignoring a single guideline increased forecast error by more than two-fifths on average. Ignoring the Golden Rule is likely to harm accuracy most when the situation is uncertain and complex, and when bias is likely. Non-experts who use the Golden Rule can identify dubious forecasts quickly and inexpensively. To date, ignorance of research findings, bias, sophisticated statistical procedures, and the proliferation of big data have led forecasters to violate the Golden Rule. As a result, despite major advances in evidence-based forecasting methods, forecasting practice in many fields has failed to improve over the past half-century.
... Combining nowcasting with trend forecasting is an old idea that does not appear to be widely used, and comparative tests are few. Nevertheless, the two studies described in Tessier & Armstrong (2015, in this issue) suggest that substantial error reduction is possible. Transforming variables can help to avoid complexity in a model. ...
Article
Full-text available
This article introduces the Special Issue on simple versus complex methods in forecasting. Simplicity in forecasting requires that (1) method, (2) representation of cumulative knowledge, (3) relationships in models, and (4) relationships among models, forecasts, and decisions are all sufficiently uncomplicated as to be easily understood by decision-makers. Our review of studies comparing simple and complex methods—including those in this special issue—found 97 comparisons in 32 papers. None of the papers provide a balance of evidence that complexity improves forecast accuracy. Complexity increases forecast error by 27 percent on average in the 25 papers with quantitative comparisons. The finding is consistent with prior research to identify valid forecasting methods: all 22 previously identified evidence-based forecasting procedures are simple. Nevertheless, complexity remains popular among researchers, forecasters, and clients. Some evidence suggests that the popularity of complexity may be due to incentives: (1) researchers are rewarded for publishing in highly ranked journals, which favor complexity; (2) forecasters can use complex methods to provide forecasts that support decision-makers’ plans; and (3) forecasters’ clients may be reassured by incomprehensibility. Clients who prefer accuracy should accept forecasts only from simple evidence-based procedures. They can rate the simplicity of forecasters’ procedures using the questionnaire at simple-forecasting.com.
Article
Full-text available
As Big Data has undergone a transition from being an emerging topic to a growing research area, it has become necessary to classify the different types of research and examine the general trends of this research area. This should allow potential research areas for future investigation to be identified. This paper reviews the literature on ‘Big Data and supply chain management (SCM)’, dating back to 2006, and provides a thorough insight into the field by using the techniques of bibliometric and network analyses. We evaluate 286 articles published in the past 10 years and identify the top contributing authors, countries and key research topics. Furthermore, we obtain and compare the most influential works based on citations and PageRank. Finally, we identify and propose six research clusters in which scholars could be encouraged to expand Big Data research in SCM. We contribute to the literature on Big Data by discussing the challenges of current research, but more importantly, by identifying and proposing these six research clusters and future research directions. Finally, we offer managers different schools of thought to enable them to harness the benefits from using Big Data and analytics for SCM in their everyday work.
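As an illustration of the citation-count and PageRank comparison described above, the sketch below ranks papers in a toy citation network. It assumes the third-party networkx library, and the edge list is invented, with an edge A -> B meaning "A cites B".

```python
# Toy illustration: rank papers by raw citation counts and by PageRank.
# Assumes the networkx library; the citation edges are invented.
import networkx as nx

citations = [("P1", "P3"), ("P2", "P3"), ("P4", "P3"), ("P3", "P5"), ("P4", "P5")]
g = nx.DiGraph(citations)                  # edge (A, B) means "A cites B"

citation_counts = dict(g.in_degree())      # how often each paper is cited
influence = nx.pagerank(g, alpha=0.85)     # citations from influential papers count more

print(sorted(citation_counts.items(), key=lambda kv: kv[1], reverse=True))
print(sorted(influence.items(), key=lambda kv: kv[1], reverse=True))
```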
Article
Full-text available
The usual procedure for developing linear models to predict any kind of target variable is to identify a subset of most important predictors and to estimate weights that provide the best possible solution for a given sample. The resulting “optimally” weighted linear composite is then used when predicting new data. This approach is useful in situations with large and reliable datasets and few predictor variables. However, a large body of analytical and empirical evidence since the 1970s shows that such optimal variable weights are of little, if any, value in situations with small and noisy datasets and a large number of predictor variables. In such situations, which are common for social science problems, including all relevant variables is more important than their weighting. These findings have yet to impact many fields. This study uses data from nine U.S. election-forecasting models whose vote-share forecasts are regularly published in academic journals to demonstrate the value of (a) weighting all predictors equally and (b) including all relevant variables in the model. Across the ten elections from 1976 to 2012, equally weighted predictors yielded a lower forecast error than regression weights for six of the nine models. On average, the error of the equal-weights models was 5% lower than the error of the original regression models. An equal-weights model that uses all 27 variables that are included in the nine models missed the final vote-share results of the ten elections on average by only 1.3 percentage points. This error is 48% lower than the error of the typical, and 29% lower than the error of the most accurate, regression model.
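The sketch below illustrates the equal-weighting idea: standardize each predictor (oriented so that larger values imply a larger vote share) and average the standardized scores, rather than estimating regression weights. The variable names and values are hypothetical, not the 27 variables from the nine models; in practice the composite would then be related to vote share, for example through a simple bivariate regression.

```python
# Equal-weights (unit-weighting) composite of standardized predictors.
# Predictor names and values are hypothetical.
from statistics import mean, pstdev
from typing import Dict, List

def equal_weights_composite(predictors: Dict[str, List[float]]) -> List[float]:
    n = len(next(iter(predictors.values())))
    composite = [0.0] * n
    for values in predictors.values():
        mu, sigma = mean(values), pstdev(values)
        z = [(v - mu) / sigma for v in values]            # standardize so units do not matter
        composite = [c + zi / len(predictors) for c, zi in zip(composite, z)]
    return composite                                      # average z-score across predictors

elections = {
    "gdp_growth":   [2.1, 0.5, 3.0, 1.2],     # oriented: higher favors the incumbent
    "net_approval": [10.0, -5.0, 15.0, 2.0],
}
print(equal_weights_composite(elections))
```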
Article
Full-text available
The following hypotheses about long-range market forecasting were examined: H1: Objective methods provide more accuracy than do subjective methods. H2: The relative advantage of objective over subjective methods increases as the amount of change in the environment increases. H3: Causal methods provide more accuracy than do naïve methods. H4: The relative advantage of causal over naïve methods increases as the amount of change in the environment increases. Support for these hypotheses was then obtained from the literature and from a study of a single market. The study used three different models to make ex ante forecasts of the U.S. air travel market from 1963 through 1968. These hypotheses imply that econometric methods are more accurate for long-range market forecasting than are the major alternatives, expert judgment and extrapolation, and that the relative superiority of econometric methods increases as the time span of the forecast increases.
Article
Full-text available
With more and more firms contemplating expansion in the international market, the question of how a firm estimates its sales potential in a given country takes on increasing importance. Certainly one vital piece of information in estimating sales potential would be the size of the total current market in that country. This article considers the various ways in which firms might estimate market size by country, with particular consideration given to the use of econometric models. The article aims at three related questions. First, what has happened over the past thirty years in the use of econometric models for measuring geographical markets? Second, is it possible to demonstrate that currently available econometric techniques lead to “improved” measurement of geographical markets—and, in particular, for international markets? Finally, have advances in applied econometric analysis over the past thirty years led to any demonstrable progress in measuring geographical markets?
Article
This paper presents evidence on the role that judgmental adjustments play in macroeconomic forecast accuracy. It starts by contrasting the predictive records of four prominent forecasters who adjust their models with those of three models that are used mechanically. The adjusted forecasts tend to be more accurate overall, although important exceptions can be found. Next the article compares adjusted forecasts with those generated mechanically by the same model. Again, with some significant exceptions, judgmental adjustments improve accuracy more often than not. The article closes by considering whether macroeconomic forecasters should place more or less emphasis on their adjustments relative to their models. It finds a clear tendency for modelers to overadjust their models, illustrating what prominent psychologists have termed “the major error of intuitive prediction”. In short, model builders should not hesitate to adjust their models to offset their limitations but should also guard against the tendency to overestimate the value of their personal insights.