Golden Rule of Forecasting Rearticulated:
Forecast Unto Others as You Would Have Them Forecast Unto You
Kesten C. Green, J. Scott Armstrong, and Andreas Graefe
April 2015
Abstract
The Golden Rule of Forecasting is a general rule that applies to all forecasting problems.
The Rule was developed using logic and was tested against evidence from previously
published comparison studies. The evidence suggests that a single violation of the Golden
Rule is likely to increase forecast error by 44 percent. Some commentators argue that the
Rule is not generally applicable, but do not challenge the logic or evidence provided. While
further research might provide useful findings, available evidence justifies adopting the Rule
now. People with no prior training in forecasting can obtain the substantial benefits of
following the Golden Rule by using the Checklist to identify biased and unscientific forecasts
at little cost.
Keywords: cost benefit analysis, index method, legal damage claims, precautionary
principle, principal components, take-the-best.
This reply to commentators on the paper “Golden Rule of Forecasting: Be conservative” (see
GoldenRuleofForecasting.com) is forthcoming with that paper in a special issue of Journal
of Business Research on the subject of simplicity versus complexity in forecasting. This
working paper version is available from
http://www.kestencgreen.com/GoldenRuleReply.pdf.
Acknowledgements: Paul Goodwin and Robert Fildes provided reviews. Hester Green,
Jen Kwok, and Lynn Selhat edited the paper. The commentators reviewed our descriptions of
their commentaries and suggested useful changes.
Contact information: Kesten C. Green, University of South Australia Business School,
and Ehrenberg-Bass Institute, GPO Box 2471, Adelaide, SA 5001, Australia;
kesten.green@unisa.edu.au. J. Scott Armstrong, The Wharton School, University of
Pennsylvania, 700 Huntsman Hall, 3730 Walnut Street, Philadelphia, PA 19104, U.S.A., and
Ehrenberg-Bass Institute, Adelaide; armstrong@wharton.upenn.edu. Andreas Graefe,
Department of Communication Science and Media Research, LMU Munich, Germany;
a.graefe@lmu.de.
Introduction
In our article (Armstrong, Green, and Graefe this issue), we propose the Golden Rule of
Forecasting—“the Golden Rule” hereafter—as a unifying forecasting theory. The theory
asserts that conservative forecasts will be less biased and more accurate than those that are
not conservative. A conservative forecast is one that draws upon, and is consistent with, all
relevant and important knowledge about the situation and forecasting methods. Operational
guidelines are provided to help forecasters implement the Golden Rule and to help forecast
users assess the validity of forecasts.
Proposing a simple unifying theory for the broad and diverse field of forecasting is both
ambitious and controversial, so challenges to the theory are expected and welcome. To that
end, we are fortunate to have published, along with our article, four thoughtful commentaries
from leading forecasting researchers. In addition, the commentators provided suggestions
that led to major improvements in the article.
Fildes and Petropoulos
In two applications that they describe, Fildes and Petropoulos (this issue; henceforth F&P)
suggest that following the Golden Rule may have produced less accurate forecasts than those
obtained in contravention of the Golden Rule. F&P ask whether following the Golden Rule
might lead to rejection of “a well-performing method” that has been validated for a given
situation. Our answer is that the Golden Rule requires a priori analysis of the conditions of
the forecasting problem. The method selection procedure F&P suggest is in accordance with
many of the Golden Rule guidelines. For example, damped trend forecasting using de-
seasonalized data—F&P’s DDamped—satisfies most of the relevant Golden Rule checklist
items. DDamped also performed best of all the methods that F&P tested and provided
forecasts that were more accurate than the next-best method—ARIMA—for all eight of the
classifications of time series by characteristics—segments—that F&P examined.
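To illustrate the kind of comparison involved, the sketch below contrasts a damped-trend exponential smoothing forecast with an ARIMA forecast on a synthetic series of our own construction. It is illustrative only: it omits the de-seasonalization step of F&P's DDamped, and the statsmodels calls reflect our assumption of that library's current interface rather than F&P's code.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.tsa.holtwinters import ExponentialSmoothing

rng = np.random.default_rng(1)
t = np.arange(120)
# Synthetic, non-seasonal series whose trend flattens out over time
y = 50 + 20 * (1 - np.exp(-t / 40)) + rng.normal(scale=1.0, size=t.size)
train, test = y[:108], y[108:]  # hold out the final 12 observations

# Damped additive trend (the parameter name "damped_trend" is used in
# recent statsmodels releases; older releases call it "damped")
damped = ExponentialSmoothing(train, trend="add", damped_trend=True).fit()
arima = ARIMA(train, order=(1, 1, 1)).fit()

mae = lambda f: float(np.mean(np.abs(test - np.asarray(f))))
print("Damped-trend MAE :", round(mae(damped.forecast(12)), 2))
print("ARIMA(1,1,1) MAE :", round(mae(arima.forecast(12)), 2))
```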
While F&P’s examples favor the Golden Rule, following the Golden Rule may not
improve forecast accuracy for every forecasting problem. One can, however, expect
improvement by doing so. The Golden Rule article provides only a first step in the
development of evidence-based guidelines for conservative forecasting: other guidelines and
conditions are surely possible.
F&P are right that further research could contribute useful evidence for guidelines that
currently lack evidence. In addition, further research might lead to more effective ways to
state the guidelines, and to the identification of the conditions under which the guidelines are
most effective.
F&P suggest additional studies that are relevant to the Golden Rule. In particular, they
suggest including Ord and Fildes (2013) in the tests of guideline 4.2. The suggestion is reasonable. Including that study would change the papers-for-versus-papers-against score from 102-to-3 to 102-to-4.
We expect that there are other relevant studies that are missing from the Golden Rule article.
Readers who are aware of omissions are welcome to forward their suggestions for posting on
GoldenRuleofForecasting.com.
F&P are also concerned with aspects of the guidelines on causal modeling, such as the
recommendation to use all variables that are important, which they regard as conflicting with
the thrust of the article, and this Special Issue, towards simplicity. While some researchers
have suggested that more variables mean greater complexity, our article argues that the number
of variables alone does not make for complexity. The Golden Rule Checklist provides
guidance on how to make use of knowledge on many variables in simple ways, and to
thereby avoid complexity.
On the topic of causal methods, F&P mention research on principal components—indexes
based on correlations among predictor variables—by Stock and Watson (2002). At first
glance, this approach might seem conservative in that it includes more information, which is
in line with Golden Rule Guideline 4.3. The approach, however, employs statistical rules
rather than causal knowledge and thus, uses less prior knowledge—which violates the
Golden Rule. Consistent with this, eight empirical comparisons found that the principal
components method harmed forecast accuracy (Armstrong 1985, pp. 223–225, 518, 580, 610,
628–629). The reasons Stock and Watson’s findings differ from other research on principal
components are unclear. We contacted the authors on two occasions, but were unable to
clarify: (1) whether their forecasts were ex ante, (2) whether they used successive updating,
(3) the number of forecasts in their ex ante test, (4) how the principal components were
forecasted, (5) why they omitted such competitive methods as equal-weights regression using
all of the variables incorporated in the principal components, or regression analyses using
variables based only on theory, and (6) why they used the mean square error, which had long
been shown to be unreliable for comparing forecasting methods (Armstrong and Collopy
1992).
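To make the contrast concrete, the sketch below compares a principal-components forecast with an equal-weights composite of a priori important variables on synthetic data of our own construction; it is illustrative only and is not a reconstruction of Stock and Watson's design or data.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
n, k = 200, 20
X = rng.normal(size=(n, k))
# Assume prior causal knowledge identifies the first five variables as important
y = X[:, :5].sum(axis=1) + rng.normal(scale=2.0, size=n)
X_tr, X_te, y_tr, y_te = X[:150], X[150:], y[:150], y[150:]

# (a) Regression on a few principal components of all twenty predictors,
#     chosen by statistical criteria rather than causal knowledge
pca = PCA(n_components=3).fit(X_tr)
pc_fit = LinearRegression().fit(pca.transform(X_tr), y_tr)
pc_pred = pc_fit.predict(pca.transform(X_te))

# (b) Equal-weights (unit-weight) composite of the a priori important variables,
#     with one coefficient to put the composite on the scale of the outcome
composite_tr = X_tr[:, :5].sum(axis=1, keepdims=True)
composite_te = X_te[:, :5].sum(axis=1, keepdims=True)
ew_pred = LinearRegression().fit(composite_tr, y_tr).predict(composite_te)

mae = lambda p: float(np.mean(np.abs(y_te - p)))
print("Principal components MAE   :", round(mae(pc_pred), 2))
print("Equal-weights composite MAE:", round(mae(ew_pred), 2))
```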
F&P are right to be disappointed with the failure of software providers to include
evidence-based forecasting procedures. Imagine the losses to the economy that flow from
poor sales forecasting. The situation might change if software users request that software
providers follow the Golden Rule.
Goodwin
Goodwin (this issue) is skeptical about the possibility of identifying a simple unifying
theory for the field of forecasting. Moreover, he suggests that the term “conservative” does
not properly describe the nature of the 28 Golden Rule guidelines.
Goodwin does not suggest an alternative term, however. The use of the term
“conservative” in the Golden Rule article does differ somewhat from that of the Oxford
English Dictionary, though it is consistent with at least some common usages of the term.
Specifically, conservative is used in the Golden Rule in the sense of adhering to cumulative
knowledge. Thus, following the Golden Rule helps to avoid conjecture and bias. Goodwin’s
commentary nevertheless inspired an alternative description for the Golden Rule, which
became the title of this response: “Forecast unto others as you would have them forecast unto
you.”
Goodwin is correct on the need for decision-makers to consider the costs and benefits of
implementing the various guidelines. Most of the guidelines should be inexpensive to
implement. Some, however, are not, especially the need to conduct a priori analyses to
identify all important knowledge. In other words, decision makers should consider what the
marginal net benefit of increased forecast accuracy is for the problem at hand.
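One simple way to frame that judgment (our notation, not Goodwin's) is to follow a guideline when the expected loss averted exceeds the cost of implementing it:

$$ L \times \Delta e_g \; > \; c_g, $$

where $\Delta e_g$ is the expected reduction in forecast error from following guideline $g$, $L$ is the loss attached to a unit of forecast error for the problem at hand, and $c_g$ is the cost of implementing the guideline.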
Goodwin suggests that further research should be done on the Golden Rule, especially
with respect to whether the Golden Rule applies to the estimates of prediction intervals and
to forecasts in the form of probability distributions. These suggestions are sensible, as is his
suggestion that more research would help by providing more evidence on the specific
conditions under which the Golden Rule is—and is not—effective in reducing forecast error.
Soyer and Hogarth
Soyer and Hogarth (this issue, henceforth S&H) suggest that there are several problems
that might hinder the use of the Golden Rule. One problem, they suggest, is that the checklist
does not provide sufficiently simple and specific instructions to be useful in practice. To
illustrate their point, they refer to item 1.1: “Use all important knowledge and information.”
That item is not, however, one of the guidelines—it is a heading for Guidelines 1.1.1 and
1.1.2 provided to show users the general organization of the guidelines. Nevertheless, they
make a fair point that further study would help to improve the description of the guidelines.
Another problem S&H propose is that some of the guidelines would be overly
burdensome to follow in practice, particularly the requirement to include all important
variables. Doing so involves using systematic and unbiased procedures to search the
literature, and to obtain information from heterogeneous experts. While the cost of following
the guidance can be high, the cost of not following it is likely to be much higher for
important projects. If forecasters choose to omit important information, they should fully
disclose what was omitted and explain why. For example, the Club of Rome’s 1972 The
Limits to Growth report employed a model with 1,000 equations to forecast that natural
resources would soon run out. Economists were quick to suggest that it would have been
helpful if the forecasters had included the prices of resources in their model. Had they done
so, their forecasts would not have been alarming—nor would they have provided the basis
for one of the best-selling environmentalist books in history.
S&H’s interpretation of research on the one-reason heuristic, which involves predicting
by using only the most important variable, is arguable. The heuristic provides a good forecast
if the forecaster knows which causal variable will be most important over the forecast
horizon, and if that variable’s effect exceeds that of all other variables combined. These
conditions are consistent with the Golden Rule, since the forecaster needs to have complete
information about which variables are important, and about the magnitudes of their effects.
One way to test the one-reason heuristic would be to compare its forecasts with those
from the index method. The index method involves obtaining evidence on causal factors by a
priori analysis. That is, the index method draws upon outside evidence, especially
experimental evidence, and does not estimate relationships from the data at hand. Thus, the
index method allows forecasters to use as many variables as theory and evidence show to be
important.
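As an illustration of such a comparison, the sketch below pits a single-cue rule against a unit-weighted index on synthetic data of our own construction, for a case in which no single variable's effect exceeds that of all the others combined; it is not a reanalysis of the published evidence on either method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Eight causal cues whose (unknown) importances taper off gradually, so the
# top cue does not outweigh the remaining cues combined
n_cues, n_pairs = 8, 2000
weights = np.linspace(1.0, 0.3, n_cues)
cues_a = rng.normal(size=(n_pairs, n_cues))
cues_b = rng.normal(size=(n_pairs, n_cues))
score_a = cues_a @ weights + rng.normal(size=n_pairs)
score_b = cues_b @ weights + rng.normal(size=n_pairs)
target = score_a > score_b  # which of the two options actually turns out better

one_reason = cues_a[:, 0] > cues_b[:, 0]               # decide on the top cue only
index_rule = cues_a.sum(axis=1) > cues_b.sum(axis=1)   # unit-weighted index of all cues

print("One-reason accuracy  :", round(float((one_reason == target).mean()), 3))
print("Index-method accuracy:", round(float((index_rule == target).mean()), 3))
```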
The ongoing efforts of S&H to improve ways of communicating forecasts so as to help
users interpret them are admirable. As their research shows, even leading experts in
econometrics have difficulty in interpreting the outputs from basic regression analyses (Soyer
and Hogarth 2012).
As S&H suggest, when forecasters fail to forecast improbable events, the consequences
for forecast users can be dire. Forecasts from regression analysis are susceptible to that risk
because regression models tend to exclude important variables due to lack of data and to lack
of historical variation in some causal variables. Using the index method instead reduces
the risk of failing to forecast an improbable outcome by including information about all
factors that are known to be important.
The proper role of a forecaster is to provide decision makers with expected values and
confidence intervals for relevant costs and benefits. In turn, rational decision makers should
avoid making judgmental adjustments based on their opinions about what unusual things
might happen. Indeed, based on the research to date, judgmental adjustments of objective
forecasts are likely to harm forecast accuracy (Armstrong, Green and Graefe, this issue,
Golden Rule checklist item 6).
Gardner
Gardner (this issue) discusses the slow adoption of the evidence-based forecasting
technique of damping. The extensive evidence on the value of damping has been largely
ignored in practice, despite the clarity of Gardner’s writing and his efforts to ensure that the
methods are freely available.
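For reference, a standard statement of the damped additive-trend forecast (a textbook formulation, not quoted from Gardner's commentary) is

$$\hat{y}_{t+h} \;=\; \ell_t + \left(\phi + \phi^2 + \cdots + \phi^h\right) b_t, \qquad 0 < \phi \le 1,$$

where $\ell_t$ is the estimated level, $b_t$ the estimated trend, and $\phi$ the damping parameter. As the horizon $h$ grows, the forecast approaches the finite limit $\ell_t + b_t\,\phi/(1-\phi)$ rather than extrapolating the trend without bound.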
Gardner’s research on trend damping represents one of the most important contributions
to extrapolation methods. Gardner expresses reservations, however, about three of the
Golden Rule guidelines that are intended to build on his work by distinguishing conditions
for damping.
Specifically, Gardner objects to the guidance on what to do when there is an inconsistency
between short- and long-term trends (3.3.4). He correctly notes that there is no comparative
research on this guideline. The guidelines were, however, developed as logical deductions
from the Golden Rule. The reasoning behind guideline 3.3.4 is that a long time-series
contains information not only on the recent trend, but also on cumulative knowledge about
the trend, and that cumulative knowledge should be taken into account when forecasting.
Gardner also has reservations about the guideline that advises being conservative when
the forecast horizon is longer than the historical data series (3.3.3). There was evidence from
only one comparative study, although the logic seems compelling. Surely, for example, one
should have little confidence in a 50-year-ahead forecast of dramatic change that was based
on only five years of data.
Finally, Gardner expresses reservations about adjustments based on expert knowledge of
causal forces (3.3.2), which includes the contrary-series rule. In that case, the logical
deduction of the guideline is supported by five comparative studies, which found that following the guideline reduced forecast error for both one-ahead and many-ahead forecasts, by 31 percent overall.
Gardner’s call for further research to define better the conditions under which damping is
most effective is sensible. Nevertheless, waiting for more evidence before following the
Golden Rule guidelines when making extrapolation forecasts is not justified. Logic and evidence support following the guidelines on extrapolation as currently described until further research suggests revisions.
Discussion: Implementation of evidence-based methods
Each of the commentators raises the issue of implementation. Their concerns are
reasonable. The implementation of available evidence-based methods in practice is the major
problem for forecasting. To some extent, this may be due to ignorance of the evidence-based
procedures. The problem is almost certainly due in large part to the folklore that experts are
able to make good judgmental forecasts even, or especially, about complex and uncertain
situations. Another reason is the political motivation to provide a forecast that will promote a
decision that the forecaster or the client favors.
The implementation problem is especially serious for the public sector. Without the
discipline of competition and market prices, public sector forecasting is particularly
vulnerable to bias in the direction of wish fulfillment. As a consequence, citizens and firms
are exposed to the risk of major losses due to poor forecasting of government spending
programs, taxation, subsidies, pension payments, provision of services, regulations, and wars.
To counter the incentives to bias forecasts, governments should require public policy
forecasters to follow the Golden Rule. For example, if governments follow the guideline to
provide full disclosure (1.3), the media and public interest groups will be empowered to
scrutinize and critique government forecasts using the Golden Rule checklist.
Another barrier to following the Golden Rule is the so-called precautionary principle. The
precautionary principle implies that forecasting has no role when the situation is highly
uncertain and it is easy to imagine catastrophic outcomes. The call is thus: Take action now;
the apocalypse might happen, so scientific forecasts do not apply. That view brings to mind
the slogan on the Ministry of Truth building in George Orwell’s 1984: “Ignorance is
Strength.”
The argument behind the precautionary principle confuses forecasting with decision-
making and planning. The forecaster’s role is to provide accurate unbiased forecasts about
the likelihood and effects of alternative events, and the effects of alternative actions,
including doing nothing. Accordingly, forecasters should rely on the Golden Rule. Decision
makers should use the forecasts in cost and benefit analyses, and then decide on appropriate
plans and actions.
The precautionary principle is illogical. For example, either extreme global warming or
extreme global cooling might have disastrous effects. Since each is possible, the
precautionary principle should require action to prevent both warming and cooling—efforts
that would work against each other if they worked at all. Extreme warming and extreme
cooling might each benefit many people; what the precautionary principle has to say on that
point is not clear. The precautionary principle is popular among interest groups, who can
propose potential catastrophes to suit their objectives, and politicians, who benefit from being
seen to do something. People are susceptible to being swayed by appeals to the precautionary
principle when they believe that other people will or should pay the cost of the proposed
precautions. In such situations, people typically ignore probabilities, as was shown in
experiments by Sunstein and Zeckhauser (2011). As S&H noted, the way that forecasts are
presented can have a strong influence on how they are used.
Another issue with implementation is that statisticians are often unaware of the evidence
underlying the Golden Rule and propose forecasting methods—such as data mining and step-
wise regression—that violate the Golden Rule. Clients who are interested in accurate and
unbiased forecasts can refer their forecasters to the Golden Rule checklist at
GoldenRuleofForecasting.com before they start their forecasting efforts, and ask them to
follow the guidelines.
The implementation of evidence-based procedures would probably be enhanced if there
were penalties for failures to use proper procedures. Lawyers should use the Golden Rule
checklist for cases where inaccurate forecasts have led to harm. By following the Golden
Rule, experts should adhere to the same forecasting procedures no matter which side they
represent. Since forecasts are always subject to uncertainty, the relevant test is whether the
forecasters followed proper procedures.
Another way to encourage the use of the Golden Rule is to include the guidelines in
forecasting software programs in the form of default options. The software could report
instances where the user overrides the guidelines.
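A minimal sketch of how such defaults and override reporting might look follows; the names and structure are hypothetical and do not describe any existing software package.

```python
from dataclasses import dataclass, field

# Hypothetical guideline-compliant defaults (labels are ours, for illustration)
DEFAULTS = {
    "damp_trend": True,            # damp trends when the future is uncertain
    "use_causal_knowledge": True,  # adjust for known causal forces
    "full_disclosure": True,       # disclose data and methods
}

@dataclass
class ForecastConfig:
    settings: dict = field(default_factory=lambda: dict(DEFAULTS))
    overrides: list = field(default_factory=list)

    def override(self, key: str, value, reason: str):
        """Permit a departure from a default, but record it for the forecast report."""
        self.overrides.append((key, self.settings.get(key), value, reason))
        self.settings[key] = value

config = ForecastConfig()
config.override("damp_trend", False, "client insists on a straight-line trend")
print(config.overrides)  # the report would list every departure from the defaults
```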
Conclusions
Logic and the empirical evidence to date support the Golden Rule of Forecasting’s status as a
general rule. The Rule was derived from evidence from all areas of forecasting, and it applies
across all fields and forecasting methods. That knowledge was used to develop the Golden Rule
checklist.
The Golden Rule is the antithesis of common antiscientific claims that scientific forecasting
does not apply because “this situation is different” or because “the outcome might be catastrophic.”
These claims of exceptionality encourage forecasters to ignore cumulative knowledge in order to
provide clients with forecasts that they prefer.
Further research to improve the Golden Rule checklist guidelines, identify new guidelines, and
learn more about the effects of conditions is desirable. Also desirable is research on whether the
Golden Rule applies to the estimation of uncertainty, e.g., the determination of prediction intervals.
Forecasters, their clients, watchdog organizations, researchers, and lawyers can all use the
Golden Rule checklist to determine whether forecasts are unbiased and likely to be accurate. Firms
can use the checklist to improve their forecasting to the benefit of their owners, suppliers, and
customers; investors can do so for new business ventures; and interested parties and the media can
use the checklist to assess public policies. While the commentators offer cautions and ideas for
extensions, there is no need to wait for further research given the many benefits of following the
Golden Rule, not least of which is that violating a single guideline in the checklist is likely to
increase forecast error by 44 percent, or more than two-fifths, on average.
References
Armstrong, J. S. (1985). Long-Range Forecasting: From Crystal Ball to Computer. (2nd ed.).
New York: John Wiley.
Armstrong, J. S., & Collopy, F. (1992). Error measures for generalizing about forecasting
methods: empirical comparisons. International Journal of Forecasting, 8(1), 69–80.
Armstrong, J. S., Green, K. C., & Graefe, A. (2015). The golden rule of forecasting. Journal of
Business Research, [this issue], xxx–yyy.
Fildes, R., & Petropoulos, F. (2015). Is there a Golden Rule? Journal of Business Research,
[this issue], xxx–yyy.
Gardner, E. S., Jr. (2015). Conservative forecasting with the damped trend. Journal of
Business Research, [this issue], xxx–yyy.
Goodwin, P. (2015). Is a more liberal approach to conservatism needed in forecasting?
Journal of Business Research, [this issue], xxx–yyy.
Ord, J. K., & Fildes, R. (2013). Principles of Business Forecasting (International Edition).
Mason, OH: Cengage Learning.
Soyer, E., & Hogarth, R. M. (2012). Illusion of predictability: How regression statistics
mislead experts. International Journal of Forecasting, 28(3), 695–711.
Soyer, E., & Hogarth, R. M. (2015). The Golden Rule of forecasting: objections, refinements,
and enhancements. Journal of Business Research, [this issue], xxx–yyy.
Stock, J. H., & Watson, M. W. (2002). Forecasting using principal components from a large
number of predictors. Journal of the American Statistical Association, 97(460), 1167–
1179.
Sunstein, C. R., & Zeckhauser, R. (2011). Overreaction to fearsome risks. Environmental and
Resource Economics, 48(3), 435–449.