Chapter

When Simple Is Hard to Accept

Author: Robin M. Hogarth
... Indeed, the complex statistics typically provided with regression models are unlikely to help decision-makers make better decisions, as they confuse even statisticians. Soyer and Hogarth (2012) asked 90 economists from leading universities to interpret standard regression analysis summaries. Roughly two-thirds of their answers to three relevant questions were substantively wrong. ...
... Juster (1972, p. 23) states, "Few people would accept the naïve no-change model even if it were clearly shown to be more accurate." This supposition was supported by Hogarth's (2012) description of four key developments in forecasting in which senior academics resisted overwhelming evidence that simple methods provide forecasts that are more accurate than those from complex ones. ...
Article
Full-text available
This article introduces this JBR Special Issue on simple versus complex methods in forecasting. Simplicity in forecasting requires that (1) method, (2) representation of cumulative knowledge, (3) relationships in models, and (4) relationships among models, forecasts, and decisions are all sufficiently uncomplicated as to be easily understood by decision-makers. Our review of studies comparing simple and complex methods - including those in this special issue - found 97 comparisons in 32 papers. None of the papers provide a balance of evidence that complexity improves forecast accuracy. Complexity increases forecast error by 27 percent on average in the 25 papers with quantitative comparisons. The finding is consistent with prior research to identify valid forecasting methods: all 22 previously identified evidence-based forecasting procedures are simple. Nevertheless, complexity remains popular among researchers, forecasters, and clients. Some evidence suggests that the popularity of complexity may be due to incentives: (1) researchers are rewarded for publishing in highly ranked journals, which favor complexity; (2) forecasters can use complex methods to provide forecasts that support decision-makers’ plans; and (3) forecasters’ clients may be reassured by incomprehensibility. Clients who prefer accuracy should accept forecasts only from simple evidence-based procedures. They can rate the simplicity of forecasters’ procedures using the questionnaire at simple-forecasting.com.
... The question of when and why simple heuristics like take-the-best achieve high predictive accuracy has been the topic of sustained research (e.g., Hogarth and Karelaia, 2006, 2007; Katsikopoulos and Martignon, 2006; Martignon and Schmitt, 1999; Schmitt & Martignon, 2006). Previous analyses can be seen as focusing on bias and asking the question of when heuristics like take-the-best make accurate inferences when cue validities are known rather than estimated from a sample. ...
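To make the heuristic discussed in this excerpt concrete, here is a minimal Python sketch of take-the-best. It assumes cues are binary, already ordered from most to least valid, and coded so that a value of 1 points toward the larger criterion value; the function and example profiles are illustrative, not taken from the cited studies.

    def take_the_best(cues_a, cues_b):
        """Infer which of two objects scores higher on the criterion.

        cues_a, cues_b: binary cue profiles (1/0), ordered from the most
        valid cue to the least valid. Returns 'A', 'B', or 'guess'.
        """
        for ca, cb in zip(cues_a, cues_b):
            if ca != cb:                 # first discriminating cue decides
                return 'A' if ca > cb else 'B'
        return 'guess'                   # no cue discriminates

    # Example: three cues, ordered by validity
    print(take_the_best([1, 0, 1], [0, 1, 1]))   # -> 'A' (decided by the first cue)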
... When Robyn Dawes presented the results at professional conferences, distinguished attendees told him that they were impossible. This reaction illustrates the negative impact of the bias bias: Dawes' paper with Corrigan was first rejected and deemed premature, and a sample of recent textbooks in econometrics revealed that none referred to their findings (Hogarth, 2012). These examples are an extreme case of shrinkage, a statistical technique for reducing variance by imposing restrictions on estimated parameter values (Hastie et al., 2001; Hoerl & Kennard, 2000). ...
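For readers unfamiliar with shrinkage, the short sketch below uses ridge regression (the technique of Hoerl and Kennard, here via scikit-learn) on synthetic data to show how a stronger penalty pulls estimated coefficients toward zero and thereby reduces variance; the data and penalty values are illustrative assumptions.

    import numpy as np
    from sklearn.linear_model import Ridge

    rng = np.random.default_rng(0)
    X = rng.normal(size=(40, 5))                      # small sample, 5 predictors
    beta_true = np.array([2.0, -1.0, 0.0, 0.0, 0.0])
    y = X @ beta_true + rng.normal(scale=2.0, size=40)

    for alpha in (0.01, 1.0, 10.0, 100.0):
        coefs = Ridge(alpha=alpha).fit(X, y).coef_
        # larger alpha = stronger shrinkage toward zero = lower variance across samples
        print(f"alpha={alpha:6.2f}  coefficients={np.round(coefs, 2)}")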
Article
Full-text available
In marketing and finance, surprisingly simple models sometimes predict more accurately than more complex, sophisticated models. Here, we address the question of when and why simple models succeed — or fail — by framing the forecasting problem in terms of the bias–variance dilemma. Controllable error in forecasting consists of two components, the “bias” and the “variance”. We argue that the benefits of simplicity are often overlooked because of a pervasive “bias bias”: the importance of the bias component of prediction error is inflated, and the variance component of prediction error, which reflects an oversensitivity of a model to different samples from the same population, is neglected. Using the study of cognitive heuristics, we discuss how to reduce variance by ignoring weights, attributes, and dependencies between attributes, and thus make better decisions. Bias and variance, we argue, offer a more insightful perspective on the benefits of simplicity than Occam’s razor.
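To make the bias–variance decomposition concrete, the following sketch (with made-up data-generating assumptions) repeatedly refits an ordinary least-squares model and a unit-weight model on small training samples and splits each model's squared prediction error at a fixed test point into bias squared plus variance (irreducible noise is omitted by using the noise-free target).

    import numpy as np

    rng = np.random.default_rng(1)
    p, n_train, n_reps = 5, 15, 5000
    beta = np.array([1.0, 0.8, 0.6, 0.4, 0.2])        # assumed "true" weights
    x_test = np.array([1.0, 1.0, 0.0, 0.0, -1.0])     # fixed test point
    y_target = x_test @ beta                          # noise-free target value

    preds = {"OLS (low bias, high variance)": [],
             "unit weights (high bias, low variance)": []}
    for _ in range(n_reps):
        X = rng.normal(size=(n_train, p))
        y = X @ beta + rng.normal(scale=3.0, size=n_train)
        # OLS: estimate all 5 weights from the small sample
        b_ols = np.linalg.lstsq(X, y, rcond=None)[0]
        preds["OLS (low bias, high variance)"].append(x_test @ b_ols)
        # Unit weights: sum the predictors, estimate a single scaling factor
        s = X.sum(axis=1)
        c = (s @ y) / (s @ s)
        preds["unit weights (high bias, low variance)"].append(c * x_test.sum())

    for name, vals in preds.items():
        vals = np.array(vals)
        bias2, var = (vals.mean() - y_target) ** 2, vals.var()
        print(f"{name}: bias^2={bias2:.2f}  variance={var:.2f}  total={bias2 + var:.2f}")

With these (invented) settings, the unit-weight model's extra bias is more than offset by its much smaller variance, which is the pattern the abstract argues is neglected by the "bias bias".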
... Even the expense of assembling a panel of experts may not always be necessary if we consider the use of a simple baseline model. There should not be the presumption that the complex regression model (or expert model) is always optimal (Hogarth, 2012). The Dawes rule provides a simple baseline model where all the predictors are correctly aligned in the direction of prediction and are added together to create a unit-weighted sum (as opposed to a regression model, for example, where the beta weights indicate different weights for different variables). ...
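A minimal sketch of such a unit-weighted baseline is given below: it standardizes each predictor, flips its sign to align with the direction of prediction (estimated here by its correlation with the criterion), and sums the results. The data-handling details are illustrative assumptions, not a prescribed implementation.

    import numpy as np

    def dawes_score(X, y):
        """Unit-weighted composite: standardize, sign-align, and sum the predictors."""
        Z = (X - X.mean(axis=0)) / X.std(axis=0)            # standardize each predictor
        signs = np.sign([np.corrcoef(Z[:, j], y)[0, 1] for j in range(X.shape[1])])
        return Z @ signs                                     # unit-weighted sum

    # Illustrative use: rank cases by the composite instead of fitted beta weights
    rng = np.random.default_rng(2)
    X = rng.normal(size=(50, 4))
    y = X @ np.array([0.9, 0.5, 0.3, -0.4]) + rng.normal(size=50)
    scores = dawes_score(X, y)
    print("Correlation of unit-weighted scores with criterion:",
          round(np.corrcoef(scores, y)[0, 1], 2))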
... The design of the output may, for example, have to take account of what will be acceptable to users. Stakeholders may also have an interest in maintaining their personal involvement in decision making (Hogarth, 2012). This may not just be experts protecting their vested professional interests, but may also come from a belief that their decision making is superior to an algorithm regardless of the validation evidence. ...
Article
This paper introduces the concept of user validity and provides a new perspective on the validity of interpretations from tests. Test interpretation is based on outputs such as test scores, profiles, reports, spreadsheets of multiple candidates' scores, etc. The user validity perspective focuses on the interpretations a test user makes given the purpose of the test and the information provided in the test output. This innovative perspective focuses on how user validity can be extended to content, criterion, and to some extent construct-related validity. It provides a basis for researching the validity of interpretations and an improved understanding of the appropriateness of different approaches to score interpretation, as well as how to design test outputs and assessments that are pragmatic and optimal.
... These findings, which go back to seminal work done in the 1970s (Dawes 1979, Dawes and Corrigan 1974, Einhorn and Hogarth 1975), have yet to impact many fields. A review of five standard econometrics textbooks finds that none pays attention to the equal-weights literature (Hogarth 2012), and Kahneman (2011) observes that this work had no impact on statistical practice in the social sciences. ...
Conference Paper
Full-text available
The present study shows that the predictive performance of Ensemble Bayesian Model Averaging (EBMA) strongly depends on the conditions of the forecasting problem. EBMA is of limited value when uncertainty is high, a situation that is common for social science problems. In such situations, one should avoid methods that bear the risk of overfitting. Instead, one should acknowledge the uncertainty in the environment and use conservative methods that are robust when predicting new data. For combining forecasts, consider calculating simple (unweighted) averages of the component forecasts. A vast prior literature finds that simple averages yield forecasts that are often at least as accurate as those from more complex combining methods. A reanalysis and extension of a prior study on US presidential election forecasting, which had the purpose to demonstrate the usefulness of EBMA, shows that the simple average reduced the error of the combined EBMA forecasts by 25%. Simple averages produce accurate forecasts, are easy to describe, easy to understand, and easy to use. Researchers who develop new methods for combining forecasts need to compare the accuracy of their method to this widely established benchmark method. Forecasting practitioners should favor simple averages over more complex methods unless there is strong evidence in support of differential weights.
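The benchmark the authors refer to is easy to reproduce. The sketch below, using made-up component forecasts, combines them with a simple unweighted average and reports the mean absolute error of the components and of the combination.

    import numpy as np

    # Hypothetical forecasts of an incumbent's two-party vote share from three methods
    forecasts = np.array([
        [52.1, 49.8, 51.0],    # election 1: methods A, B, C
        [47.5, 48.9, 46.2],    # election 2
        [50.3, 53.0, 51.5],    # election 3
    ])
    actual = np.array([51.2, 47.0, 51.8])

    combined = forecasts.mean(axis=1)                       # simple unweighted average
    mae_individual = np.abs(forecasts - actual[:, None]).mean()
    mae_combined = np.abs(combined - actual).mean()
    print(f"MAE of individual forecasts: {mae_individual:.2f}")
    print(f"MAE of simple average:       {mae_combined:.2f}")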
... There is one problem, however: people have no faith in simple methods. Simple methods often face resistance because people wrongly believe that complex solutions are necessary to solve complex problems. Hogarth (2012) reviews four influential studies which showed that simple methods often perform better than more complex ones. In each case, fellow researchers initially resisted the findings regarding the superiority of simple methods. ...
Article
Full-text available
We compare the accuracy of simple unweighted averages and Ensemble Bayesian Model Averaging (EBMA) to combining forecasts in the social sciences. A review of prior studies from the domain of economic forecasting finds that the simple average was more accurate than EBMA in four out of five studies. On average, the error of EBMA was 5% higher than the error of the simple average. A reanalysis and extension of a published study provides further evidence for US presidential election forecasting. The error of EBMA was 33% higher than the corresponding error of the simple average. Simple averages are easy to describe, easy to understand and thus easy to use. In addition, simple averages provide accurate forecasts in many settings. Researchers who develop new approaches to combining forecasts need to compare the accuracy of their method to this widely established benchmark. Forecasting practitioners should favor simple averages over more complex methods unless there is strong evidence in support of differential weights.
Article
Psychological artificial intelligence (AI) applies insights from psychology to design computer algorithms. Its core domain is decision-making under uncertainty, that is, ill-defined situations that can change in unexpected ways, rather than well-defined, stable problems such as chess and Go. Psychological theories about heuristic processes under uncertainty can provide possible insights. I provide two illustrations. The first shows how recency, the human tendency to rely on the most recent information and ignore base rates, can be built into a simple algorithm that predicts the flu substantially better than did Google Flu Trends' big-data algorithms. The second uses a result from memory research, the paradoxical effect that making numbers less precise increases recall, in the design of algorithms that predict recidivism. These case studies provide an existence proof that psychological AI can help design efficient and transparent algorithms.
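As a hedged illustration of the recency idea described in this abstract (not the authors' exact algorithm), the sketch below forecasts next week's flu-related doctor visits simply as the most recently observed value and scores the forecasts against the actual series; the numbers are invented.

    import numpy as np

    # Invented weekly percentages of doctor visits for influenza-like illness
    ili = np.array([1.2, 1.4, 1.9, 2.6, 3.1, 2.8, 2.2, 1.7])

    # Recency heuristic: the forecast for week t is the value observed in week t-1
    forecast = ili[:-1]
    actual = ili[1:]
    mae = np.abs(forecast - actual).mean()
    print(f"Mean absolute error of the recency forecast: {mae:.2f} percentage points")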
Article
Habitat modelling is increasingly relevant in biodiversity and conservation studies. A typical application is to predict potential zones of specific conservation interest. With many environmental covariates, a large number of models can be investigated but multi-model inference may become impractical. Shrinkage regression overcomes this issue by dealing with the identification and accurate estimation of effect size for prediction. In a Bayesian framework we investigated the use of a shrinkage prior, the Horseshoe, for variable selection in spatial generalized linear models (GLM). As study cases, we considered 5 datasets on small pelagic fish abundance in the Gulf of Lion (Mediterranean Sea, France) and 9 environmental inputs. We compared the predictive performances of a simple kriging model, a full spatial GLM model with independent normal priors for regression coefficients, a full spatial GLM model with a Horseshoe prior for regression coefficients and 2 zero-inflated models (spatial and non-spatial) with a Horseshoe prior. Predictive performances were evaluated by cross-validation on a hold-out subset of the data: models with a Horseshoe prior performed best, and the full model with independent normal priors worst. With an increasing number of inputs, extrapolation quickly became pervasive as we tried to predict from novel combinations of covariate values. By shrinking regression coefficients with a Horseshoe prior, only one model needed to be fitted to the data in order to obtain reasonable and accurate predictions, including extrapolations.
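A simplified, non-spatial sketch of a regression with a Horseshoe prior is given below, written with PyMC on synthetic data. The cited study's spatial GLMs and zero-inflated components are not reproduced, and the data, priors, and variable names are illustrative assumptions rather than the authors' model.

    import numpy as np
    import pymc as pm

    # Synthetic stand-in data: n observations, p candidate environmental covariates
    rng = np.random.default_rng(0)
    n, p = 200, 9
    X = rng.normal(size=(n, p))
    true_beta = np.array([1.5, -1.0, 0, 0, 0, 0, 0, 0, 0])   # only two covariates matter
    y = X @ true_beta + rng.normal(scale=1.0, size=n)

    with pm.Model() as horseshoe_regression:
        tau = pm.HalfCauchy("tau", beta=1.0)                  # global shrinkage
        lam = pm.HalfCauchy("lam", beta=1.0, shape=p)         # local shrinkage, one per covariate
        beta = pm.Normal("beta", mu=0.0, sigma=tau * lam, shape=p)
        sigma = pm.HalfNormal("sigma", sigma=1.0)
        pm.Normal("y_obs", mu=pm.math.dot(X, beta), sigma=sigma, observed=y)
        idata = pm.sample(1000, tune=1000, target_accept=0.9)

The heavy-tailed Half-Cauchy scales let coefficients of irrelevant covariates be shrunk strongly toward zero while relevant ones escape shrinkage, which is the property the abstract relies on for prediction and extrapolation.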
Article
A reanalysis and extension of Montgomery, Hollenbach, and Ward (2012) shows that the predictive performance of Ensemble Bayesian Model Averaging (EBMA) strongly depends on the conditions of the forecasting problem. EBMA is of limited value in situations with small samples and many component forecasts, a situation that is common for social science prediction problems. These results conform to a large body of research, which has determined that simple approaches to combining (such as equal weights) often perform as well as sophisticated approaches when combining forecasts. Simple averages are easy to describe, easy to understand, and easy to use. They should be favored over more complex methods unless one has strong evidence that differential weights will improve accuracy.
Article
While theories of rationality and decision making typically adopt either a single-power-tool perspective or a bag-of-tricks mentality, the research program of ecological rationality bridges these with a theoretically driven account of when different heuristic decision mechanisms will work well. Here we describe two ways to study how heuristics match their ecological setting: The bottom-up approach starts with psychologically plausible building blocks that are combined to create simple heuristics that fit specific environments. The top-down approach starts from the statistical problem facing the organism and a set of principles, such as the bias-variance tradeoff, that can explain when and why heuristics work in uncertain environments, and then shows how effective heuristics can be built by biasing and simplifying more complex models. We conclude with challenges these approaches face in developing a psychologically realistic perspective on human rationality.
Article
Simple heuristics, such as deterministic elimination by aspects (DEBA) and equal weighting of attributes with DEBA as a tiebreaker (EW/DEBA), have been found to perform curiously well in choosing one out of many alternatives based on a few binary attributes. DEBA and EW/DEBA sometimes achieve near-perfect performance and complement each other (if one is wrong or does not apply, the other is correct). Here, these findings are confirmed and extended; most importantly, a theory is presented that explains them. The theory allows calculating the performance of any model, for any number of binary attributes, for any preferences of the decision maker, for all sizes of the consideration set, and for sampling alternatives with as well as without replacement. Calculations based on the theory organize previous empirical findings and provide new surprising results. For example, the performance of both DEBA and EW/DEBA is a U-shaped function of the size of the consideration set and converges relatively quickly to perfection as the size of the consideration set increases (this result holds even when the preferences of the decision maker are worst-case scenarios for the performance of the heuristics). An explanation for why DEBA and EW/DEBA complement each other is also provided. Finally, the need for a unified theory of multiattribute choice and cue-based judgment is discussed.
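To make the two heuristics concrete, here is a minimal sketch under one common specification (illustrative assumptions: attributes are binary, ordered from most to least important, and a value of 1 is the preferred value); it is not the calculation framework developed in the paper.

    def deba(alternatives):
        """Deterministic elimination by aspects over binary attribute profiles.

        Scan attributes from most to least important; at each step keep only the
        alternatives that have the attribute (value 1), as long as at least one does.
        Returns the indices (within `alternatives`) of the surviving alternatives.
        """
        remaining = list(range(len(alternatives)))
        for a in range(len(alternatives[0])):
            keep = [i for i in remaining if alternatives[i][a] == 1]
            if keep:
                remaining = keep
        return remaining

    def ew_deba(alternatives):
        """Equal weighting (sum of binary attributes), with DEBA as the tiebreaker."""
        totals = [sum(alt) for alt in alternatives]
        best = max(totals)
        tied_idx = [i for i, t in enumerate(totals) if t == best]
        if len(tied_idx) == 1:
            return tied_idx[0]
        winners = deba([alternatives[i] for i in tied_idx])   # break ties with DEBA
        return tied_idx[winners[0]]

    # Example: three alternatives on three attributes (most important first)
    alts = [(1, 0, 1), (1, 1, 0), (0, 1, 1)]
    print("DEBA keeps:", deba(alts))          # -> [1], i.e. alternative (1, 1, 0)
    print("EW/DEBA chooses:", ew_deba(alts))  # -> 1 (all tied on sums, DEBA breaks the tie)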
Article
Full-text available
In my commentary on the papers in this special section of European Psychologist, I note that the focus of past environmental psychology on changing the human environment to increase people’s well-being has in contemporary environmental psychology been replaced by a focus on changing people and their behavior to preserve the human environment. This change is justified by current concerns in society about the ongoing destruction of the human environment. Yet, the change of focus should not lead to neglecting the role of changing the environment for changing people’s behavior. I argue that it may actually be the most effective behavior change tool. I still criticize approaches focusing on single behaviors for frequently being insufficient. I endorse an approach that entails coercive measures implemented after research has established that changing consumption styles harming the environment does not harm people. Such a broader approach would alert researchers to undesirable (in particular indirect) rebound effects. My view on application is that research findings in (environmental) psychology are difficult to communicate to those who should apply them, not because they are irrelevant but because they, by their nature, are qualitative and conditional. Scholars from other disciplines failing to disclose this have an advantage in attracting attention and building trust.
Article
Full-text available
Soyer and Hogarth’s article, 'The Illusion of Predictability,' shows that diagnostic statistics that are commonly provided with regression analysis lead to confusion, reduced accuracy, and overconfidence. Even highly competent researchers are subject to these problems. This overview examines the Soyer-Hogarth findings in light of prior research on illusions associated with regression analysis. It also summarizes solutions that have been proposed over the past century. These solutions would enhance the value of regression analysis.
Article
Full-text available
We summarize the literature on the effectiveness of combining forecasts by assessing the conditions under which combining is most valuable. Using data on the six US presidential elections from 1992 to 2012, we report the reductions in error obtained by averaging forecasts within and across four election forecasting methods: poll projections, expert judgment, quantitative models, and the Iowa Electronic Markets. Across the six elections, the resulting combined forecasts were more accurate than any individual component method, on average. The gains in accuracy from combining increased with the numbers of forecasts used, especially when these forecasts were based on different methods and different data, and in situations involving high levels of uncertainty. Such combining yielded error reductions of between 16% and 59%, compared to the average errors of the individual forecasts. This improvement is substantially greater than the 12% reduction in error that had been reported previously for combining forecasts.
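The error-reduction figures quoted above are straightforward to compute once the component and combined errors are known; the sketch below shows the calculation on invented numbers, with the reduction measured relative to the average error of the individual forecasts.

    import numpy as np

    # Invented absolute errors (percentage points) of four component forecasting
    # methods and of their combined forecast, for one election
    component_errors = np.array([2.4, 1.8, 3.1, 2.0])
    combined_error = 1.3

    reduction = 1 - combined_error / component_errors.mean()
    print(f"Error reduction from combining: {reduction:.0%}")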