Article

When simple is hard to accept

Authors: Robin M. Hogarth
... The question of when and why simple heuristics like take-the-best achieve high predictive accuracy has been the topic of sustained research (e.g., Hogarth and Karelaia, 2006, 2007; Katsikopoulos and Martignon, 2006; Martignon and Schmitt, 1999; Schmitt & Martignon, 2006). Previous analyses can be seen as focusing on bias and asking the question of when heuristics like take-the-best make accurate inferences when cue validities are known rather than estimated from a sample. ...
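The snippet above turns on how take-the-best uses an ordering of cue validities. For readers unfamiliar with the heuristic, here is a minimal, self-contained sketch in Python; the cue names and values are invented, and the validity ordering is assumed to be given rather than estimated from a sample.

```python
# Minimal take-the-best sketch: infer which of two objects scores higher
# on a criterion by checking binary cues in order of validity and
# stopping at the first cue that discriminates.

def take_the_best(cues_a, cues_b, validity_order):
    """cues_a, cues_b: dicts mapping cue name -> 0/1.
    validity_order: cue names from highest to lowest validity."""
    for cue in validity_order:
        if cues_a[cue] != cues_b[cue]:
            return "A" if cues_a[cue] > cues_b[cue] else "B"
    return None  # no cue discriminates: guess

# Example: which of two cities is larger? (invented cue profiles)
a = {"capital": 1, "has_airport": 1, "university": 1}
b = {"capital": 0, "has_airport": 1, "university": 0}
print(take_the_best(a, b, ["capital", "has_airport", "university"]))  # "A"
```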
... When Robyn Dawes presented the results at professional conferences, distinguished attendees told him that they were impossible. This reaction illustrates the negative impact of the bias bias: Dawes' paper with Corrigan was first rejected and deemed premature, and a sample of recent textbooks in econometrics revealed that none referred to their findings (Hogarth, 2012). These examples are an extreme case of shrinkage, a statistical technique for reducing variance by imposing restrictions on estimated parameter values (Hastie et al., 2001; Hoerl & Kennard, 2000). ...
Article
Full-text available
In marketing and finance, surprisingly simple models sometimes predict more accurately than more complex, sophisticated models. Here, we address the question of when and why simple models succeed — or fail — by framing the forecasting problem in terms of the bias–variance dilemma. Controllable error in forecasting consists of two components, the “bias” and the “variance”. We argue that the benefits of simplicity are often overlooked because of a pervasive “bias bias”: the importance of the bias component of prediction error is inflated, and the variance component of prediction error, which reflects an oversensitivity of a model to different samples from the same population, is neglected. Using the study of cognitive heuristics, we discuss how to reduce variance by ignoring weights, attributes, and dependencies between attributes, and thus make better decisions. Bias and variance, we argue, offer a more insightful perspective on the benefits of simplicity than Occam’s razor.
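Because this bias–variance argument recurs throughout the page, a small simulation can make it concrete. The sketch below (numpy only, all parameters invented) compares ordinary least squares with an equal-weights model on repeated small training samples; under these assumptions the "biased" unit-weight model achieves lower out-of-sample error because its predictions carry no weight-estimation variance.

```python
# Illustration of the bias-variance argument: with small samples, a
# "biased" unit-weight model often beats OLS out of sample because its
# predictions do not vary with sampling noise in the estimated weights.
import numpy as np

rng = np.random.default_rng(0)
n_train, n_test, p = 10, 1000, 5
true_beta = np.array([0.6, 0.5, 0.4, 0.3, 0.2])

def mse(pred, y):
    return np.mean((pred - y) ** 2)

ols_err, unit_err = [], []
for _ in range(500):
    X = rng.normal(size=(n_train, p))
    y = X @ true_beta + rng.normal(scale=2.0, size=n_train)
    Xt = rng.normal(size=(n_test, p))
    yt = Xt @ true_beta + rng.normal(scale=2.0, size=n_test)
    beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]   # estimated weights
    ols_err.append(mse(Xt @ beta_hat, yt))
    unit_err.append(mse(Xt.sum(axis=1), yt))          # equal (unit) weights

print(f"OLS out-of-sample MSE:  {np.mean(ols_err):.2f}")
print(f"Unit-weight MSE:        {np.mean(unit_err):.2f}")
```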
... Even the expense of assembling a panel of experts may not always be necessary if we consider the use of a simple baseline model. There should not always be the presumption that the complex regression model (or expert model) is optimal (Hogarth, 2012). The Dawes rule provides a simple baseline model where all the predictors are correctly aligned in the direction of prediction and are added together to create a unit weighted sum (as opposed to a regression model, for example, where the beta weights indicate different weights for different variables). ...
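The Dawes rule described in this snippet is easy to state in code. Below is a minimal sketch with invented data: predictors are standardized, sign-aligned with the criterion, and summed with unit weights.

```python
# Sketch of the Dawes rule as described above: align each predictor with
# the direction of the criterion, standardize, and add with unit weights.
import numpy as np

def dawes_score(X, y):
    """X: (n, p) predictor matrix; y: (n,) criterion, used only to find
    each predictor's sign. Returns a unit-weighted composite score."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0)          # standardize
    signs = np.sign([np.corrcoef(Z[:, j], y)[0, 1]    # align direction
                     for j in range(X.shape[1])])
    return (Z * signs).sum(axis=1)                    # unit-weighted sum

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 3))                          # invented predictors
y = 2 * X[:, 0] - 1 * X[:, 1] + rng.normal(size=50)   # invented criterion
score = dawes_score(X, y)
print(np.corrcoef(score, y)[0, 1])  # composite tracks the criterion
```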
... The design of the output may, for example, have to take account of what will be acceptable to users. Stakeholders may also have an interest in maintaining their personal involvement in decision making (Hogarth, 2012). This may not just be experts protecting their vested professional interests, but may also come from a belief that their decision making is superior to an algorithm regardless of the validation evidence. ...
Article
This paper introduces the concept of user validity and provides a new perspective on the validity of interpretations from tests. Test interpretation is based on outputs such as test scores, profiles, reports, spreadsheets of multiple candidates' scores, etc. The user validity perspective focuses on the interpretations a test user makes given the purpose of the test and the information provided in the test output. This innovative perspective focuses on how user validity can be extended to content, criterion, and to some extent construct-related validity. It provides a basis for researching the validity of interpretations and an improved understanding of the appropriateness of different approaches to score interpretation, as well as how to design test outputs and assessments that are pragmatic and optimal.
... For inconsistent tasks, it is important to stimulate processes that are sensitive to domain knowledge [38,32] and learning [7]. These processes can be seen to increase the desired signal against the noise of competing processes that are driven by general psychological traits that are not domain specific and not amenable to learning [30,11,19,20,23,12] (see Fig. 3.17). In the spirit of Stewart [45], one must increase the reliability of estimates, in the sense of decreasing undue inter- and intra-rater variance. ...
Chapter
Full-text available
We start by looking at projects and their most abstract product elements, the epics, and show how to estimate their benefit using benefit points. Then we show how to sort epics according to a benefit-cost index to help decide the order in which to put epics into releases. Instantiating points with a monetary value provides added means of prioritizing and determining when to stop sending epics into construction. We show two modes of estimating benefit: one where the purpose is to fulfil a given goal (confirmatory mode), and the other where the purpose is to explore where to set the goal (exploratory mode).
... For inconsistent tasks, it is important to stimulate processes that are sensitive to domain knowledge [38, 32] and learning [7]. These processes can be seen to increase the desired signal against the noise of competing processes that are driven by general psychological traits that are not domain specific and not amenable to learning [30,11,19,20,23,12] (see Fig. 3.17). In the spirit of Stewart [45], one must increase the reliability of estimates, in the sense of decreasing undue inter- and intra-rater variance. ...
Book
Full-text available
This open access book presents a set of basic techniques for estimating the benefit of IT development projects and portfolios. It also offers methods for monitoring how much of that estimated benefit is being achieved during projects. Readers can then use these benefit estimates together with cost estimates to create a benefit/cost index to help them decide which functionalities to send into construction and in what order. This allows them to focus on constructing the functionality that offers the best value for money at an early stage. Although benefits management involves a wide range of activities in addition to estimation and monitoring, the techniques in this book provide a clear guide to achieving what has always been the goal of project and portfolio stakeholders: developing systems that produce as much usefulness and value as possible for the money invested. The techniques can also help deal with vicarious motives and obstacles that prevent this happening. The book equips readers to recognize when a project budget should not be spent in full and resources be allocated elsewhere in a portfolio instead. It also provides development managers and upper management with common ground as a basis for making informed decisions.
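The benefit/cost index at the heart of this book reduces to a simple ranking rule. A toy sketch, with all figures invented:

```python
# Sketch of the benefit/cost index described above: estimate benefit
# points and cost points per epic, then send epics into construction in
# descending order of benefit per unit cost.
epics = [("login", 30, 10), ("search", 50, 40), ("export", 20, 5)]

ranked = sorted(epics, key=lambda e: e[1] / e[2], reverse=True)
for name, benefit, cost in ranked:
    print(f"{name}: index = {benefit / cost:.2f}")
# export (4.00) and login (3.00) go first; search (1.25) can wait.
```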
... In our view, due to its descriptive and prescriptive foundations (Kunc et al., 2016), B-OR research provides a great deal of promise to address such discrepancies, particularly where model supported interventions are concerned. Indeed, in light of the research from the "fast and frugal" paradigm over the past decade, simple may no longer be so hard to accept for the decision-analytic community (Hogarth, 2012). It is time to address the question of simple for whom and under what conditions. ...
Article
Recent studies have proposed the use of “fast and frugal” strategies as viable alternatives to support decision processes in cases where time or other operational constraints preclude the application of standard decision-analytic methods. While a growing body of evidence shows that such procedures can be highly accurate, limited research has evaluated how well decision-makers can execute the prescriptive recommendations of aids based on such strategies in practice. Drawing on the behavioural, neuropsychological and decision-analytic literatures, we propose that an alignment between individual, model and task features will influence the effectiveness with which decision-makers can execute strategies that draw on prescriptive psychological heuristics – “fast and frugal” or otherwise. Our findings suggest that strategy execution is highly sensitive to task characteristics; however, the effects of the number of alternatives and attributes on individuals’ ability to deploy a given strategy differ in magnitude and direction depending on which decision strategy is prescribed. A more compensatory decision style positively affected overall task performance. Subjects’ ability to regulate inhibitory control was found to positively affect non-compensatory strategy execution, while having no discernible bearing on comparable compensatory tasks. Our findings reinforce that synergies between individual, model and task features, rather than any single aspect of the prescriptive model, are more instrumental in driving task performance in aided MCDM contexts. We discuss these findings in light of calls from OR scholars for the development of decision aids that draw on prescriptive “fast and frugal” principles.
... Juster (1972, p. 23) states, "Few people would accept the naïve no-change model even if it were clearly shown to be more accurate." This supposition was supported by Hogarth's (2012) description of four key developments in forecasting in which senior academics resisted overwhelming evidence that simple methods provide forecasts that are more accurate than those from complex ones. ...
Article
Full-text available
This article introduces the Special Issue on simple versus complex methods in forecasting. Simplicity in forecasting requires that (1) method, (2) representation of cumulative knowledge, (3) relationships in models, and (4) relationships among models, forecasts, and decisions are all sufficiently uncomplicated as to be easily understood by decision-makers. Our review of studies comparing simple and complex methods—including those in this special issue—found 97 comparisons in 32 papers. None of the papers provide a balance of evidence that complexity improves forecast accuracy. Complexity increases forecast error by 27 percent on average in the 25 papers with quantitative comparisons. The finding is consistent with prior research to identify valid forecasting methods: all 22 previously identified evidence-based forecasting procedures are simple. Nevertheless, complexity remains popular among researchers, forecasters, and clients. Some evidence suggests that the popularity of complexity may be due to incentives: (1) researchers are rewarded for publishing in highly ranked journals, which favor complexity; (2) forecasters can use complex methods to provide forecasts that support decision-makers’ plans; and (3) forecasters’ clients may be reassured by incomprehensibility. Clients who prefer accuracy should accept forecasts only from simple evidence-based procedures. They can rate the simplicity of forecasters’ procedures using the questionnaire at simple-forecasting.com.
... Unfortunately, simple models often face resistance, because people tend to wrongly believe that complex solutions are necessary to solve complex problems. Hogarth (2012) reported results from four studies, which showed that simple models often perform better than more complex ones. In each case, however, people resisted the findings regarding the performance of simple models. ...
... When the goal is pure prediction, these concerns are less pressing than accuracy and precision. For nowcasting, our results showed that a simple kriging model is enough (on small pelagic fish, Saraux et al. 2014; on the value of simplicity, Hogarth 2012, Ward et al. 2014). However, including predictors in conjunction with the use of shrinkage priors such as the Horseshoe can give equivalent, or better, predictions than a simple kriging model. ...
Article
Habitat modelling is increasingly relevant in biodiversity and conservation studies. A typical application is to predict potential zones of specific conservation interest. With many environmental covariates, a large number of models can be investigated but multi-model inference may become impractical. Shrinkage regression overcomes this issue by dealing with the identification and accurate estimation of effect size for prediction. In a Bayesian framework we investigated the use of a shrinkage prior, the Horseshoe, for variable selection in spatial generalized linear models (GLM). As study cases, we considered 5 datasets on small pelagic fish abundance in the Gulf of Lion (Mediterranean Sea, France) and 9 environmental inputs. We compared the predictive performances of a simple kriging model, a full spatial GLM model with independent normal priors for regression coefficients, a full spatial GLM model with a Horseshoe prior for regression coefficients and 2 zero-inflated models (spatial and non-spatial) with a Horseshoe prior. Predictive performances were evaluated by cross-validation on a hold-out subset of the data: models with a Horseshoe prior performed best, and the full model with independent normal priors worst. With an increasing number of inputs, extrapolation quickly became pervasive as we tried to predict from novel combinations of covariate values. By shrinking regression coefficients with a Horseshoe prior, only one model needed to be fitted to the data in order to obtain reasonable and accurate predictions, including extrapolations.
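As a companion to the abstract above, here is a minimal sketch of a Horseshoe shrinkage prior on regression coefficients, assuming the PyMC library and invented data. It covers only the shrinkage component; the paper's full models add spatial structure and zero inflation, which are omitted here.

```python
# Minimal sketch of regression with a Horseshoe prior on the
# coefficients, in PyMC: a global scale (tau) shrinks all coefficients
# toward zero while local scales (lam) let strong effects escape.
import numpy as np
import pymc as pm

rng = np.random.default_rng(2)
n, p = 100, 9                      # e.g., 9 environmental inputs
X = rng.normal(size=(n, p))
y = 1.5 * X[:, 0] - 0.8 * X[:, 1] + rng.normal(size=n)  # sparse truth

with pm.Model():
    tau = pm.HalfCauchy("tau", beta=1.0)               # global shrinkage
    lam = pm.HalfCauchy("lam", beta=1.0, shape=p)      # local shrinkage
    beta = pm.Normal("beta", mu=0.0, sigma=tau * lam, shape=p)
    sigma = pm.HalfNormal("sigma", sigma=1.0)
    pm.Normal("y", mu=pm.math.dot(X, beta), sigma=sigma, observed=y)
    idata = pm.sample(1000, tune=1000, target_accept=0.95)

# Posterior means: coefficients for the two true effects stay large,
# the rest are shrunk toward zero.
print(idata.posterior["beta"].mean(dim=("chain", "draw")).values)
```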
... But even if changing behavior by other means (e.g., social influence) is an aim, it may be much too ambitious (and probably significantly increase the risk of failure) to focus on complex systems as done in, for instance, the transdisciplinary case study approach or agent simulations. It is a common false belief that complex problems require complex solutions (Hogarth, 2012). I think many examples (mainly in chemistry and physics to be sure) can be found of basic disciplinary research leading to revolutionary applications. ...
Article
Full-text available
In my commentary on the papers in this special section of European Psychologist, I note that the focus of past environmental psychology on changing the human environment to increase people’s well-being has in contemporary environmental psychology been replaced by a focus on changing people and their behavior to preserve the human environment. This change is justified by current concerns in society about the ongoing destruction of the human environment. Yet, the change of focus should not lead to neglecting the role of changing the environment for changing people’s behavior. I argue that it may actually be the most effective behavior change tool. I still criticize approaches focusing on single behaviors for frequently being insufficient. I endorse an approach that entails coercive measures implemented after research has established that changing consumption styles harming the environment does not harm people. Such a broader approach would alert researchers to undesirable (in particular indirect) rebound effects. My view on application is that research findings in (environmental) psychology are difficult to communicate to those who should apply them, not because they are irrelevant but because they, by their nature, are qualitative and conditional. Scholars from other disciplines failing to disclose this have an advantage in attracting attention and building trust.
... Unfortunately, simple models often face resistance, because people tend to wrongly believe that complex solutions are necessary to solve complex problems. Hogarth (2012) reported results from four studies, which showed that simple models often perform better than more complex ones. In each case, however, people resisted the findings regarding the performance of simple models. ...
Article
Full-text available
Simple surveys that ask people who they expect to win are among the most accurate methods for forecasting U.S. presidential elections. The majority of respondents correctly predicted the election winner in 193 (89%) of 217 surveys conducted from 1932 to 2012. Across the last 100 days prior to the seven elections from 1988 to 2012, vote expectation surveys provided more accurate forecasts of election winners and vote shares than four established methods (vote intention polls, prediction markets, econometric models, and expert judgment). Gains in accuracy were particularly large compared to polls. On average, the error of expectation-based vote-share forecasts was 51% lower than the error of polls published the same day. Compared to prediction markets, vote expectation forecasts reduced the error on average by 6%. Vote expectation surveys are inexpensive, easy to conduct, and the results are easy to understand. They provide accurate and stable forecasts and thus make it difficult to frame elections as horse races. Vote expectation surveys should be more strongly utilized in the coverage of election campaigns.
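The vote-expectation method this abstract describes is itself a one-line aggregation rule: forecast the candidate named by the majority of respondents. A toy sketch with invented answers:

```python
# Sketch of the vote-expectation method described above: ask respondents
# who they expect to win and forecast the majority answer.
from collections import Counter

responses = ["A", "A", "B", "A", "B", "A", "A"]    # invented survey data
print(Counter(responses).most_common(1)[0][0])      # forecast winner: "A"
```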
... Why then is there such a strong interest in complex regression analyses? Perhaps this is due to academics' preference for complex solutions, as Hogarth (2012) describes. ...
Article
Full-text available
Soyer and Hogarth’s article, 'The Illusion of Predictability,' shows that diagnostic statistics that are commonly provided with regression analysis lead to confusion, reduced accuracy, and overconfidence. Even highly competent researchers are subject to these problems. This overview examines the Soyer-Hogarth findings in light of prior research on illusions associated with regression analysis. It also summarizes solutions that have been proposed over the past century. These solutions would enhance the value of regression analysis.
... Combining seems too simple. Hogarth (2012) reported results from four case studies showing that simple models often predict complex problems better than more complex ones. In each case, people had difficulty accepting the findings from simple models. ...
Article
Full-text available
We summarize the literature on the effectiveness of combining forecasts by assessing the conditions under which combining is most valuable. Using data on the six US presidential elections from 1992 to 2012, we report the reductions in error obtained by averaging forecasts within and across four election forecasting methods: poll projections, expert judgment, quantitative models, and the Iowa Electronic Markets. Across the six elections, the resulting combined forecasts were more accurate than any individual component method, on average. The gains in accuracy from combining increased with the numbers of forecasts used, especially when these forecasts were based on different methods and different data, and in situations involving high levels of uncertainty. Such combining yielded error reductions of between 16% and 59%, compared to the average errors of the individual forecasts. This improvement is substantially greater than the 12% reduction in error that had been reported previously for combining forecasts.
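The combining result summarized above is easy to reproduce in miniature. The sketch below (all numbers invented) averages several noisy forecasts of the same quantity; with roughly independent errors across methods, the unweighted average's error in this toy setup falls by about half, of the same order as the reductions the article reports.

```python
# Toy illustration of combining by simple (unweighted) averaging: the
# average of several independent noisy forecasts is usually closer to
# the outcome than the typical individual forecast.
import numpy as np

rng = np.random.default_rng(3)
truth = 52.0                              # e.g., a two-party vote share
methods = 4                               # polls, experts, models, markets
forecasts = truth + rng.normal(scale=3.0, size=(1000, methods))

individual_mae = np.abs(forecasts - truth).mean()
combined_mae = np.abs(forecasts.mean(axis=1) - truth).mean()
print(f"average individual error: {individual_mae:.2f}")
print(f"combined-forecast error:  {combined_mae:.2f}")
print(f"error reduction: {1 - combined_mae / individual_mae:.0%}")
```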
... Unfortunately, the index method's simplicity may be its biggest drawback. Summarizing evidence from the literature, Hogarth (2012) showed that people exhibit a general resistance to simple solutions. ...
Article
Full-text available
When deciding for whom to vote, voters should select the candidate they expect to best handle issues, all other things equal. A simple heuristic predicted that the candidate who is rated more favorably on a larger number of issues would win the popular vote. This was correct for nine out of ten U.S. presidential elections from 1972 to 2008. We then used simple linear regression to relate the incumbent's relative issue ratings to the actual two-party popular vote shares. The resulting model yielded out-of-sample forecasts that were competitive with those from the Iowa Electronic Markets and established quantitative models. The issue-index model has implications for political decision makers, as it can help to track campaigns and to decide which issues to focus on.
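The issue-index heuristic in this abstract is a simple tally. A minimal sketch with invented issue ratings:

```python
# Sketch of the issue-index heuristic described above: the candidate
# rated more favorably on more issues is predicted to win the popular vote.
def issue_index_winner(ratings):
    """ratings: dict mapping issue -> 'A' or 'B', the candidate rated
    more favorably on that issue. Returns the predicted winner."""
    a = sum(1 for leader in ratings.values() if leader == "A")
    b = len(ratings) - a
    return "A" if a > b else "B" if b > a else "tie"

ratings = {"economy": "A", "health care": "B", "security": "A",
           "education": "A", "immigration": "B"}
print(issue_index_winner(ratings))  # "A": favored on 3 of 5 issues
```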
Chapter
List of works referred to in Armstrong & Green (2022), The Scientific Method.
Article
Simple, transparent rules are often frowned upon while complex, black-box models are seen as holding greater promise. Yet in quickly changing situations, simple rules can protect against overfitting and adapt quickly. We show that the surprisingly simple recency heuristic forecasts more accurately than Google Flu Trends, which used big data analytics and a black-box algorithm. This heuristic predicts that “this week’s proportion of flu-related doctor visits equals the proportion from the most recent week”. It is based on psychological theory of how people deal with rapidly changing situations. Other theory-inspired heuristics have outperformed big data models in predicting outcomes such as U.S. presidential elections, or uncertain events such as consumer purchases, patient hospitalizations and terrorist attacks. Heuristics are transparent, clearly communicating the underlying rationale for their predictions. We advocate taking into account psychological principles that have evolved over millennia and using these as a benchmark when testing big data models.
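The recency heuristic quoted in the abstract is literally a one-line program. A minimal sketch with an invented series:

```python
# The recency heuristic as a forecasting rule: next week's value is
# predicted to equal the most recently observed value.
def recency_forecast(series):
    """Predict the next value as the last observed value."""
    return series[-1]

flu_visits = [0.021, 0.024, 0.031, 0.029]   # weekly proportions (toy data)
print(recency_forecast(flu_visits))          # forecast for next week: 0.029
```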
Chapter
Homo heuristicus makes inferences in uncertain environments using simple heuristics that ignore information (Gigerenzer and Brighton, 2009). Traditionally, heuristics are seen as second-best solutions that reduce effort at the expense of accuracy and lead to systematic errors. The prevailing assumption is that, to understand the ability of humans and other animals to cope with uncertainty, one should investigate cognitive models that optimize. We introduced the term Homo heuristicus to highlight several reasons why this assumption can be misleading, and argue that heuristics play a critical role in explaining the ability of organisms to make accurate inferences from limited observations of an uncertain and potentially changing environment. In this chapter we use examples to sketch the theoretical basis for this assertion, and examine the progress made in the development of Homo heuristicus as a model of human decision-making.
Article
Full-text available
Large river valleys have long been seen as important factors to shape the mobility, communication, and exchange of Pleistocene hunter-gatherers. However, rivers have been debated as either natural entities people adapt and react to or as cultural and meaningful entities people experience and interpret in different ways. Here, we attempt to integrate both perspectives. Building on theoretical work from various disciplines, we discuss the relationship between biophysical river properties and sociocultural river semantics and suggest that understanding a river’s persona is central to evaluating its role in spatial organization. By reviewing the literature and analyzing European Upper Paleolithic site distribution and raw material transfer patterns in relation to river catchments, we show that the role of prominent rivers varies considerably over time. Both ecological and cultural factors are crucial to explaining these patterns. Whereas the Earlier Upper Paleolithic record displays a general tendency toward conceiving rivers as mobility guidelines, the spatial consolidation process after the colonization of the European mainland is paralleled by a trend of conceptualizing river regimes as frontiers, separating archaeological entities, regional groups, or local networks. The Late Upper Paleolithic Magdalenian, however, is characterized again by a role of rivers as mobility and communication vectors. Tracing changing patterns in the role of certain river regimes through time thus contributes to our growing knowledge of human spatial behavior and helps to improve our understanding of dynamic and mutually informed human-environment interactions in the Paleolithic.
Article
While theories of rationality and decision making typically adopt either a single power-tool perspective or a bag-of-tricks mentality, the research program of ecological rationality bridges these with a theoretically driven account of when different heuristic decision mechanisms will work well. Here we describe two ways to study how heuristics match their ecological setting: The bottom-up approach starts with psychologically plausible building blocks that are combined to create simple heuristics that fit specific environments. The top-down approach starts from the statistical problem facing the organism and a set of principles, such as the bias-variance tradeoff, that can explain when and why heuristics work in uncertain environments, and then shows how effective heuristics can be built by biasing and simplifying more complex models. We conclude with challenges these approaches face in developing a psychologically realistic perspective on human rationality.
Conference Paper
Full-text available
The present study shows that the predictive performance of Ensemble Bayesian Model Averaging (EBMA) strongly depends on the conditions of the forecasting problem. EBMA is of limited value when uncertainty is high, a situation that is common for social science problems. In such situations, one should avoid methods that bear the risk of overfitting. Instead, one should acknowledge the uncertainty in the environment and use conservative methods that are robust when predicting new data. For combining forecasts, consider calculating simple (unweighted) averages of the component forecasts. A vast prior literature finds that simple averages yield forecasts that are often at least as accurate as those from more complex combining methods. A reanalysis and extension of a prior study on US presidential election forecasting, which was intended to demonstrate the usefulness of EBMA, shows that the simple average reduced the error of the combined EBMA forecasts by 25%. Simple averages produce accurate forecasts, are easy to describe, easy to understand, and easy to use. Researchers who develop new methods for combining forecasts need to compare the accuracy of their method to this widely established benchmark method. Forecasting practitioners should favor simple averages over more complex methods unless there is strong evidence in support of differential weights.
Article
Full-text available
We compare the accuracy of simple unweighted averages and Ensemble Bayesian Model Averaging (EBMA) for combining forecasts in the social sciences. A review of prior studies from the domain of economic forecasting finds that the simple average was more accurate than EBMA in four out of five studies. On average, the error of EBMA was 5% higher than the error of the simple average. A reanalysis and extension of a published study provides further evidence for US presidential election forecasting. The error of EBMA was 33% higher than the corresponding error of the simple average. Simple averages are easy to describe, easy to understand and thus easy to use. In addition, simple averages provide accurate forecasts in many settings. Researchers who develop new approaches to combining forecasts need to compare the accuracy of their method to this widely established benchmark. Forecasting practitioners should favor simple averages over more complex methods unless there is strong evidence in support of differential weights.
Chapter
Full-text available
The field of forecasting is concerned with making statements about matters that are currently unknown. The terms "forecast," "prediction," "projections," and "prognosis" are interchangeable as commonly used. Forecasting is also concerned with the effective presentation and use of forecasts.
Article
Heuristics are efficient cognitive processes that ignore information. In contrast to the widely held view that less processing reduces accuracy, the study of heuristics shows that less information, computation, and time can in fact improve accuracy. We discuss some of the major progress made so far, focusing on the discovery of less-is-more effects and the study of the ecological rationality of heuristics which examines in which environments a given strategy succeeds or fails, and why. Homo heuristicus has a biased mind and ignores part of the available information, yet a biased mind can handle uncertainty more efficiently and robustly than an unbiased mind relying on more resource-intensive and general-purpose processing strategies.
Article
Full-text available
How many judgment and decision making (JDM) researchers have not claimed to be building on Herbert Simon's work? We identify two of Simon's goals for JDM research: He sought to understand people's decision processes—the descriptive goal—and studied whether the same processes lead to good decisions—the prescriptive goal. To investigate how recent JDM research relates to these goals, we analyzed the articles published in the Journal of Behavioral Decision Making and in Judgment and Decision Making from 2006 to 2010. Out of 377 articles, 91 cite Simon or we judged them as directly relating to his goals. We asked whether these articles are integrative, in the following sense: For a descriptive article we asked if it contributes to building a theory that reconciles different conceptualizations of cognition such as neural networks and heuristics. For a prescriptive article we asked if it contributes to building a method that combines ideas of other methods such as heuristics and optimization models. Based on our subjective judgments we found that the proportion of integrative articles was 67% of the prescriptive and 52% of the descriptive articles. We offer suggestions for achieving more integration of JDM theories. The article concludes with the thesis that although JDM researchers work under Simon's spell, no one really knows what that spell is.
Article
Simple statistical forecasting rules, which are usually simplifications of classical models, have been shown to make better predictions than more complex rules, especially when the future values of a criterion are highly uncertain. In this article, we provide evidence that some of the fast and frugal heuristics that people use intuitively are able to make forecasts that are as good as or better than those of knowledge-intensive procedures. We draw from research on the adaptive toolbox and ecological rationality to demonstrate the power of using intuitive heuristics for forecasting in various domains including sport, business, and crime.
Article
Laypeople as well as professionals such as business managers and medical doctors often use psychological heuristics. Psychological heuristics are models for making inferences that (1) rely heavily on core human capacities (such as recognition, recall, or imitation); (2) do not necessarily use all available information and process the information they use by simple computations (such as lexicographic rules or aspiration levels); and (3) are easy to understand, apply, and explain. Psychological heuristics are a simple alternative to optimization models (where the optimum of a mathematical function that incorporates all available information is computed). I review studies in business, medicine, and psychology where computer simulations and mathematical analyses reveal conditions under which heuristics make better inferences than optimization and vice versa. The conditions involve concepts that refer to (i) the structure of the problem, (ii) the resources of the decision maker, or (iii) the properties of the models. I discuss open problems in the theoretical study of the concepts. Finally, I organize the current results tentatively in a tree for helping decision analysts decide whether to suggest heuristics or optimization to decision makers. I conclude by arguing for a multimethod, multidisciplinary approach to the theory and practice of inference and decision making.
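Among the building blocks this article names, aspiration levels are perhaps the simplest to demonstrate. Below is a minimal satisficing sketch with invented options; the aspiration value is an assumption of the example.

```python
# Sketch of an aspiration-level (satisficing) rule of the kind listed
# above: examine options in the order they arrive and take the first
# one that clears the aspiration level.
def satisfice(options, aspiration):
    """options: iterable of (name, value) pairs. Returns the first
    option whose value meets the aspiration, else the last one seen."""
    choice = None
    for name, value in options:
        choice = name
        if value >= aspiration:
            return name
    return choice  # nothing cleared the bar: keep the final option

offers = [("supplier A", 6.5), ("supplier B", 8.2), ("supplier C", 9.0)]
print(satisfice(offers, aspiration=8.0))  # "supplier B": first to qualify
```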
Article
Full-text available
The collective recognition heuristic bets on the fact that people's recognition knowledge of names is a proxy for their competitiveness: in sports, it predicts that the better-known team or player wins a game. We present two studies on the predictive power of recognition in forecasting soccer games (World Cup 2006 and UEFA Euro 2008) and analyze previously published results. The performance of the collective recognition heuristic is compared to two benchmarks: predictions based on official rankings and aggregated betting odds. Across three soccer and two tennis tournaments, the predictions based on recognition performed similar to those based on rankings; when compared with betting odds, the heuristic fared reasonably well. Forecasts based on rankings—but not on betting odds—were improved by incorporating collective recognition information. We discuss the use of recognition for forecasting in sports and conclude that aggregating across individual ignorance spawns collective wisdom.
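The heuristic just described can be written as a two-line decision rule. A minimal sketch with invented recognition counts:

```python
# Sketch of the collective recognition heuristic discussed above:
# forecast that the team recognized by more survey respondents wins.
def recognition_winner(recognized_a, recognized_b, team_a, team_b):
    """recognized_a/b: number of respondents recognizing each team."""
    if recognized_a == recognized_b:
        return "no prediction"           # heuristic does not discriminate
    return team_a if recognized_a > recognized_b else team_b

print(recognition_winner(87, 34, "Brazil", "Togo"))  # "Brazil"
```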