Article

Transparent modeling of influenza incidence: Big data or a single data point from psychological theory?

Authors: Konstantinos V. Katsikopoulos, Özgür Şimşek, Marcus Buckmann, Gerd Gigerenzer

Abstract

Simple, transparent rules are often frowned upon, while complex, black-box models are seen as holding greater promise. Yet in quickly changing situations, simple rules can protect against overfitting and adapt quickly. We show that the surprisingly simple recency heuristic forecasts more accurately than Google Flu Trends, which used big data analytics and a black-box algorithm. This heuristic predicts that “this week’s proportion of flu-related doctor visits equals the proportion from the most recent week”. It is based on psychological theory of how people deal with rapidly changing situations. Other theory-inspired heuristics have outperformed big data models in predicting outcomes such as U.S. presidential elections, or uncertain events such as consumer purchases, patient hospitalizations, and terrorist attacks. Heuristics are transparent, clearly communicating the underlying rationale for their predictions. We advocate taking into account psychological principles that have evolved over millennia and using these as a benchmark when testing big data models.
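The recency heuristic described in the abstract can be written down in a few lines. Below is a minimal sketch (not the authors' code; the weekly series and the error measure are illustrative assumptions) showing the one-week-ahead forecast and a mean absolute error computation that could be used to compare it against any other forecast series.

```python
# Minimal sketch of the recency heuristic (illustrative, not the authors' code):
# forecast next week's proportion of flu-related doctor visits as the most
# recently observed proportion.
from typing import List, Sequence


def recency_forecast(history: Sequence[float]) -> float:
    """Predict the next observation as the most recent one."""
    return history[-1]


def mean_absolute_error(actual: Sequence[float], predicted: Sequence[float]) -> float:
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)


# Hypothetical weekly proportions of flu-related doctor visits (in percent).
weekly_ili = [1.2, 1.5, 2.1, 2.8, 2.6, 2.0]

# One-week-ahead forecasts for weeks 2..n simply reuse the previous week's value.
forecasts: List[float] = [recency_forecast(weekly_ili[:t]) for t in range(1, len(weekly_ili))]
print(mean_absolute_error(weekly_ili[1:], forecasts))  # error of the recency heuristic
```

The same error computation could be applied to any competing forecast series to reproduce the kind of head-to-head comparison reported in the paper.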


... We also observe variations in the drug types that cause overdose deaths in the three regions. ... We refer to these two baselines as "recency heuristics" [101,103]. ...
... We refer to these two baselines as "recency heuristics" [101,103]. In the first baseline (i.e., recency heuristic I), the forecast for the number of overdose deaths in a specific age group in year Y is assumed to be the ...
... However, it may exhibit a higher degree of overshooting compared to the EnKF forecast, as observed in 2021. Since they rely on the most recent available observational data, model-free recency-type heuristics, such as recency heuristics I and II, usually perform well when forecasting over short time horizons. Previous studies have demonstrated that recency heuristic I can provide more accurate forecasts than Google Flu Trends [101], and that recency heuristic II can compete favorably with CDC and ECDC COVID-19 ensemble forecasting models [102]. However, simple data-driven heuristics do not typically yield reliable long-term forecasts, as they lack the ability to account for underlying population-level dynamics. ...
Article
Full-text available
The drug overdose crisis in the United States continues to intensify. Fatalities have increased five-fold since 1999, reaching a record high of 108,000 deaths in 2021. The epidemic has unfolded through distinct waves of different drug types, uniquely impacting various age, gender, race and ethnic groups in specific geographical areas. One major challenge in designing interventions and efficiently delivering treatment is forecasting age-specific overdose patterns at the local level. To address this need, we develop a forecasting method that assimilates observational data obtained from the CDC WONDER database with an age-structured model of addiction and overdose mortality. We apply our method nationwide and to three select areas: Los Angeles County, Cook County and the five boroughs of New York City, providing forecasts of drug-overdose mortality and estimates of relevant epidemiological quantities, such as mortality and age-specific addiction rates.
... Purpose Katsikopoulos et al. (2021) found that the simple and easily understood recency heuristic, which uses a single historical observation to forecast the week-ahead percentage of doctor visits associated with influenza symptoms, reduced forecast errors by nearly one-half compared to Google Flu Trends' (GFT's) complex and opaque machine learning model, which uses "big data". ...
... Damped trend models reduced absolute forecast errors by 13% relative to forecasts from the recency heuristic, by nearly 12% relative to forecasts from linear regression models estimated from the two most recent observations, and by roughly 54% relative to Google Flu Trends forecasts for the 440-week period examined by Katsikopoulos et al. (2021). See Table 1. ...
... a Changes in this revision from the original 23 January 2021 version are the addition of the word "not" in the "Limitations" section on 25 January, the note on the title of the original version of this research note that is referred to in Katsikopoulos et al.'s (2022) reply, and a correction to the title of Katsikopoulos et al. (2021) in the References section. ...
Experiment Findings
Full-text available
Katsikopoulos et al. (2021) found that the simple and easily understood recency heuristic, which uses a single historical observation to forecast the week-ahead percentage of doctor visits associated with influenza symptoms, reduced forecast errors by nearly one-half compared to Google Flu Trends' (GFT's) complex and opaque machine learning model, which uses "big data". This research note examines whether the accuracy of forecasts can be further improved by using another simple forecasting method (Green & Armstrong, 2015) that takes account of the observation that infection rates can trend, and does so in a conservative way (Armstrong, Green, and Graefe, 2015) by damping recent trends toward zero.
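The damped-trend idea summarized in this research note can be sketched as follows. The damping factor of 0.5 and the data below are placeholders of mine, not the specification used by Green and Armstrong (2015) or in the note itself.

```python
# Illustrative damped-trend forecast (assumed form, not the note's exact model):
# extrapolate the most recent week-over-week change, but shrink it toward zero.
from typing import Sequence


def damped_trend_forecast(history: Sequence[float], damping: float = 0.5) -> float:
    """One-step-ahead forecast: last value plus a damped version of the last change."""
    if len(history) < 2:
        return history[-1]  # with a single observation, fall back to the recency heuristic
    trend = history[-1] - history[-2]
    return history[-1] + damping * trend


weekly_ili = [1.2, 1.5, 2.1, 2.8]  # hypothetical weekly percentages
print(damped_trend_forecast(weekly_ili, damping=0.5))  # 2.8 + 0.5 * 0.7 = 3.15
```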
... The association of living creatures and their natural environmental conditions is presented through the theories of ecological sciences (Rosindell et al., 2011). Similarly, the financial returns generated from social investments, with their cost and benefit relations, are expressed by finance theories (Katsikopoulos et al., 2022). Finally, the management and balanced usage of resources to meet the supply and demand conditions are expressed by economic theories (Daugaard & Ding, 2022). ...
... Ecological theory, Metabolic theory, Niche theory, Neutral theory, Environmental load theory, Arousal theory (Chave, 2004; Leigh, 2007; Palmer et al., 1997; Rosindell et al., 2011); Finance: Portfolio theory, Capital structure theory, Utility theory, Non-rational choice theory (Katsikopoulos et al., 2022; Shehzad & Khan, 2024a); Economics: ESG theory, Economic theory, Supply and demand theory, Keynesian economic theory (Daugaard & Ding, 2022; Dolderer et al., 2021). Portfolio, capital structure, utility, ratio analysis, and equilibrium theories are related to finance. Portfolio theory facilitates risk mitigation of stakeholders' concerns and focuses on maximizing returns or minimizing risks through diversification (Ando & Shah, 2016). ...
Article
Full-text available
The global concern of biodiversity loss highlights the significance of sustainable practices protecting natural resources, which require massive investment. However, scarce resources often hinder this, particularly in emerging economies. Biodiversity finance can be an anticipated solution to this burning issue that can ensure the sustainable management of the ecosystem. Still, due to the evolving nature of the concept, its application is restricted to different stages. The current study explores the impediments to biodiversity finance implementation in emerging economies like Pakistan. It is based on a qualitative research design and is conducted in two stages. Initially, an extensive literature review was conducted to clarify the concept of biodiversity finance and identify the impediments reported by prior researchers. Later, twenty-five semi-structured interviews were conducted with the members of financial institutions, NGOs, agencies, policymakers, investors, and subject experts. Finally, the interviews were transcribed and analyzed using NVivo-14. The identified impediments were categorized into seven major themes: conceptual, social, environmental, finance, economic, framework-related, and territorial, and also suggest the future perspective of the concept. The findings suggest that the major hindrances in biodiversity finance adoption are lack of conceptualization, economic and political instability, inadequate funds, social injustice, environmental deregulation, and excess usage of natural resources. The findings will be helpful for financial institutions, NGOs, and agencies in policy-making and framework formulation for successfully applying the biodiversity finance system.
... Heuristics or rule-of-thumb strategies may be especially suited under conditions of complexity and uncertainty and often outperform the hyped models of "big-data analytics" (Katsikopoulos, 2020). 1 These strategies have been investigated and demonstrated to improve safety in other volatile, dynamic and uncertain situations, such as in health, finance and macroeconomics, and facilitate communication with end-users (Katsikopoulos, 2020; Katsikopoulos et al., 2022). Consider the recency heuristic that forecasts next week's influenza incidence to equal this week's incidence: this heuristic, often disparagingly called "naive", halved the error of the once big-data "poster child" Google Flu Trends in predicting the incidence of flu-related doctor visits (Katsikopoulos et al., 2022). ...
... Heuristics or rule-of-thumb strategies may be especially suited under conditions of complexity and uncertainty and often outperform the hyped models of "big-data analytics" (Katsikopoulos, 2020). 1 These strategies have been investigated and demonstrated to improve safety in other volatile, dynamic and uncertain situations, such as in health, finance and macroeconomics, and facilitate communication with end-users (Katsikopoulos, 2020; Katsikopoulos et al., 2022). Consider the recency heuristic that forecasts next week's influenza incidence to equal this week's incidence: this heuristic, often disparagingly called "naive", halved the error of the once big-data "poster child" Google Flu Trends in predicting the incidence of flu-related doctor visits (Katsikopoulos et al., 2022). The same heuristic also predicted future demand in volatile and disruptive markets better than complex macroeconomic models that try to fine-tune on the past (Dosi et al., 2020). ...
Article
Full-text available
We reflect on the development of digital twins of the Earth, which we associate with a reductionist view of nature as a machine. The projects of digital twins deviate from contemporary scientific paradigms in the treatment of complexity and uncertainty, and do not engage with critical and interpretative social sciences. We contest the utility of digital twins for addressing climate change issues and discuss societal risks associated with the concept, including the twins' potential to reinforce economicism and governance by numbers, emphasizing concerns about democratic accountability. We propose a more balanced alternative, advocating for independent institutions to develop diverse models, prioritize communication with simple heuristic‐based models, collect comprehensive data from various sources, including traditional knowledge, and shift focus away from physics‐centered variables to inform climate action. We argue that the advancement of digital twins should hinge on stringent controls, favoring a nuanced, interdisciplinary, and democratic approach that prioritizes societal well‐being over blind pursuit of computational sophistication. This article is categorized under: Climate Models and Modeling > Earth System Models Climate Models and Modeling > Knowledge Generation with Models Climate, History, Society, Culture > Disciplinary Perspectives
... As complex models are not always feasible or preferable to build [145], there may be some interest in reducing complexity when modeling CKD in T2DM, e.g., if sufficient data on clinical outcomes are not available early in a trial or if the model is designed to facilitate communication with wider audiences. One approach adopted in the literature to reduce complexity is to focus on the CKD outcomes associated with the largest clinical and economic burden, usually ESKD [8]. ...
... Modelers should not lose sight, however, of using these data to populate or develop simpler models, e.g., those focusing on the clinically and economically most relevant endpoints, or to inform the link between surrogate endpoints and clinical outcomes [161]. Simpler models may not capture as many details but may still be sufficient to model outcomes reliably (in some cases even more accurately than complex models [145]), may be more familiar to clinical and health technology assessment audiences, and may be easier to communicate and interpret. The present review provides a comprehensive and contemporary overview of modeling approaches and data sources that modelers can use to explore different modeling options and to inform the development of any future models of CKD in patients with T2DM. ...
Article
Full-text available
Introduction: As novel therapies for chronic kidney disease (CKD) in type 2 diabetes mellitus (T2DM) become available, their long-term benefits should be evaluated using CKD progression models. Existing models offer different modeling approaches that could be reused, but it may be challenging for modelers to assess commonalities and differences between the many available models. Additionally, the data and underlying population characteristics informing model parameters may not always be evident. Therefore, this study reviewed and summarized existing modeling approaches and data sources for CKD in T2DM, as a reference for future model development. Methods: This systematic literature review included computer simulation models of CKD in T2DM populations. Searches were implemented in PubMed (including MEDLINE), Embase, and the Cochrane Library, up to October 2021. Models were classified as cohort state-transition models (cSTM) or individual patient simulation (IPS) models. Information was extracted on modeled kidney disease states, risk equations for CKD, data sources, and baseline characteristics of derivation cohorts in primary data sources. Results: The review identified 49 models (21 IPS, 28 cSTM). A five-state structure was standard among state-transition models, comprising one kidney disease-free state, three kidney disease states [frequently including albuminuria and end-stage kidney disease (ESKD)], and one death state. Five models captured CKD regression and three included cardiovascular disease (CVD). Risk equations most commonly predicted albuminuria and ESKD incidence, while the most predicted CKD sequelae were mortality and CVD. Most data sources were well-established registries, cohort studies, and clinical trials often initiated decades ago in predominantly White populations in high-income countries. Some recent models were developed from country-specific data, particularly for Asian countries, or from clinical outcomes trials. Conclusion: Modeling CKD in T2DM is an active research area, with a trend towards IPS models developed from non-Western data and single data sources, primarily recent outcomes trials of novel renoprotective treatments.
... Overall, on a 1-week forecasting horizon, simple Euler forecasts can perform similarly to ensemble methods that are composed of a large number of more complex models. In agreement with [19], our results emphasize the importance of benchmarking complex forecasting models against simple forecasting baselines to further improve forecasting accuracy. Similar conclusions were drawn in a recent study [19] that compared Euler-like forecasts with those generated by Google Flu Trends. ...
... In agreement with [19], our results emphasize the importance of benchmarking complex forecasting models against simple forecasting baselines to further improve forecasting accuracy. Similar conclusions were drawn in a recent study [19] that compared Euler-like forecasts with those generated by Google Flu Trends. Our study also points towards recent findings on algorithm rejection and aversion [20] that found that "people have diminishing sensitivity to forecasting error" and that "people are less likely to use the best possible algorithm in decision domains that are more unpredictable". ...
Article
Full-text available
Background Forecasting new cases, hospitalizations, and disease-induced deaths is an important part of infectious disease surveillance and helps guide health officials in implementing effective countermeasures. For disease surveillance in the US, the Centers for Disease Control and Prevention (CDC) combine more than 65 individual forecasts of these numbers in an ensemble forecast at national and state levels. A similar initiative has been launched by the European CDC (ECDC) in the second half of 2021. Methods We collected data on CDC and ECDC ensemble forecasts of COVID-19 fatalities, and we compare them with easily interpretable “Euler” forecasts serving as a model-free benchmark that is only based on the local rate of change of the incidence curve. The term “Euler method” is motivated by the eponymous numerical integration scheme that calculates the value of a function at a future time step based on the current rate of change. Results Our results show that simple and easily interpretable “Euler” forecasts can compete favorably with both CDC and ECDC ensemble forecasts on short-term forecasting horizons of 1 week. However, ensemble forecasts better perform on longer forecasting horizons. Conclusions Using the current rate of change in incidences as estimates of future incidence changes is useful for epidemic forecasting on short time horizons. An advantage of the proposed method over other forecasting approaches is that it can be implemented with a very limited amount of work and without relying on additional data ( e.g. , data on human mobility and contact patterns) and high-performance computing systems.
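The "Euler" benchmark described above amounts to projecting the incidence curve forward with its current local rate of change. A bare-bones sketch is given below; the series, the one-week step, and the clipping at zero are my assumptions, and the regularization mentioned in the study is not shown.

```python
# Bare-bones "Euler" forecast (a sketch under stated assumptions, without the
# regularization used in the study): next value = current value plus the current
# local rate of change, scaled by the forecasting horizon.
from typing import Sequence


def euler_forecast(series: Sequence[float], horizon_weeks: int = 1) -> float:
    """Forecast horizon_weeks ahead from the last observed weekly rate of change."""
    rate_of_change = series[-1] - series[-2]  # local derivative with a one-week step
    return max(0.0, series[-1] + horizon_weeks * rate_of_change)


weekly_fatalities = [2300.0, 2450.0, 2600.0]  # hypothetical weekly COVID-19 fatalities
print(euler_forecast(weekly_fatalities, horizon_weeks=1))  # 2600 + 150 = 2750
```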
... Our results suggest that easily interpretable methods like the Euler method, a model-free local-derivative-based forecasting benchmark, provide an effective alternative to more complex epidemic forecasting frameworks on short-term forecasting horizons. Similar conclusions were drawn in a recent study [14] that compared Euler-like forecasts with those generated by Google Flu Trends. Regularized Euler forecasts have smaller errors with respect to CDC ensemble forecasts on one-week forecasting horizons in about 61% of all cases. ...
... In agreement with [14], our results emphasize the importance of benchmarking complex forecasting models against simple forecasting baselines to further improve forecasting accuracy. Our study also points towards recent findings on algorithm rejection and aversion [15] that found that "people have diminishing sensitivity to forecasting error" and that "people are less likely to use the best possible algorithm in decision domains that are more unpredictable". ...
Preprint
Forecasting new cases, hospitalizations, and disease-induced deaths is an important part of infectious disease surveillance and helps guide health officials in implementing effective countermeasures. For disease surveillance in the U.S., the Centers for Disease Control and Prevention (CDC) combine more than 65 individual forecasts of these numbers in an ensemble forecast at national and state levels. We collected data on CDC ensemble forecasts of COVID-19 fatalities in the United States, and compare them with easily interpretable "Euler" forecasts serving as a model-free benchmark that is only based on the local rate of change of the incidence curve. The term "Euler method" is motivated by the eponymous numerical integration scheme that calculates the value of a function at a future time step based on the current rate of change. Our results show that CDC ensemble forecasts are not more accurate than "Euler" forecasts on short-term forecasting horizons of one week. However, CDC ensemble forecasts show a better performance on longer forecasting horizons. Using the current rate of change in incidences as estimates of future incidence changes is useful for epidemic forecasting on short time horizons. An advantage of the proposed method over other forecasting approaches is that it can be implemented with a very limited amount of work and without relying on additional data (e.g., human mobility and contact patterns) and high-performance computing systems.
... In contrast, simple models are much easier for a human to understand, and are capable of matching, or even outperforming, complex models (Sherden, 1997). Katsikopoulos et al. (2022) advocate using simple heuristics as a baseline against which the performance of complex models should be evaluated, and argue that using simple rules can yield algorithms that are both accurate and understandable. They propose a simple "recency heuristic" ...
... A perfect prediction is rarely (if ever) required, let alone possible, and we may instead ask: how much better/worse is each model than the other candidates? A simple model can then be a useful baseline against which to evaluate other models, as Katsikopoulos et al. (2022) demonstrate here, and as used in the US CDC FluSight competition (Lutz et al., 2019). Decisions may also involve actions that modify the very process we are trying to predict, such as deciding whether to make facemasks compulsory to help reduce the spread of COVID-19. ...
... For example, models using fewer variables typically need less memory to be stored and fewer computational operations (time and memory) to make predictions. For instance, using the recency heuristic, which relies only on the latest data point to predict next week's flu-related doctor visits, is very resource efficient (123). Conversely, querying very large models with many components can be costly and slow (112). ...
Article
Full-text available
The preference for simple explanations, known as the parsimony principle, has long guided the development of scientific theories, hypotheses, and models. Yet recent years have seen a number of successes in employing highly complex models for scientific inquiry (e.g., for 3D protein folding or climate forecasting). In this paper, we reexamine the parsimony principle in light of these scientific and technological advancements. We review recent developments, including the surprising benefits of modeling with more parameters than data, the increasing appreciation of the context-sensitivity of data and misspecification of scientific models, and the development of new modeling tools. By integrating these insights, we reassess the utility of parsimony as a proxy for desirable model traits, such as predictive accuracy, interpretability, effectiveness in guiding new research, and resource efficiency. We conclude that more complex models are sometimes essential for scientific progress, and discuss the ways in which parsimony and complexity can play complementary roles in scientific modeling practice.
... For example, models using fewer variables typically need less memory to be stored and fewer computational operations (time and memory) to make predictions. For instance, using the recency heuristic, which relies only on the latest data point to predict next week's flu-related doctor visits, is very resource efficient (123). Conversely, querying very large models with many components can be costly and slow (112). ...
Preprint
Full-text available
Parsimony has long served as a criterion for selecting between scientific theories, hypotheses, and models. Yet recent years have seen an explosion of incredibly complex models, such as deep neural networks (e.g., for 3D protein folding) and multi-model ensembles (e.g., for climate forecasting). This perspective aims to re-examine the principle of model parsimony in light of the recent advances in science and technology. We review recent developments such as the discovery of double descent of prediction error, the increasing appreciation of the context-sensitivity of data and misspecification of scientific models, as well as the new types of models and modeling tools available to scientists. We integrate these results to reevaluate the utility of the parsimony principle as a proxy for desirable model traits, such as predictive accuracy, interpretability, utility in guiding future research, resource efficiency, and others. We highlight the need for a nuanced, context-dependent application of the parsimony principle, acknowledging situations where more complex models may be more appropriate or even necessary for achieving scientific goals.
... This research has also shown that heuristics, the simple rules that people use to make intuitive decisions, do not only lead to biases. Instead, under conditions of uncertainty, the intuitive heuristics naturally used by humans can outperform even the latest and most modern AI technologies that we have available (DeMiguel et al. 2009;Gigerenzer and Gaissmaier 2011;Gigerenzer and Goldstein 1996;Katsikopoulos et al. 2022;Wübben and Wangenheim 2008). The discovery of these so-called "less-is-more-effects" (that one can make better decisions with less information and complexity) is one of the most important findings in decision science in the last 30 years. ...
... Later, Gigerenzer and his colleagues asked whether the predictions of Google Flu Trends could beat the simple assumption that next week's infections in a given area will be the same as last week's, i.e., a simple no-change model (Katsikopoulos et al., 2022). They found that this simple "recency heuristic" beat the complex calculations and tiresome data demands of the Google Flu Trends app. ...
... GFT overestimated flu prevalence by over 50% in 2011-2012, which some researchers blamed on the increased media coverage and Google searches for "swine flu" and "bird flu" [1]. A recent study indicated that a simple heuristic model predicted flu incidence better than the GFT black-box algorithm [2]. However, Google Trends may still have potential to be an affordable, timely, robust, and sensitive surveillance system [3] with refinement of search terms, monitoring and updating of the algorithm, and use of additional data streams [1,4]. ...
Article
Full-text available
Google Trends data can be informative for zoonotic disease incidences, including Lyme disease. However, the use of Google Trends for predictive purposes is underutilized. In this study, we demonstrate the potential to use Google Trends for zoonotic disease prediction by predicting monthly state-level Lyme disease case counts in the United States. We requested Lyme disease data for the years 2010–2021. We downloaded Google Trends search data on terms for Lyme disease, symptoms of Lyme disease, and diseases with similar symptoms to Lyme disease. For each search term, we built an expanding window negative binomial model that adjusted for seasonal differences using a lag term. Performance was measured by Root Mean Squared Errors (RMSEs) and the visual associations between observed and predicted case counts. The highest performing model had excellent predictive ability in some states, but performance varied across states. The highest performing models were for Lyme disease search terms, which indicates the high specificity of search terms. We outline challenges of using Google Trends data, including data availability and a mismatch between geographic units. We discuss opportunities for Google Trends data for One Health research, including prediction of additional zoonotic diseases and incorporating environmental and companion animal data. Lastly, we recommend that Google Trends be explored as an option for predicting other zoonotic diseases and incorporate other data streams that may improve predictive performance.
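The modeling recipe in the abstract (an expanding-window negative binomial regression with a seasonal lag term) could look roughly like the sketch below. The synthetic data, the single search-term predictor, the 12-month lag, and the minimum window length are my placeholders, not the study's actual specification.

```python
# Rough sketch of an expanding-window negative binomial model with a seasonal lag
# term, in the spirit of the abstract above. Data, predictors, the 12-month lag,
# and window sizes are hypothetical placeholders, not the study's specification.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_months = 60
trends_score = rng.uniform(20, 80, size=n_months)   # monthly Google Trends interest
cases = rng.poisson(5 + 0.1 * trends_score)          # monthly Lyme disease case counts

lag = 12                                              # seasonal lag: same month last year
predictions = []
for t in range(lag + 24, n_months):                   # expanding training window
    train = np.arange(lag, t)
    X = sm.add_constant(np.column_stack([trends_score[train], cases[train - lag]]))
    fit = sm.GLM(cases[train], X, family=sm.families.NegativeBinomial()).fit()
    x_next = np.array([[1.0, trends_score[t], cases[t - lag]]])  # constant, trends, lag
    predictions.append(fit.predict(x_next)[0])

rmse = np.sqrt(np.mean((np.array(predictions) - cases[lag + 24:]) ** 2))
print(round(float(rmse), 2))
```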
... Among the ERP's key successful demonstrations is that, when cross-validation is used, simple heuristics such as the recognition heuristic or the "take-the-best" heuristic outperformed complicated, computationally slow and greedy models such as multiple regression favored by economists (e.g., Gigerenzer & Brighton, 2009; Gigerenzer & Gaissmaier, 2011; Gigerenzer et al., 1999; Katsikopoulos et al., 2010; Todd et al., 2012). More recent work also favorably compares the performance of heuristics in the wild to increasingly popular machine learning algorithms, including cases where "Big Data" is available (Katsikopoulos et al., 2021a, 2021b). ...
Article
Full-text available
Over the past decades psychological theories have made significant headway into economics, culminating in the 2002 (partially) and 2017 Nobel prizes awarded for work in the field of Behavioral Economics. Many of the insights imported from psychology into economics share a common trait: the presumption that decision makers use shortcuts that lead to deviations from rational behaviour (the Heuristics-and-Biases program). Many economists seem unaware that this viewpoint has long been contested in cognitive psychology. Proponents of an alternative program (the Ecological-Rationality program) argue that heuristics need not be irrational, particularly when judged relative to characteristics of the environment. We sketch out the historical context of the antagonism between these two research programs and then review more recent work in the Ecological-Rationality tradition. While the heuristics-and-biases program is now well-established in (mainstream neo-classical) economics via Behavioral Economics, we show there is considerable scope for the Ecological-Rationality program to interact with economics. In fact, we argue that there are many existing, yet overlooked, bridges between the two, based on independently derived research in economics that can be construed as being aligned with the tradition of the Ecological-Rationality program. We close the paper with a discussion of the open challenges and difficulties of integrating the Ecological Rationality program with economics.
... Among the ERP's key successful demonstrations is that, when cross-validation is used, simple heuristics such as the recognition heuristic or the "take-the-best" heuristic outperformed complicated, computationally slow and greedy models such as multiple regression ... where "Big Data" is available (Katsikopoulos et al., 2021a, 2021b). ...
Preprint
Full-text available
Over the past decades psychological theories have made significant headway into economics, culminating in the 2002 (partially) and 2017 Nobel prizes awarded for work in the field of Behavioral Economics. Many of the insights imported from psychology into economics share a common trait: the presumption that decision makers use shortcuts that lead to deviations from rational behaviour (the Heuristics-and-Biases program). Many economists seem unaware that this viewpoint has long been contested in cognitive psychology. Proponents of an alternative program (the Ecological-Rationality program) argue that heuristics need not be irrational, particularly when judged relative to characteristics of the environment. We sketch out the historical context of the antagonism between these two research programs and then review more recent work in the Ecological-Rationality tradition. While the heuristics-and-biases program is now well-established in (mainstream neo-classical) economics via Behavioral Economics, we show there is considerable scope for the Ecological-Rationality program to interact with economics. In fact, we argue that there are many existing, yet overlooked, bridges between the two, based on independently derived research in economics that can be construed as being aligned with the tradition of the Ecological-Rationality program. We close the chapter with a discussion of the open challenges and difficulties of integrating the Ecological Rationality program with economics.
... In essence, it involves an instant transaction with the customer. Retailing entails quick interaction with the consumer as well as the coordination of business activities ranging from the concept or design stage of a product or offering to its delivery and after-sales service to the customer [8]. The industry has contributed to the economic development of numerous nations and is unquestionably one of the most rapidly changing and dynamic activities in the world today. ...
... Note that recency heuristic in our current framework refers to the tendency to rely on recency information for innovativeness inference. It is differentiated from the recency heuristic in cognitive psychology or, more generally, information processing, i.e., the usage of recency information as a recall strategy in frequency estimation (Curt & Zechmeister, 1984) or as a forecasting model for predicting the future (Katsikopoulos et al., 2022), which accounts for one's non-evaluative cognitive judgements (i.e., frequency recall). The mechanism of the recency heuristic in non-evaluative contexts is that information entered in the latest position overrepresents the category and causes a bias whereby information entered most recently looks like it is happening more frequently, which is a non-evaluative heuristic. ...
Article
Full-text available
This research identifies recency heuristic utilized by consumers with limited prior knowledge for product innovativeness evaluation. Consumers with limited prior knowledge of a product category perceived a new product as more innovative when its release date was more recent, while consumers with prior knowledge remained uninfluenced by recency information (Study 1). The effect was replicated at the product level (Study 2). It further demonstrates two critical boundary conditions—when recency was either irrelevant information or a rationally legitimate evaluative tool, recency heuristic was inapplicable (Study 3). The present research draws attention to the role of recency in conceptualizing product innovativeness and further elaborates the understanding of how the construct of innovativeness is represented in consumers’ minds by focusing on the conceptual relationship of novelty and recency. It also contributes to the heuristic literature by proposing recency as an evaluative heuristic tool for innovativeness assessment. Results provide managers with practical insight into whether to highlight or downplay product release date information depending on their target audience and the level of product innovativeness. This article is protected by copyright. All rights reserved.
... Secondly, although the general trend seems to be toward the use of complex models to analyze large data sets (i.e., analytics and Big Data), there is actually evidence that using less data and simpler models sometimes results in better and more robust outcomes. As an example, by using a single data point (the infection rate from the previous week), Katsikopoulos et al. (2022) were able to outperform the predictions made by Google Flu Trends, which made use of approximately 160 different factors. Similar examples can be found in multiple fields including medical diagnostics, financial investment, and security (Gigerenzer, 2022). Third, from a logical point of view, one can argue that use of an aggregated DSM for clustering is warranted in situations where the design decision relates to a higher rather than lower level of analysis. ...
... This faith in complexity, however, can ruin foresight if the stable-world assumption is violated; that is, under circumstances of high uncertainty and fluctuating boundary conditions. In such cases simple algorithms, that is, rules of thumb or heuristics, can save the day (e.g., Katsikopoulos et al., 2020, 2022). Throughout the book, Gigerenzer shows persuasively that human intelligence can hold its own against the feats of AI. ...
Article
Artificial intelligence, due to being heavily researched and funded, reaches new peaks of performance by the hour. In his new book, Gigerenzer (2022) addresses the predominantly positive perspective on AI with an advocacy for the uniqueness of the human intellect. He outlines strengths of human intelligence and the failures as well as dangers of AI. While the book presents an enlightening case for human intelligence, the author misses out on exploring a more productive approach: The synthesis of human intelligence and AI. In the present review, I introduce strengths and weaknesses of both types of intelligence and focus on the potential of synthetic cooperation between them. I support my plea for cooperation with two recent research ventures, namely, the regulation of digital social media platforms and predicting the societal effects of emerging innovations.
... The two no-integration models are the same as the nonnormative integration model, but with w set to 0 (report data, ignore prior) or 1 (report prior, ignore data). The former of these two models implements a strategy that is sometimes referred to as the "recency heuristic" (e.g., Katsikopoulos et al., 2022). ...
Article
Full-text available
Studies in perception have found that humans often behave in accordance with Bayesian principles, while studies in higher-level cognition tend to find the opposite. A key methodological difference is that perceptual studies typically focus on whether people weight sensory cues according to their precision (determined by sensory noise levels), while studies with cognitive tasks concentrate on explicit inverse inference from likelihoods to posteriors. Here, we investigate if lay-people spontaneously engage in precision weighting in three cognitive inference tasks that require combining prior information with new data. We peel the layers of the “intuitive Bayesian” by categorizing participants into four categories: (a) No appreciation for the need to consider both prior and data; (b) Consideration of both prior and data; (c) Appreciation of the need to weight the prior and data according to their precision; (d) Ability to explicitly distinguish the inverse probabilities and perform inferences from description (rather than experience). The results suggest that with a lenient coding criterion, 58% of the participants appreciated the need to consider both the prior and data, 25% appreciated the need to weight them with their precision, but only 12% correctly solved the tasks that required understanding of inverse probabilities. Hence, while many participants weigh the data against priors, as in perceptual studies, they seem to have difficulty with “unpacking” symbols into their real-world extensions, like frequencies and sample sizes, and understanding inverse probability. Regardless of other task differences, people thus have larger difficulty with aspects of Bayesian performance typically probed in “cognitive studies.”
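As described in the excerpt above, the models compared in this study combine a prior and new data with a weight w, where w = 0 means reporting the data alone (akin to the recency heuristic) and w = 1 means reporting the prior alone. The sketch below uses my own notation and hypothetical numbers; the precision-weighted w is standard inverse-variance weighting, not necessarily the exact form used in the paper.

```python
# Sketch in my own notation: combine a prior estimate and new data with weight w on
# the prior. w = 0 reports the data only (the "recency heuristic" case), w = 1 reports
# the prior only, and a precision-weighted observer sets w from inverse variances.
def combine(prior: float, data: float, w: float) -> float:
    return w * prior + (1.0 - w) * data


def precision_weight(prior_sd: float, data_sd: float) -> float:
    prior_precision = 1.0 / prior_sd ** 2
    data_precision = 1.0 / data_sd ** 2
    return prior_precision / (prior_precision + data_precision)


prior_estimate, data_estimate = 40.0, 60.0  # hypothetical values
print(combine(prior_estimate, data_estimate, w=0.0))   # 60.0: ignore the prior
print(combine(prior_estimate, data_estimate, w=1.0))   # 40.0: ignore the data
w = precision_weight(prior_sd=5.0, data_sd=10.0)        # more precise prior -> larger w
print(round(combine(prior_estimate, data_estimate, w), 1))  # 44.0: pulled toward the prior
```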
... Again, the authors concluded that the more complicated models are not likely to produce more accurate forecasts than the simple, traditional models, at least in the case of annual tourist arrivals (Yu and Schwartz, 2006). Similar results are reported by a more recent study that compared a simple recency heuristic, a form of naïve forecasting, to the Google Flu Trends (GFT) model that uses Big Data analytics and a black-box algorithm (Katsikopoulos et al., 2022). The recency heuristic was found to outperform GFT in forecasting the proportion of flu-related doctor visits across the United States. ...
Article
We present a novel method for forecasting with limited information, that is for forecasting short time series. Our method is simple and intuitive; it relates to the most fundamental forecasting benchmark and is straightforward to implement. We present the technical details of the method and explain the nuances of how it works via two illustrative examples, with the use of employment‐related data. We find that our new method outperforms standard forecasting methods and thus offers considerable utility in applied management research. The implications of our findings suggest that forecasting short time series, of which one can find many examples in business and management, is viable and can be of considerable practical help for both research and practice – even when the information available to analysts and decision‐makers is limited.
... Other heuristics are verbalized in proverb-like ways: 'try not to walk alone' (respondent 27), 'look at the person behind the position' (respondent 24) and 'we don't send offers, we meet people face to face' (respondent 23). Katsikopoulos et al. (2021) posit that, aside from simplicity, a second reason why heuristics do well under uncertainty and in unstable environments is their transparency, which makes them easy to 'understand, memorize, teach, and execute' (Katsikopoulos et al., 2021, p. 150), for those applying the simple rule, for the ones the simple rule is shared with, and for the ones impacted by its enactment. ...
Article
Full-text available
Managerial heuristics play an important role in decision‐making and positively contribute to strategy, innovation, organizational learning, and even the survival of a firm. Little is known, though, about the process through which heuristics emerge. Following a grounded theory approach, we develop a process model of how managers create and develop heuristics from experience. The 4‐step model ‐ dissonancing, realizing, crystallizing, and organizing ‐ captures the sequence of cognitive schemata that start with a flawed assumption, give rise to heuristics that tend to be born in pairs, and end with mature and shared heuristics. With these findings, we contribute to the literature on heuristics by offering a model for the process of their emergence, a view on how feelings initiate, guide, and strengthen this process, and a description of the role played by the environment, enriching the ecological rationality perspective.
... One set of alternative viewpoints is grounded in the influential Gigerenzer research program on fast-and-frugal heuristics and adaptive toolboxes (e.g., Goldstein & Gigerenzer, 2009). Within this line of research, for instance, Katsikopoulos et al. (2021) found support for a forecasting heuristic for predicting flu-related doctor visits: "this week's proportion of flu-related doctor visits equals the proportion from the most recent week". Such a heuristic is semantically similar to the comparison class model in Section 2.2, with references to the past and time, and could likely be detected via a similar model. ...
Article
Geopolitical forecasting tournaments have stimulated the development of methods for improving probability judgments of real-world events. But these innovations have focused on easier-to-quantify variables, like personnel selection, training, teaming, and crowd aggregation—bypassing messier constructs, like qualitative properties of forecasters’ rationales. Here, we adapt methods from natural language processing (NLP) and computational text analysis to identify distinctive reasoning strategies in the rationales of top forecasters, including: (a) cognitive styles, such as dialectical complexity, that gauge tolerance of clashing perspectives and efforts to blend them into coherent conclusions and (b) the use of comparison classes or base rates to inform forecasts. In addition to these core metrics, we explore metrics derived from the Linguistic Inquiry and Word Count (LIWC) program. Applying these tools to multiple tournaments and to forecasters of widely varying skill (from Mechanical Turkers to carefully culled “superforecasters”) revealed that: (a) top forecasters show higher dialectical complexity in their rationales and use more comparison classes; (b) experimental interventions, like training and teaming, that boost accuracy also influence NLP profiles of rationales, nudging them in a “superforecaster” direction.
Article
Managerial heuristics – simple methods for solving problems – are critical for key functions, such as deciding, strategizing, and organizing. Yet, research on managerial heuristics has been siloed into divergent streams, creating polarization among empirical findings and sparking numerous calls for integration. The goal of this review is to integrate different understandings of the construct, different processes examined by extant research, and divergent perspectives on heuristics’ performance into a coherent conceptual framework. We systematically reviewed 54 articles focusing on two complementary processes: the creation and the use of managerial heuristics. We discovered that research which describes the performance of heuristics as suboptimal focuses on the study of innate heuristics which are used reflexively; meanwhile, research which frames heuristics positively focuses on the study of learned heuristics which are used deliberately. We, thus, propose that the two perspectives on managerial heuristics are not contradictory but complementary. Based on this novel differentiation, we, first, aggregate the inputs and outcomes of creating and of using managerial heuristics into an integrative framework built around the manager's cognitive effort; second, we propose managerial heuristics as storage devices for managerial experience, time, cognitive effort and information about the environment; and third, we discuss implications for future research.
Article
Purpose The purpose of this paper is to develop a typology of heuristics in business relationships. We distinguish between four categories: (1) general heuristics used in the context of a business relationship but that may also (and are often) used in other contexts; (2) relational context heuristics that are typically used in a relational context; (3) relational information heuristics that rely on relational information and (4) genuine relational heuristics that use relational information and are applied in relational contexts. Design/methodology/approach We draw on existing literature on heuristics and business relationships to inform our conceptual paper. Findings We apply this typology and discuss specific heuristics that fall under the different categories of our typology. These include word-of-mouth, tit-for-tat, imitation, friendliness, recognition and trust. Research limitations/implications We contribute to the heuristics literature by providing a novel typology of heuristics in business relationships. Emphasizing the interdependence between heuristics and business relationships, we identify genuine relational heuristics that capture the bidirectional relationships between business relationships and heuristics. Second, we contribute to the business relationships literature by providing a conceptual framework for understanding the types of heuristics managers use in business relationships and by discussing examples of specific heuristics and how they are applied in relational contexts. Practical implications We contribute to practice by providing a simple framework for making sense out of the “universe” of heuristics for business relationships. Originality/value Our paper provides a novel typology for understanding heuristics in business relationships.
Article
The emphasis on artificial intelligence (AI) is rapidly increasing across many diverse aspects of society. This manuscript discusses some of the key topics related to the expansion of AI. These include a comparison of the unique cognitive capabilities of human intelligence with AI, and the potential risks of using AI in clinical medicine. The general public's attitudes towards AI are also discussed, including patient perspectives. As the promotion of AI in high-risk situations such as clinical medicine expands, the limitations, risks and benefits of AI need to be better understood.
Article
How do firms set prices when faced with an uncertain market? We study the pricing strategies of car dealers for used cars using online data and interviews. We find that 97% of 628 dealers employ an aspiration-level heuristic similar to a Dutch auction. Dealers adapt the parameters of the heuristic—initial price, duration, and change in price—to their local market conditions, such as number of competitors, population density, and GDP per capita. At the same time, the aggregate market is described by a model of equilibrium price dispersion. Unlike the equilibrium model, the heuristic correctly predicts systematic pricing characteristics such as high initial price, price stickiness, and the “cheap twin paradox.” We also find first evidence that heuristic pricing can generate higher profits given uncertainty than the equilibrium strategy.
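The aspiration-level heuristic described in this abstract has three parameters: an initial price, a duration for which the price is held, and a price reduction applied if the car has not sold. A hedged sketch follows; the parameter values and the weekly time grid are my own illustrative choices, not the dealers' actual numbers.

```python
# Hedged sketch of the aspiration-level (Dutch-auction-like) pricing heuristic
# described in the abstract: start high, hold the price for a fixed duration,
# then cut it by a fixed step if the car is still unsold. Values are illustrative.
from dataclasses import dataclass


@dataclass
class AspirationPricing:
    initial_price: float   # opening aspiration level
    duration_weeks: int    # weeks to hold the price before lowering it
    price_step: float      # size of each price reduction

    def price_at_week(self, week: int) -> float:
        """Sticky price path: constant within each holding period, then stepped down."""
        reductions = week // self.duration_weeks
        return max(0.0, self.initial_price - reductions * self.price_step)


dealer = AspirationPricing(initial_price=20000.0, duration_weeks=4, price_step=500.0)
print([dealer.price_at_week(w) for w in (0, 3, 4, 8)])  # [20000.0, 20000.0, 19500.0, 19000.0]
```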
Article
Purpose Are there smart ways to find heuristics? What are the common principles behind heuristics? We propose an integrative definition of heuristics, based on insights that apply to all heuristics, and put forward meta-heuristics for discovering heuristics. Design/methodology/approach We employ Herbert Simon’s metaphor that human behavior is shaped by the scissors of the mind and its environment. We present heuristics from different domains and multiple sources, including scholarly literature, practitioner-reports and ancient texts. Findings Heuristics are simple, actionable principles for behavior that can take different forms, including that of computational algorithms and qualitative rules-of-thumb, cast into proverbs or folk-wisdom. We introduce heuristics for tasks ranging from management to writing and warfare. We report 13 meta-heuristics for discovering new heuristics and identify four principles behind them and all other heuristics: Those principles concern the (1) plurality, (2) correspondence, (3) connectedness of heuristics and environments and (4) the interdisciplinary nature of the scissors’ blades with respect to research fields and methodology. Originality/value We take a fresh look at Simon’s scissors-metaphor and employ it to derive an integrative perspective that includes a study of meta-heuristics.
Chapter
People often confuse intuition with a sixth sense or the arbitrary judgments of inept decision makers. In this book, Gerd Gigerenzer analyzes the war on intuition in the social sciences beginning with gendered perceptions of intuition as female, followed by opposition between biased intuition and logical rationality, popularized in two-system theories. Technological paternalism amplifies these views, arguing that human intuition should be replaced by perfect algorithms. In opposition to these beliefs, this book proposes that intuition is a form of unconscious intelligence based on years of experience that evolved to deal with uncertain and dynamic situations where logic and big data algorithms are of little benefit. Gigerenzer introduces the scientific study of intuition and shows that intuition is not irrational caprice but is instead based on smart heuristics. Researchers, students, and general readers with an interest in decision making, heuristics and biases, cognitive psychology, and behavioral public policy will benefit.
Article
Psychological artificial intelligence (AI) applies insights from psychology to design computer algorithms. Its core domain is decision-making under uncertainty, that is, ill-defined situations that can change in unexpected ways, rather than well-defined, stable problems such as chess and Go. Psychological theories about heuristic processes under uncertainty can provide possible insights. I provide two illustrations. The first shows how recency, the human tendency to rely on the most recent information and ignore base rates, can be built into a simple algorithm that predicts the flu substantially better than did Google Flu Trends's big-data algorithms. The second uses a result from memory research, the paradoxical effect that making numbers less precise increases recall, in the design of algorithms that predict recidivism. These case studies provide an existence proof that psychological AI can help design efficient and transparent algorithms.
Article
Full-text available
Big data analytics employs algorithms to uncover people's preferences and values, and support their decision making. A central assumption of big data analytics is that it can explain and predict human behavior. We investigate this assumption, aiming to enhance the knowledge basis for developing algorithmic standards in big data analytics. First, we argue that big data analytics is by design atheoretical and does not provide process‐based explanations of human behavior; thus, it is unfit to support deliberation that is transparent and explainable. Second, we review evidence from interdisciplinary decision science, showing that the accuracy of complex algorithms used in big data analytics for predicting human behavior is not consistently higher than that of simple rules of thumb. Rather, it is lower in situations such as predicting election outcomes, criminal profiling, and granting bail. Big data algorithms can be considered as candidate models for explaining, predicting, and supporting human decision making when they match, in transparency and accuracy, simple, process‐based, domain‐grounded theories of human behavior. Big data analytics can be inspired by behavioral and cognitive theory.
Article
Full-text available
Big data analytics employs statistical and machine learning algorithms to uncover people’s preferences and values, and support their decision making in areas such as business, politics, and law. A central assumption of producers and consumers of big data analytics is that it can indeed explain and predict human behavior. We investigate this assumption, aiming to enhance the knowledge basis for developing algorithmic standards in big data analytics. We make two contributions. First, we argue that big data analytics is by design atheoretical and does not provide process-based explanations of human behavior; thus making it unfit to support deliberation that is transparent and explainable to experts and laypeople. Second, we review evidence, from interdisciplinary decision science, showing that the accuracy of complex algorithms used in big data analytics is not consistently higher than that of simple rules of thumb. Rather, it has been found to be lower in situations such as predicting election outcomes, criminal profiling, and granting bail. Big data algorithms can be considered as candidate models for explaining, predicting, and supporting human decision making when they match, in transparency and accuracy, simple, process-based, domain-grounded theories of human behavior. Big data analytics can be inspired by behavioral and cognitive theory.
Article
Full-text available
Firefighters, emergency paramedics, and airplane pilots are able to make correct judgments and choices in challenging situations of scarce information and time pressure. Experts often attribute such successes to intuition and report that they avoid analysis. Similarly, laypeople can effortlessly perform tasks that confuse machine algorithms. OR should ideally respect human intuition while supporting and improving it with analytical modelling. We utilise research on intuitive decision making from psychology to build a model of mixing intuition and analysis over a set of interrelated tasks, where the choice of intuition or analysis in one task affects the choice in other tasks. In this model, people may use any analytical method, such as multi-attribute utility, or a single-cue heuristic, such as availability or recognition. The article makes two contributions. First, we study the model and derive a necessary and sufficient condition for the optimality of using a positive proportion of intuition (i.e., for some tasks): Intuition is more frequently accurate than analysis to a larger extent than analysis is more frequently accurate than guessing. Second, we apply the model to synthetic data and also natural data from a forecasting competition for a Wimbledon tennis tournament and a King's Fund study on how patients choose a London hospital: The optimal proportion of intuition is estimated to range from 25% to 53%. The accuracy benefit of using the optimal mix over analysis alone is estimated between 3% and 27%. Such improvements would be impactful over large numbers of choices as in public health.
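The optimality condition stated verbally in this abstract can be written compactly. The symbols below are mine, not the article's notation, and should be read as one possible formalization.

```latex
% One possible formalization (symbols mine): let p_I, p_A, p_G denote how frequently
% intuition, analysis, and guessing are accurate on a task. The stated condition for
% the optimality of using a positive proportion of intuition then reads
\[
  p_I - p_A \;>\; p_A - p_G ,
\]
% i.e., intuition beats analysis by a larger margin than analysis beats guessing.
```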
Article
While intervention policies such as social distancing rules, lockdowns, and curfews may save lives during a pandemic, they impose substantial direct and indirect costs on societies. In this paper, we provide a mathematical model to assist governmental policymakers in managing the lost lives during a pandemic through controlling intervention levels. Our model is non-convex in decision variables, and we develop two heuristics to obtain fast and high-quality solutions. Our results indicate that when anticipated economic consequences are higher, healthcare overcapacity will emerge. When the projected economic costs of the pandemic are large and the illness severity is low, however, a no-intervention strategy may be preferable. As the severity of the infection rises, the cost of intervention climbs accordingly. The death toll also increases with the severity of both the economic consequences of interventions and the infection rate of the disease. Our models suggest earlier mitigation strategies that typically start before the saturation of the healthcare system when disease severity is high.
Article
We structure this response to the commentaries to our article “Transparent modeling of influenza incidence: Big data or a single data point from psychological theory?” around the concept of psychological AI, Herbert Simon’s classic idea of using insights from how people make decisions to make computers smart. The recency heuristic in Katsikopoulos, Şimşek, Buckmann, and Gigerenzer (2021) is one example of psychological AI. Here we develop another: the trend-recency heuristic. While the recency heuristic predicts that the next observation will equal the most recent observation, the trend-recency heuristic predicts that the next trend will equal the most recent trend. We compare the performance of these two recency heuristics with forecasting models that use trend damping for predicting flu incidence. Psychological AI prioritizes ecological rationality and transparency, and we provide a roadmap of how to study such issues. We also discuss how this transparency differs from explainable AI and how ecological rationality focuses on the comparative empirical study and theoretical analysis of different types of models.
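A minimal sketch of the two heuristics compared in this response, assuming weekly flu incidence is available as a plain list of proportions (the sample numbers are illustrative, not taken from the paper):

```python
# Minimal sketch of the recency and trend-recency heuristics described above.

def recency_forecast(series):
    """Predict that the next value equals the most recent observation."""
    return series[-1]

def trend_recency_forecast(series):
    """Predict that the next change equals the most recent change,
    i.e. extrapolate the last observed trend one step ahead."""
    return series[-1] + (series[-1] - series[-2])

weekly_ili = [1.8, 2.1, 2.6, 3.4]  # proportion of flu-related doctor visits (%)
print(recency_forecast(weekly_ili))        # 3.4
print(trend_recency_forecast(weekly_ili))  # 4.2  (3.4 + 0.8)
```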
Article
Full-text available
Intelligence evolved to cope with situations of uncertainty generated by nature, predators, and the behavior of conspecifics. To this end, humans and other animals acquired special abilities, including heuristics that allow for swift action in the face of scarce information. In this article, I introduce the concept of embodied heuristics, that is, innate or learned rules of thumb that exploit evolved sensory and motor abilities in order to facilitate superior decisions. I provide a case study of the gaze heuristic, which solves coordination problems from intercepting prey to catching a fly ball. Various species have adapted this heuristic to their specific sensorimotor abilities, such as vision, echolocation, running, and flying. Humans have enlisted it for solving tasks beyond its original purpose, a process akin to exaptation. The gaze heuristic also made its way into rocket technology. I propose a systematic study of embodied heuristics as a research framework for situated cognition and embodied bounded rationality.
Article
Full-text available
We analyze the individual and macroeconomic impacts of heterogeneous expectations and action rules within an agent‐based model populated by heterogeneous, interacting firms. Agents have to cope with a complex evolving economy characterized by deep uncertainty resulting from technical change, imperfect information, coordination hurdles, and structural breaks. In these circumstances, we find that neither individual nor macroeconomic dynamics improve when agents replace myopic expectations with less naïve learning rules. Our results suggest that fast and frugal robust heuristics may not be a second‐best option but rather “rational” responses in complex and changing macroeconomic environments. (JEL C63, D8, E32, E6, O4)
Article
Full-text available
Background: Infectious disease forecasting aims to predict characteristics of both seasonal epidemics and future pandemics. Accurate and timely infectious disease forecasts could aid public health responses by informing key preparation and mitigation efforts. Main body: For forecasts to be fully integrated into public health decision-making, federal, state, and local officials must understand how forecasts were made, how to interpret forecasts, and how well the forecasts have performed in the past. Since the 2013-14 influenza season, the Influenza Division at the Centers for Disease Control and Prevention (CDC) has hosted collaborative challenges to forecast the timing, intensity, and short-term trajectory of influenza-like illness in the United States. Additional efforts to advance forecasting science have included influenza initiatives focused on state-level and hospitalization forecasts, as well as other infectious diseases. Using CDC influenza forecasting challenges as an example, this paper provides an overview of infectious disease forecasting; applications of forecasting to public health; and current work to develop best practices for forecast methodology, applications, and communication. Conclusions: These efforts, along with other infectious disease forecasting initiatives, can foster the continued advancement of forecasting science.
Article
Full-text available
Estimation of influenza-like illness (ILI) using search trends activity was intended to supplement traditional surveillance systems, and was a motivation behind the development of Google Flu Trends (GFT). However, several studies have previously reported large errors in GFT estimates of ILI in the US. Following recent release of time-stamped surveillance data, which better reflects real-time operational scenarios, we reanalyzed GFT errors. Using three data sources—GFT: an archive of weekly ILI estimates from Google Flu Trends; ILIf: fully-observed ILI rates from ILINet; and, ILIp: ILI rates available in real-time based on partial reporting—five influenza seasons were analyzed and mean square errors (MSE) of GFT and ILIp as estimates of ILIf were computed. To correct GFT errors, a random forest regression model was built with ILI and GFT rates from the previous three weeks as predictors. An overall reduction in error of 44% was observed and the errors of the corrected GFT are lower than those of ILIp. An 80% reduction in error during 2012/13, when GFT had large errors, shows that extreme failures of GFT could have been avoided. Using autoregressive integrated moving average (ARIMA) models, one- to four-week ahead forecasts were generated with two separate data streams: ILIp alone, and with both ILIp and corrected GFT. At all forecast targets and seasons, and for all but two regions, inclusion of GFT lowered MSE. Results from two alternative error measures, mean absolute error and mean absolute proportional error, were largely consistent with results from MSE. Taken together these findings provide an error profile of GFT in the US, establish strong evidence for the adoption of search trends based 'nowcasts' in influenza forecast systems, and encourage reevaluation of the utility of this data source in diverse domains.
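The error-correction step described here can be sketched as follows. The data are synthetic stand-ins for ILINet and archived GFT series, and the feature set mirrors the description (ILI and GFT rates from the previous three weeks plus the current GFT estimate) without claiming to reproduce the study's exact setup.

```python
# Sketch of the GFT error-correction idea: predict fully observed ILI from the
# current GFT estimate plus ILI and GFT rates of the previous three weeks.
# Synthetic data stand in for ILINet and archived GFT; not the study's data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
weeks = 200
ili = 2 + np.sin(np.linspace(0, 12 * np.pi, weeks)) + rng.normal(0, 0.1, weeks)
gft = ili + rng.normal(0.5, 0.4, weeks)           # biased, noisy proxy of ILI

X, y = [], []
for t in range(3, weeks):
    X.append(np.concatenate([ili[t-3:t], gft[t-3:t], [gft[t]]]))
    y.append(ili[t])
X, y = np.array(X), np.array(y)

split = 150                                       # train on earlier weeks only
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X[:split], y[:split])
corrected = model.predict(X[split:])

mse_raw = np.mean((gft[3 + split:] - y[split:]) ** 2)
mse_corrected = np.mean((corrected - y[split:]) ** 2)
print(f"MSE of raw GFT: {mse_raw:.3f}, MSE of corrected GFT: {mse_corrected:.3f}")
```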
Article
Full-text available
The M4 Competition follows on from the three previous M competitions, the purpose of which was to learn from empirical evidence both how to improve the forecasting accuracy and how such learning could be used to advance the theory and practice of forecasting. The aim of M4 was to replicate and extend the three previous competitions by: (a) significantly increasing the number of series, (b) expanding the number of forecasting methods, and (c) including prediction intervals in the evaluation process as well as point forecasts. This paper covers all aspects of M4 in detail, including its organization and running, the presentation of its results, the top-performing methods overall and by categories, its major findings and their implications, and the computational requirements of the various methods. Finally, it summarizes its main conclusions and states the expectation that its series will become a testing ground for the evaluation of new methods and the improvement of the practice of forecasting, while also suggesting some ways forward for the field.
Article
Full-text available
This article introduces this JBR Special Issue on simple versus complex methods in forecasting. Simplicity in forecasting requires that (1) method, (2) representation of cumulative knowledge, (3) relationships in models, and (4) relationships among models, forecasts, and decisions are all sufficiently uncomplicated as to be easily understood by decision-makers. Our review of studies comparing simple and complex methods - including those in this special issue - found 97 comparisons in 32 papers. None of the papers provide a balance of evidence that complexity improves forecast accuracy. Complexity increases forecast error by 27 percent on average in the 25 papers with quantitative comparisons. The finding is consistent with prior research to identify valid forecasting methods: all 22 previously identified evidence-based forecasting procedures are simple. Nevertheless, complexity remains popular among researchers, forecasters, and clients. Some evidence suggests that the popularity of complexity may be due to incentives: (1) researchers are rewarded for publishing in highly ranked journals, which favor complexity; (2) forecasters can use complex methods to provide forecasts that support decision-makers’ plans; and (3) forecasters’ clients may be reassured by incomprehensibility. Clients who prefer accuracy should accept forecasts only from simple evidence-based procedures. They can rate the simplicity of forecasters’ procedures using the questionnaire at simple-forecasting.com.
Article
Full-text available
Distinguishing between risk and uncertainty, this paper draws on the psychological literature on heuristics to consider whether and when simpler approaches may outperform more complex methods for modelling and regulating the financial system. We find that: (i) simple methods can sometimes dominate more complex modelling approaches for calculating banks’ capital requirements, especially if limited data are available for estimating models or the underlying risks are characterised by fat-tailed distributions; (ii) simple indicators often outperformed more complex metrics in predicting individual bank failure during the global financial crisis; and (iii) when combining information from different indicators to predict bank failure, ‘fast-and-frugal’ decision trees can perform comparably to standard, but more information-intensive, regression techniques, while being simpler and easier to communicate.
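A fast-and-frugal decision tree of the kind referred to in point (iii) can be sketched as below; the indicators and thresholds are hypothetical placeholders, not the paper's calibrated values. Each node inspects one indicator and may classify a bank as vulnerable immediately, so the tree uses little information and is easy to communicate.

```python
# A fast-and-frugal tree for flagging vulnerable banks (illustrative only:
# indicator names and thresholds are hypothetical, not calibrated values).

def bank_vulnerable(leverage_ratio, wholesale_funding_ratio, loan_to_deposit):
    if leverage_ratio < 0.04:           # thin capital buffer -> flag immediately
        return True
    if wholesale_funding_ratio > 0.35:  # heavy reliance on flighty funding -> flag
        return True
    return loan_to_deposit > 1.4        # final node decides the remaining cases

print(bank_vulnerable(0.03, 0.10, 0.9))   # True  (fails the capital check)
print(bank_vulnerable(0.06, 0.20, 1.1))   # False (passes every check)
```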
Article
Full-text available
The goal of influenza-like illness (ILI) surveillance is to determine the timing, location and magnitude of outbreaks by monitoring the frequency and progression of clinical case incidence. Advances in computational and information technology have allowed for automated collection of higher volumes of electronic data and more timely analyses than previously possible. Novel surveillance systems, including those based on internet search query data like Google Flu Trends (GFT), are being used as surrogates for clinically-based reporting of influenza-like-illness (ILI). We investigated the reliability of GFT during the last decade (2003 to 2013), and compared weekly public health surveillance with search query data to characterize the timing and intensity of seasonal and pandemic influenza at the national (United States), regional (Mid-Atlantic) and local (New York City) levels. We identified substantial flaws in the original and updated GFT models at all three geographic scales, including completely missing the first wave of the 2009 influenza A/H1N1 pandemic, and greatly overestimating the intensity of the A/H3N2 epidemic during the 2012/2013 season. These results were obtained for both the original (2008) and the updated (2009) GFT algorithms. The performance of both models was problematic, perhaps because of changes in internet search behavior and differences in the seasonality, geographical heterogeneity and age-distribution of the epidemics between the periods of GFT model-fitting and prospective use. We conclude that GFT data may not provide reliable surveillance for seasonal or pandemic influenza and should be interpreted with caution until the algorithm can be improved and evaluated. Current internet search query data are no substitute for timely local clinical and laboratory surveillance, or national surveillance based on local data collection. New generation surveillance systems such as GFT should incorporate the use of near-real time electronic health data and computational methods for continued model-fitting and ongoing evaluation and improvement.
Article
Full-text available
In this study, the authors used 111 time series to examine the accuracy of various forecasting methods, particularly time-series methods. The study shows, at least for time series, why some methods achieve greater accuracy than others for different types of data. The authors offer some explanation of the seemingly conflicting conclusions of past empirical research on the accuracy of forecasting. One novel contribution of the paper is the development of regression equations expressing accuracy as a function of factors such as randomness, seasonality, trend-cycle and the number of data points describing the series. Surprisingly, the study shows that for these 111 series simpler methods perform well in comparison to the more complex and statistically sophisticated ARMA models.
Article
Full-text available
A review of the literature indicates that linear models are frequently used in situations in which decisions are made on the basis of multiple codable inputs. These models are sometimes used (a) normatively to aid the decision maker, (b) as a contrast with the decision maker in the clinical vs statistical controversy, (c) to represent the decision maker "paramorphically" and (d) to "bootstrap" the decision maker by replacing him with his representation. Examination of the contexts in which linear models have been successfully employed indicates that the contexts have the following structural characteristics in common: each input variable has a conditionally monotone relationship with the output; there is error of measurement; and deviations from optimal weighting do not make much practical difference. These characteristics ensure the success of linear models, which are so appropriate in such contexts that random linear models (i.e., models whose weights are randomly chosen except for sign) may perform quite well. 4 examples involving the prediction of such codable output variables as GPA and psychiatric diagnosis are analyzed in detail. In all 4 examples, random linear models yield predictions that are superior to those of human judges.
Article
Full-text available
In categorization tasks where resources such as time, information, and computation are limited, there is pressure to be accurate, and the stakes are high (as when deciding whether a patient is at high risk of a disease or whether a worker should undergo retraining), it has been proposed that people use, or should use, simple adaptive heuristics. We introduce a family of deterministic, noncompensatory heuristics, called fast and frugal trees, and study them formally. We show that the heuristics require few resources and are also relatively accurate. First, we characterize fast and frugal trees mathematically as lexicographic heuristics and as noncompensatory linear models, and also show that they exploit cumulative dominance (the results are interpreted in the language of the paired comparison literature). Second, we show, by computer simulation, that the predictive accuracy of fast and frugal trees compares well with that of logistic regression (proposed as a descriptive model for categorization tasks performed by professionals) and of classification and regression trees (used, outside psychology, as prescriptive models).
Article
Full-text available
Feedforward neural networks trained by error backpropagation are examples of nonparametric regression estimators. We present a tutorial on nonparametric inference and its relation to neural networks, and we use the statistical viewpoint to highlight strengths and weaknesses of neural models. We illustrate the main points with some recognition experiments involving artificial data as well as handwritten numerals. By way of conclusion, we suggest that current-generation feedforward neural networks are largely inadequate for difficult problems in machine perception and machine learning, regardless of parallel-versus-serial hardware or other implementation issues. Furthermore, we suggest that the fundamental challenges in neural modeling are about representation rather than learning per se. This last point is supported by additional experiments with handwritten numerals.
Article
Full-text available
We study the effectiveness of simple heuristics in multiattribute decision making. We consider the case of an additive separable utility function with nonnegative, nonincreasing attribute weights. In this case, cumulative dominance ensures that the so-called cumulative dominance compliant heuristics will choose a best alternative. For the case of binary attribute values and under two probabilistic models of the decision environment generalizing a simple Bernoulli model, we obtain the probabilities of simple and cumulative dominance. In contrast with the probability of simple dominance, the probability of cumulative dominance is shown to be large in many cases, explaining the effectiveness of cumulative dominance compliant heuristics in those cases. Additionally, for the subclass of the so-called fully cumulative dominance compliant heuristics, we obtain an upper bound for the expected loss that only depends on the weights being nonnegative and nonincreasing. The low values of the upper bound for cases in which the probability of cumulative dominance is not large provide an additional explanation for the effectiveness of fully cumulative dominance compliant heuristics. Examples of cumulative dominance compliant heuristics and fully cumulative dominance compliant heuristics are discussed, including the deterministic elimination by aspects (DEBA) heuristic that motivated our work.
Article
Full-text available
The unemployment levels disclosed by the US government and the significant association between job-search variables and official unemployment data are discussed. Two kinds of Internet resources are available to job seekers: corporate job-posting sites and employment agencies. Access to either kind of employment resource requires job seekers first to locate its job site, which is commonly done using search engines. The Internet is credited with overcoming information bottlenecks in key areas of the labor market, affecting how worker-firm matches are made and how local markets shape demand. Another avenue for future research on job searches would be to obtain data from large job site portals, such as monster.com, rather than metasearch engines.
Article
Full-text available
Google Flu Trends (GFT) uses anonymized, aggregated internet search activity to provide near-real time estimates of influenza activity. GFT estimates have shown a strong correlation with official influenza surveillance data. The 2009 influenza virus A (H1N1) pandemic [pH1N1] provided the first opportunity to evaluate GFT during a non-seasonal influenza outbreak. In September 2009, an updated United States GFT model was developed using data from the beginning of pH1N1. We evaluated the accuracy of each U.S. GFT model by comparing weekly estimates of ILI (influenza-like illness) activity with the U.S. Outpatient Influenza-like Illness Surveillance Network (ILINet). For each GFT model we calculated the correlation and RMSE (root mean square error) between model estimates and ILINet for four time periods: pre-H1N1, Summer H1N1, Winter H1N1, and H1N1 overall (Mar 2009-Dec 2009). We also compared the number of queries, query volume, and types of queries (e.g., influenza symptoms, influenza complications) in each model. Both models' estimates were highly correlated with ILINet pre-H1N1 and over the entire surveillance period, although the original model underestimated the magnitude of ILI activity during pH1N1. The updated model was more correlated with ILINet than the original model during Summer H1N1 (r = 0.95 and 0.29, respectively). The updated model included more search query terms than the original model, with more queries directly related to influenza infection, whereas the original model contained more queries related to influenza complications. Internet search behavior changed during pH1N1, particularly in the categories "influenza complications" and "term for influenza." The complications associated with pH1N1, the fact that pH1N1 began in the summer rather than winter, and changes in health-seeking behavior each may have played a part. Both GFT models performed well prior to and during pH1N1, although the updated model performed better during pH1N1, especially during the summer months.
Article
Full-text available
Seasonal influenza epidemics are a major public health concern, causing tens of millions of respiratory illnesses and 250,000 to 500,000 deaths worldwide each year. In addition to seasonal influenza, a new strain of influenza virus against which no previous immunity exists and that demonstrates human-to-human transmission could result in a pandemic with millions of fatalities. Early detection of disease activity, when followed by a rapid response, can reduce the impact of both seasonal and pandemic influenza. One way to improve early detection is to monitor health-seeking behaviour in the form of queries to online search engines, which are submitted by millions of users around the world each day. Here we present a method of analysing large numbers of Google search queries to track influenza-like illness in a population. Because the relative frequency of certain queries is highly correlated with the percentage of physician visits in which a patient presents with influenza-like symptoms, we can accurately estimate the current level of weekly influenza activity in each region of the United States, with a reporting lag of about one day. This approach may make it possible to use search queries to detect influenza epidemics in areas with a large population of web search users.
Article
Full-text available
Much research has highlighted incoherent implications of judgmental heuristics, yet other findings have demonstrated high correspondence between predictions and outcomes. At the same time, judgment has been well modeled in the form of "as if" linear models. Accepting the probabilistic nature of the environment, the authors use statistical tools to model how the performance of heuristic rules varies as a function of environmental characteristics. They further characterize the human use of linear models by exploring effects of different levels of cognitive ability. They illustrate with both theoretical analyses and simulations. Results are linked to the empirical literature by a meta-analysis of lens model studies. Using the same tasks, the authors estimate the performance of both heuristics and humans where the latter are assumed to use linear models. Their results emphasize that judgmental accuracy depends on matching characteristics of rules and environments and highlight the trade-off between using linear models and heuristics. Whereas the former can be cognitively demanding, the latter are simple to implement. However, heuristics require knowledge to indicate when they should be used.
Article
Full-text available
This article provides an overview of recent results on lexicographic, linear, and Bayesian models for paired comparison from a cognitive psychology perspective. Within each class, we distinguish subclasses according to the computational complexity required for parameter setting. We identify the optimal model in each class, where optimality is defined with respect to performance when fitting known data. Although not optimal when fitting data, simple models can be astonishingly accurate when generalizing to new data. A simple heuristic belonging to the class of lexicographic models is Take The Best (Gigerenzer & Goldstein (1996) Psychol. Rev. 102: 684). It is more robust than other lexicographic strategies which use complex procedures to establish a cue hierarchy. In fact, it is robust due to its simplicity, not despite it. Similarly, Take The Best looks up only a fraction of the information that linear and Bayesian models require; yet it achieves performance comparable to that of models which integrate information. Due to its simplicity, frugality, and accuracy, Take The Best is a plausible candidate for a psychological model in the tradition of bounded rationality. We review empirical evidence showing the descriptive validity of fast and frugal heuristics.
Article
Epidemic forecasting has a dubious track-record, and its failures became more prominent with COVID-19. Poor data input, wrong modeling assumptions, high sensitivity of estimates, lack of incorporation of epidemiological features, poor past evidence on effects of available interventions, lack of transparency, errors, lack of determinacy, looking at only one or a few dimensions of the problem at hand, lack of expertise in crucial disciplines, groupthink and bandwagon effects and selective reporting are some of the causes of these failures. Nevertheless, epidemic forecasting is unlikely to be abandoned. Some (but not all) of these problems can be fixed. Careful modeling of predictive distributions rather than focusing on point estimates, considering multiple dimensions of impact, and continuously reappraising models based on their validated performance may help. If extreme values are considered, extremes should be considered for the consequences of multiple dimensions of impact so as to continuously calibrate predictive insights and decision-making. When major decisions (e.g. draconian lockdowns) are based on forecasts, the harms (in terms of health, economy, and society at large) and the asymmetry of risks need to be approached in a holistic fashion, considering the totality of the evidence.
Article
Following the highly restrictive measures adopted by many countries for combating the current pandemic, the number of individuals infected by SARS-CoV-2 and the associated number of deaths steadily decreased. This fact, together with the impossibility of maintaining the lockdown indefinitely, raises the crucial question of whether it is possible to design an exit strategy based on quantitative analysis. Guided by rigorous mathematical results, we show that this is indeed possible: we present a robust numerical algorithm which can compute the cumulative number of deaths that will occur as a result of increasing the number of contacts by a given multiple, using as input only the most reliable of all data available during the lockdown, namely the cumulative number of deaths.
Article
We study three heuristics for paired comparisons based on binary cues, which are all naïve in that they ignore possible dependencies between cues, but take different approaches: linear (tallying) and lexicographic (Take The Best, Minimalist). There is empirical evidence on the heuristics' descriptive adequacy and some first results on their accuracy. We present new analytical results on their relative accuracy. When cues are independent given the values of the objects on the criterion, there exists a linear decision rule, equivalent to naïve Bayes, which is optimal; we use this result to characterize the optimality of Take The Best and tallying. Also, tallying and Take The Best are more accurate than Minimalist. When cues are dependent and the number of cues and objects is psychologically plausible, Take The Best tends to be more accurate than tallying, but it is also possible that tallying, and Minimalist, are more accurate than Take The Best.
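A compact sketch of the three heuristics as they are usually defined for binary cues; the cue profiles and validity order below are illustrative, and ties are resolved by guessing.

```python
# Sketches of three paired-comparison heuristics for binary (1/0) cues.
# Cue profiles and the validity order are illustrative.
import random

def take_the_best(cues_a, cues_b, validity_order):
    """Go through cues from most to least valid; decide on the first cue
    on which the two objects differ."""
    for i in validity_order:
        if cues_a[i] != cues_b[i]:
            return "A" if cues_a[i] > cues_b[i] else "B"
    return random.choice(["A", "B"])    # no cue discriminates: guess

def tallying(cues_a, cues_b):
    """Count positive cues for each object; the higher tally wins."""
    diff = sum(cues_a) - sum(cues_b)
    if diff == 0:
        return random.choice(["A", "B"])
    return "A" if diff > 0 else "B"

def minimalist(cues_a, cues_b):
    """Look up cues in random order; decide on the first that discriminates."""
    order = list(range(len(cues_a)))
    random.shuffle(order)
    return take_the_best(cues_a, cues_b, order)

a, b = (1, 0, 1), (0, 1, 1)
print(take_the_best(a, b, validity_order=[0, 1, 2]))  # "A": cue 0 decides
print(tallying(a, b))                                 # tie -> random guess
print(minimalist(a, b))                               # depends on random cue order
```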
Article
Recently, academics have shown interest and enthusiasm in the development and implementation of stochastic customer base analysis models, such as the Pareto/NBD model and the BG/NBD model. Using the information these models provide, customer managers should be able to (1) distinguish active customers from inactive customers, (2) generate transaction forecasts for individual customers and determine future best customers, and (3) predict the purchase volume of the entire customer base. However, there is also a growing frustration among academics insofar as these models have not found their way into wide managerial application. To present arguments in favor of or against the use of these models in practice, the authors compare the quality of these models when applied to managerial decision making with the simple heuristics that firms typically use. The authors find that the simple heuristics perform at least as well as the stochastic models with regard to all managerially relevant areas, except for predictions regarding future purchases at the overall customer base level. The authors conclude that in their current state, stochastic customer base analysis models should be implemented in managerial practice with much care. Furthermore, they identify areas for improvement to make these models managerially more useful.
Article
Many decisions can be analyzed and supported by quantitative models. These models tend to be complex psychologically in that they require the elicitation and combination of quantities such as probabilities, utilities, and weights. They may be simplified so that they become more transparent, and lead to increased trust, reflection, and insight. These potential benefits of simplicity should be weighed against its potential costs, notably possible decreases in performance. We review and synthesize research that has used mathematical analyses and computer simulations to investigate if and when simple models perform worse, equal, or better than more complex models. Various research strands have pursued this, but have not reached the same conclusions: Work on frequently repeated decisions as in inference and forecasting—which typically are operational and involve one or a few decision makers—has put forth conditions under which simple models are more accurate than more complex ones, and some researchers have proposed that simple models should be preferred. On the other hand, work on more or less one-off decisions as in preference and multi-criteria analysis—which typically are strategic and involve group decision making and multiple stakeholders—has concluded that simple models can at best approximate satisfactorily the more complex models. We show how these conclusions can be reconciled. Additionally, we discuss the theory available for explaining the relative performance of simple and more complex models. Finally, we present an aid to help determine if a simple model should be used, or not, for a particular type of decision problem.
Article
Traditionally, forecasters focus on developing algorithms to identify optimal models and sets of parameters, optimal in the sense of within-sample fitting. However, this quest strongly assumes that optimally set parameters will also give the best extrapolations. The problem becomes even more pertinent when we consider the vast volumes of data to be forecast in the big data era. In this paper, we ask whether this obsession with optimality always bears fruit, or whether we spend too much time and effort in its pursuit. Could we be better off targeting faster and more robust systems that aim for suboptimal forecasting solutions which, in turn, would not jeopardise the efficiency of the systems under use? This study sheds light on these questions by means of an empirical investigation. We show the trade-off between optimal and suboptimal solutions in terms of forecasting performance versus computational cost. Finally, we discuss the implications of suboptimality and attempt to quantify the monetary savings that result from suboptimal solutions.
Article
Several attempts to understand the success of simple decision heuristics have examined heuristics as an approximation to a linear decision rule. This research has identified three environmental structures that aid heuristics: dominance, cumulative dominance, and noncompensatoriness. This paper develops these ideas further and examines their empirical relevance in 51 natural environments. The results show that all three structures are prevalent, making it possible for simple rules to reach, and occasionally exceed, the accuracy of the linear decision rule, using less information and less computation.
Article
The residuals from a least squares regression equation are hardly any smaller than those from many other possible lines.
Article
Large errors in flu prediction were largely avoidable, which offers lessons for the use of big data.
Article
Does the manner in which results are presented in empirical studies affect perceptions of the predictability of the outcomes? Noting the predominant role of linear regression analysis in empirical economics, we asked 257 academic economists to make probabilistic inferences given different presentations of the outputs of this statistical tool. Questions concerned the distribution of the dependent variable conditional on known values of the independent variable. Answers based on the presentation mode that is standard in the literature led to an illusion of predictability; outcomes were perceived to be more predictable than could be justified by the model. In particular, many respondents failed to take the error term into account. Adding graphs did not improve inferences. Paradoxically, when only graphs were provided (i.e., no regression statistics), respondents were more accurate. The implications of our study suggest, inter alia, the need to reconsider how to present empirical results and the possible provision of easy-to-use simulation tools that would enable readers of empirical papers to make accurate inferences.
Article
Proper linear models are those in which predictor variables are given weights such that the resulting linear composite optimally predicts some criterion of interest; examples of proper linear models are standard regression analysis, discriminant function analysis, and ridge regression analysis. Research summarized in P. Meehl's (1954) book on clinical vs statistical prediction and research stimulated in part by that book indicate that when a numerical criterion variable (e.g., graduate GPA) is to be predicted from numerical predictor variables, proper linear models outperform clinical intuition. Improper linear models are those in which the weights of the predictor variables are obtained by some nonoptimal method. The present article presents evidence that even such improper linear models are superior to clinical intuition when predicting a numerical criterion from numerical predictors. In fact, unit (i.e., equal) weighting is quite robust for making such predictions. The application of unit weights to decide what bullet the Denver Police Department should use is described; some technical, psychological, and ethical resistances to using linear models in making social decisions are considered; and arguments that could weaken these resistances are presented.
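A small illustration of an improper (unit-weight) linear model next to an ordinary least-squares fit on synthetic data; the predictors, weights, and sample size are arbitrary choices for this sketch.

```python
# Improper linear model: standardize each predictor and sum with equal (unit)
# weights, then compare with an OLS fit. Synthetic data; weights are arbitrary.
import numpy as np

rng = np.random.default_rng(1)
n = 100
X = rng.normal(size=(n, 3))                       # three numerical predictors
y = X @ np.array([0.9, 0.5, 0.2]) + rng.normal(0, 1.0, n)

z = (X - X.mean(axis=0)) / X.std(axis=0)          # standardize predictors
unit_score = z.sum(axis=1)                        # unit weights (all +1 here)

design = np.column_stack([np.ones(n), X])
beta = np.linalg.lstsq(design, y, rcond=None)[0]  # proper (OLS) weights
ols_score = design @ beta

print(np.corrcoef(unit_score, y)[0, 1])  # in-sample correlation of unit weights
print(np.corrcoef(ols_score, y)[0, 1])   # slightly higher in-sample; the
                                         # robustness argument concerns new samples
```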
Article
The past 25 years has seen phenomenal growth of interest in judgemental approaches to forecasting and a significant change of attitude on the part of researchers to the role of judgement. While previously judgement was thought to be the enemy of accuracy, today judgement is recognised as an indispensable component of forecasting and much research attention has been directed at understanding and improving its use. Human judgement can be demonstrated to provide a significant benefit to forecasting accuracy but it can also be subject to many biases. Much of the research has been directed at understanding and managing these strengths and weaknesses. An indication of the explosion of research interest in this area can be gauged by the fact that over 200 studies are referenced in this review.
Article
When can a single variable be more accurate in binary choice than multiple sources of information? We derive analytically the probability that a single variable (SV) will correctly predict one of two choices when both criterion and predictor are continuous variables. We further provide analogous derivations for multiple regression (MR) and equal weighting (EW) and specify the conditions under which the models differ in expected predictive ability. Key factors include variability in cue validities, intercorrelation between predictors, and the ratio of predictors to observations in MR. Theory and simulations are used to illustrate the differential effects of these factors. Results directly address why and when “one-reason” decision making can be more effective than analyses that use more information. We thus provide analytical backing to intriguing empirical results that, to date, have lacked theoretical justification. There are predictable conditions for which one should expect “less to be more.”
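In the same spirit as the analysis described here, the toy simulation below compares out-of-sample paired-comparison accuracy of a single variable, equal weighting, and multiple regression; the data-generating process, cue validities, and sample sizes are arbitrary and serve only to illustrate the comparison.

```python
# Toy simulation: out-of-sample paired-comparison accuracy of a single variable
# (SV), equal weighting (EW), and multiple regression (MR). The environment
# (cue validities, noise, training size) is arbitrary, for illustration only.
import numpy as np

rng = np.random.default_rng(2)
k, n_train, n_pairs = 3, 20, 2000
weights = np.array([0.7, 0.4, 0.1])               # unequal cue validities

def sample(n):
    X = rng.normal(size=(n, k))
    return X, X @ weights + rng.normal(0, 1.0, n)

X_train, y_train = sample(n_train)
design = np.column_stack([np.ones(n_train), X_train])
beta = np.linalg.lstsq(design, y_train, rcond=None)[0]

def score(model, x):
    if model == "SV":
        return x[0]                               # most valid cue only
    if model == "EW":
        return x.sum()                            # equal weights
    return beta[0] + x @ beta[1:]                 # fitted regression

hits = {m: 0 for m in ("SV", "EW", "MR")}
for _ in range(n_pairs):
    X_pair, y_pair = sample(2)
    for m in hits:
        if (score(m, X_pair[0]) > score(m, X_pair[1])) == (y_pair[0] > y_pair[1]):
            hits[m] += 1
print({m: round(h / n_pairs, 3) for m, h in hits.items()})
```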
Article
We discuss and compare measures of accuracy of univariate time series forecasts. The methods used in the M-competition as well as the M3-competition, and many of the measures recommended by previous authors on this topic, are found to be degenerate in commonly occurring situations. Instead, we propose that the mean absolute scaled error become the standard measure for comparing forecast accuracy across multiple time series.
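The proposed measure can be written in a few lines: out-of-sample absolute errors are scaled by the in-sample mean absolute error of the one-step naive (random walk) forecast. The toy numbers below are illustrative.

```python
# Mean absolute scaled error (MASE); toy numbers are illustrative.
import numpy as np

def mase(train, actual, forecast, m=1):
    """MASE with seasonal period m (m=1 gives the non-seasonal naive scaling)."""
    train = np.asarray(train, dtype=float)
    scale = np.mean(np.abs(train[m:] - train[:-m]))   # in-sample naive MAE
    errors = np.abs(np.asarray(actual, dtype=float) - np.asarray(forecast, dtype=float))
    return np.mean(errors) / scale

train = [10, 12, 11, 13, 14, 13]
actual = [15, 16]
forecast = [14, 14]        # forecasts from whichever method is being evaluated
print(mase(train, actual, forecast))   # 1.5 / 1.4 ≈ 1.07
```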
Article
This paper describes the M3-Competition, the latest of the M-Competitions. It explains the reasons for conducting the competition and summarizes its results and conclusions. In addition, the paper compares such results/conclusions with those of the previous two M-Competitions as well as with those of other major empirical studies. Finally, the implications of these results and conclusions are considered, their consequences for both the theory and practice of forecasting are explored and directions for future research are contemplated.
Article
Simple statistical forecasting rules, which are usually simplifications of classical models, have been shown to make better predictions than more complex rules, especially when the future values of a criterion are highly uncertain. In this article, we provide evidence that some of the fast and frugal heuristics that people use intuitively are able to make forecasts that are as good as or better than those of knowledge-intensive procedures. We draw from research on the adaptive toolbox and ecological rationality to demonstrate the power of using intuitive heuristics for forecasting in various domains including sport, business, and crime.
Article
The outcomes of matches in the 2005 Wimbledon Gentlemen's tennis competition were predicted by mere player name recognition. In a field study, amateur tennis players (n = 79) and laypeople (n = 105) indicated players' names they recognized, and predicted match outcomes. Predictions based on recognition rankings aggregated over all participants correctly predicted 70% of all matches. These recognition predictions were equal to or better than predictions based on official ATP rankings and the seedings of Wimbledon experts, while online betting odds led to more accurate forecasts. When applicable, individual amateurs and laypeople made accurate predictions by relying on individual name recognition. However, for cases in which individuals did not recognize either of the two players, their average prediction accuracy across all matches was low. The study shows that simple heuristics that rely on a few valid cues can lead to highly accurate forecasts.
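A sketch of prediction by name recognition as described here, simplified to a single participant's recognition set; the names and the fallback to guessing are illustrative, whereas the study aggregated recognition rankings across participants.

```python
# Recognition-based match prediction: if exactly one player is recognized,
# predict that player; otherwise guess. Names and recognition set are made up.
import random

recognized = {"Federer", "Roddick", "Hewitt"}     # hypothetical recognition set

def predict_winner(player_a, player_b):
    a_known, b_known = player_a in recognized, player_b in recognized
    if a_known and not b_known:
        return player_a
    if b_known and not a_known:
        return player_b
    return random.choice([player_a, player_b])    # both or neither recognized

print(predict_winner("Federer", "Unknown Qualifier"))  # "Federer"
```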
Article
Laypeople as well as professionals such as business managers and medical doctors often use psychological heuristics. Psychological heuristics are models for making inferences that (1) rely heavily on core human capacities (such as recognition, recall, or imitation); (2) do not necessarily use all available information and process the information they use by simple computations (such as lexicographic rules or aspiration levels); and (3) are easy to understand, apply, and explain. Psychological heuristics are a simple alternative to optimization models (where the optimum of a mathematical function that incorporates all available information is computed). I review studies in business, medicine, and psychology where computer simulations and mathematical analyses reveal conditions under which heuristics make better inferences than optimization and vice versa. The conditions involve concepts that refer to (i) the structure of the problem, (ii) the resources of the decision maker, or (iii) the properties of the models. I discuss open problems in the theoretical study of the concepts. Finally, I organize the current results tentatively in a tree for helping decision analysts decide whether to suggest heuristics or optimization to decision makers. I conclude by arguing for a multimethod, multidisciplinary approach to the theory and practice of inference and decision making.
Article
Availability of human memories for specific items shows reliable relationships to frequency, recency, and pattern of prior exposures to the item. These relationships have defied a systematic theoretical treatment. A number of environmental sources (New York Times, parental speech, electronic mail) are examined to show that the probability that a memory will be needed also shows reliable relationships to frequency, recency, and pattern of prior exposures. Moreover, the environmental relationships are the same as the memory relationships. It is argued that human memory has the form it does because it is adapted to these environmental relationships. Models for both the environment and human memory are described. Among the memory phenomena addressed are the practice function, the retention function, the effect of spacing of practice, and the relationship between degree of practice and retention.
Article
Recent work has demonstrated that Web search volume can "predict the present," meaning that it can be used to accurately track outcomes such as unemployment levels, auto and home sales, and disease prevalence in near real time. Here we show that what consumers are searching for online can also predict their collective future behavior days or even weeks in advance. Specifically we use search query volume to forecast the opening weekend box-office revenue for feature films, first-month sales of video games, and the rank of songs on the Billboard Hot 100 chart, finding in all cases that search counts are highly predictive of future outcomes. We also find that search counts generally boost the performance of baseline models fit on other publicly available data, where the boost varies from modest to dramatic, depending on the application in question. Finally, we reexamine previous work on tracking flu trends and show that, perhaps surprisingly, the utility of search data relative to a simple autoregressive model is modest. We conclude that in the absence of other data sources, or where small improvements in predictive performance are material, search queries provide a useful guide to the near future.
Article
How do doctors make sound decisions when confronted with probabilistic data, time pressures and a heavy workload? One theory that has been embraced by many researchers is based on optimisation, which emphasises the need to integrate all information in order to arrive at sound decisions. This notion makes heuristics, which use less than complete information, appear as second-best strategies. In this article, we challenge this pessimistic view of heuristics. We introduce two medical problems that involve decision making to the reader: one concerns coronary care issues and the other macrolide prescriptions. In both settings, decision-making tools grounded in the principles of optimisation and heuristics, respectively, have been developed to assist doctors in making decisions. We explain the structure of each of these tools and compare their performance in terms of their facilitation of correct predictions. For decisions concerning both the coronary care unit and the prescribing of macrolides, we demonstrate that sacrificing information does not necessarily imply a forfeiting of predictive accuracy, but can sometimes even lead to better decisions. Subsequently, we discuss common misconceptions about heuristics and explain when and why ignoring parts of the available information can lead to the making of more robust predictions. Heuristics are neither good nor bad per se, but, if applied in situations to which they have been adapted, can be helpful companions for doctors and doctors-in-training. This, however, requires that heuristics in medicine be openly discussed, criticised, refined and then taught to doctors-in-training rather than being simply dismissed as harmful or irrelevant. A more uniform use of explicit and accepted heuristics has the potential to reduce variations in diagnoses and to improve medical care for patients.
Article
A trial of a decision-support tool to modify utilization of the coronary care unit (CCU) failed because utilization improved after explanation of the tool but before its actual employment in the trial. We investigated this unexpected phenomenon in light of an emerging theory of decision-making under uncertainty. A prospective trial of the decision-support intervention was performed on the Family Practice service at a 100-bed rural hospital. Cards with probability charts from the acute ischemic Heart Disease Predictive Instrument (HDPI) were distributed to residents on the service and withdrawn on alternate weeks. Residents were encouraged to consult the probability charts when making CCU placement decisions. The study decision was between placement in the CCU and in a monitored nursing bed. Analyses included all patients admitted during the intervention trial year for suspected acute cardiac ischemia (n = 89), plus patients admitted in two pretrial periods (n = 108 and 50) and one posttrial period (n = 45). In the intervention trial, HDPI use did not affect CCU utilization (odds ratio 1.046, P > .5). However, following the description of the instrument at a departmental clinical conference, CCU use markedly declined at least 6 months before the intervention trial (odds ratio 0.165, P < .001). Simply by learning about the instrument, residents achieved sensitivity and specificity equal to the instrument's optimum, whether or not they actually used it. Physicians introduced to a decision-support tool achieved optimal CCU utilization without actually performing probability estimations. This may have resulted from improved focus on relevant clinical factors identified by the tool. Teaching simple decision-making strategies might effectively reduce unnecessary CCU utilization.
Recency: Prediction with smart data
  • F Artinger
  • N Kozodoi
  • F Von Wangenheim
  • G Gigerenzer
When Google got flu wrong: US outbreak foxes a leading web-based method for tracking seasonal flu
  • Butler
Google disease trends: an update
  • Copeland
Learning from small samples: An analysis of simple decision heuristics
  • Şimşek