Article

Psychological AI: Designing Algorithms Informed by Human Psychology

Abstract

Psychological artificial intelligence (AI) applies insights from psychology to the design of computer algorithms. Its core domain is decision-making under uncertainty, that is, ill-defined situations that can change in unexpected ways, rather than well-defined, stable problems such as chess and Go. Psychological theories about heuristic processes under uncertainty can provide possible insights. I provide two illustrations. The first shows how recency (the human tendency to rely on the most recent information and ignore base rates) can be built into a simple algorithm that predicts the flu substantially better than Google Flu Trends' big-data algorithms did. The second uses a result from memory research (the paradoxical effect that making numbers less precise increases recall) in the design of algorithms that predict recidivism. These case studies provide an existence proof that psychological AI can help design efficient and transparent algorithms.
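To make the first illustration concrete, here is a minimal Python sketch of the recency heuristic, the rule stated later in this list: this week's proportion of flu-related doctor visits is predicted to equal the proportion from the most recent week. The function name and the weekly figures are illustrative assumptions, not taken from the article.

```python
# Minimal sketch of the recency heuristic for flu prediction (illustrative,
# not the author's exact code): predict that next week's proportion of
# flu-related doctor visits equals the most recent observed proportion.

def recency_forecast(history):
    """Return a one-week-ahead forecast from a series of weekly proportions."""
    if not history:
        raise ValueError("need at least one observation")
    return history[-1]  # ignore everything except the most recent data point

# Hypothetical weekly ILI proportions
weekly_ili = [1.8, 2.1, 2.6, 3.4]
print(recency_forecast(weekly_ili))  # -> 3.4
```

Everything except the latest observation, including base rates and historical seasonality, is deliberately ignored.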

References
Article
Deep neural networks (DNNs) have had extraordinary successes in classifying photographic images of objects and are often described as the best models of biological vision. This conclusion is largely based on three sets of findings: (1) DNNs are more accurate than any other model in classifying images taken from various datasets, (2) DNNs do the best job in predicting the pattern of human errors in classifying objects taken from various behavioral datasets, and (3) DNNs do the best job in predicting brain signals in response to images taken from various brain datasets (e.g., single cell responses or fMRI data). However, these behavioral and brain datasets do not test hypotheses regarding what features are contributing to good predictions and we show that the predictions may be mediated by DNNs that share little overlap with biological vision. More problematically, we show that DNNs account for almost no results from psychological research. This contradicts the common claim that DNNs are good, let alone the best, models of human object recognition. We argue that theorists interested in developing biologically plausible models of human vision need to direct their attention to explaining psychological findings. More generally, theorists need to build models that explain the results of experiments that manipulate independent variables designed to test hypotheses rather than compete on making the best predictions. We conclude by briefly summarizing various promising modelling approaches that focus on psychological data.
Preprint
Near-term probabilistic forecasts for infectious diseases such as COVID-19 and influenza play an important role in public health communication and policymaking. From 2013 to 2019, the FluSight challenge run by the Centers for Disease Control and Prevention invited researchers to develop and submit forecasts using influenza-like illness (ILI) as a measure of influenza burden. Here we examine how several statistical models and an autoregressive neural network model perform for forecasting ILI during the COVID-19 pandemic, when historical patterns of ILI were highly disrupted. We find that the autoregressive neural network model, which forecasted ILI well pre-COVID, still performs well for some locations and forecast horizons, but its performance is highly variable and poor in many cases. We also find that a simple exponential smoothing statistical model is in the top half of the ranked models we evaluated nearly 75% of the time. Our results suggest that even simple statistical models may perform as well as or better than more complex machine learning models for forecasting ILI during the COVID-19 pandemic. We also created an ensemble model from the limited set of time series forecast models developed here. This limited ensemble model was rarely the best or the worst performing model compared with the rest of the models assessed, confirming previous observations from other infectious disease forecasting efforts about the less variable and generally favorable performance of ensemble forecasts. Our results support previous findings that no single modeling approach outperforms all other models across all locations, time points, and forecast horizons, and that ensemble forecasting consortia such as the COVID-19 Forecast Hub and FluSight continue to serve valuable roles in collecting, aggregating, and ensembling forecasts that use fundamentally disparate modeling strategies.
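For readers unfamiliar with the baseline, here is a minimal sketch of simple exponential smoothing, the kind of basic statistical model the preprint found competitive. The smoothing constant and data are illustrative assumptions, not values from the study.

```python
# Sketch of simple exponential smoothing: the forecast is a running blend
# of the newest observation and the previous smoothed level.

def exponential_smoothing(series, alpha=0.5):
    """Return the smoothed level after the last observation (one-step forecast)."""
    level = series[0]
    for y in series[1:]:
        level = alpha * y + (1 - alpha) * level  # blend new data with old level
    return level

weekly_ili = [1.8, 2.1, 2.6, 3.4]         # hypothetical ILI values
print(exponential_smoothing(weekly_ili))  # forecast for the next week
```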
Article
The black-box nature of current artificial intelligence (AI) has caused some to question whether AI must be explainable to be used in high-stakes scenarios such as medicine. It has been argued that explainable AI will engender trust with the health-care workforce, provide transparency into the AI decision making process, and potentially mitigate various kinds of bias. In this Viewpoint, we argue that this argument represents a false hope for explainable AI and that current explainability methods are unlikely to achieve these goals for patient-level decision support. We provide an overview of current explainability techniques and highlight how various failure cases can cause problems for decision making for individual patients. In the absence of suitable explainability methods, we advocate for rigorous internal and external validation of AI models as a more direct means of achieving the goals often associated with explainability, and we caution against having explainability be a requirement for clinically deployed models.
Article
Distinguishing between risk and uncertainty, this article draws on the psychological literature on heuristics to consider whether and when simpler approaches may outperform more complex methods for modeling and regulating the financial system. We find that: simple methods can sometimes dominate more complex modeling approaches for calculating banks’ capital requirements, especially when data are limited or underlying risks are fat-tailed; simple indicators often outperformed more complex metrics in predicting individual bank failure during the global financial crisis; when combining different indicators to predict bank failure, simple and easy-to-communicate “fast-and-frugal” decision trees can perform comparably to standard, but more information-intensive, regressions. Taken together, our analyses suggest that because financial systems are better characterized by uncertainty than by risk, simpler approaches to modeling and regulating financial systems can usefully complement more complex ones and ultimately contribute to a safer financial system.
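A fast-and-frugal decision tree of the kind described can be sketched as a short chain of one-cue checks with early exits. The cues, thresholds, and exit structure below are hypothetical illustrations, not the indicators fitted in the article.

```python
# Hedged sketch of a fast-and-frugal tree: cues are checked one at a time,
# and each cue can trigger an immediate exit. All numbers are hypothetical.

def bank_vulnerable(leverage_ratio, wholesale_funding_share, asset_growth):
    """Classify a bank by sequentially checking one cue at a time."""
    if leverage_ratio < 0.04:          # thin capital cushion: exit immediately
        return True
    if wholesale_funding_share > 0.5:  # fragile funding structure: exit
        return True
    return asset_growth > 0.25         # final cue decides

print(bank_vulnerable(0.06, 0.3, 0.1))  # -> False under these numbers
```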
Article
Mitigating the effects of disease outbreaks with timely and effective interventions requires accurate real-time surveillance and forecasting of disease activity, but traditional health care–based surveillance systems are limited by inherent reporting delays. Machine learning methods have the potential to fill this temporal “data gap,” but work to date in this area has focused on relatively simple methods and coarse geographic resolutions (state level and above). We evaluate the predictive performance of a gated recurrent unit neural network approach in comparison with baseline machine learning methods for estimating influenza activity in the United States at the state and city levels and experiment with the inclusion of real-time Internet search data. We find that the neural network approach improves upon baseline models for long time horizons of prediction but is not improved by real-time internet search data. We conduct a thorough analysis of feature importances in all considered models for interpretability purposes.
Article
Judges, doctors and managers are among those decision makers who must often choose a course of action under limited time, with limited knowledge and without the aid of a computer. Because data‐driven methods typically outperform unaided judgements, resource‐constrained practitioners can benefit from simple, statistically derived rules that can be applied mentally. In this work, we formalize long‐standing observations about the efficacy of improper linear models to construct accurate yet easily applied rules. To test the performance of this approach, we conduct a large‐scale evaluation in 22 domains and focus in detail on one: judicial decisions to release or detain defendants while they await trial. In these domains, we find that simple rules rival the accuracy of complex prediction models that base decisions on considerably more information. Further, comparing with unaided judicial decisions, we find that simple rules substantially outperform the human experts. To conclude, we present an analytical framework that sheds light on why simple rules perform as well as they do.
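A hedged sketch of an easily applied rule in the spirit of the improper linear models the paper formalizes: selected binary features are tallied with fixed unit weights and compared against a threshold. The pretrial features, signs, and threshold are hypothetical, not the paper's fitted rule.

```python
# Unit-weighted tallying rule: each selected feature counts +1 or -1
# according to its known direction, and the decision is a threshold
# on the tally. Features and threshold are hypothetical.

def tally_score(features, signs):
    """Sum binary features, each weighted only by its known direction (+1/-1)."""
    return sum(sign * value for sign, value in zip(signs, features))

# Hypothetical pretrial example: [failed to appear before, more than three
# prior arrests, currently employed]; employment is assumed to favor release.
signs = [+1, +1, -1]
defendant = [1, 0, 1]
detain = tally_score(defendant, signs) >= 1
print(detain)  # -> False under these illustrative numbers
```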
Article
We analyze the individual and macroeconomic impacts of heterogeneous expectations and action rules within an agent‐based model populated by heterogeneous, interacting firms. Agents have to cope with a complex evolving economy characterized by deep uncertainty resulting from technical change, imperfect information, coordination hurdles, and structural breaks. In these circumstances, we find that neither individual nor macroeconomic dynamics improve when agents replace myopic expectations with less naïve learning rules. Our results suggest that fast and frugal robust heuristics may not be a second‐best option but rather “rational” responses in complex and changing macroeconomic environments. (JEL C63, D8, E32, E6, O4)
Article
How predictable are life trajectories? We investigated this question with a scientific mass collaboration using the common task method; 160 teams built predictive models for six life outcomes using data from the Fragile Families and Child Wellbeing Study, a high-quality birth cohort study. Despite using a rich dataset and applying machine-learning methods optimized for prediction, the best predictions were not very accurate and were only slightly better than those from a simple benchmark model. Within each outcome, prediction error was strongly associated with the family being predicted and weakly associated with the technique used to generate the prediction. Overall, these results suggest practical limits to the predictability of life outcomes in some settings and illustrate the value of mass collaborations in the social sciences.
Article
Due to the advanced features of recent smartphones and context awareness in mobile technologies, users' diverse behavioral activities with their phones and the associated contexts are recorded through device logs. Behavioral patterns of smartphone users may vary greatly between individuals in different contexts, for example, temporal, spatial, or social contexts. However, an individual's phone usage behavior is not static in the real world; it changes over time, and the volatility of usage behavior also varies from user to user. Thus, an individual's recent behavioral patterns and the corresponding machine learning rules are more likely to be interesting and significant than older ones for modeling and predicting phone usage behavior. Based on this concept of recency, in this paper we present an approach for mining recency-based personalized behavior, named "RecencyMiner" for short, that utilizes an individual's contextual smartphone data in order to build a context-aware personalized behavior prediction model. The effectiveness of RecencyMiner is examined on individual smartphone users' real-life contextual datasets. The experimental results show that our proposed recency-based approach predicts individuals' phone usage behavior better than existing baseline models by minimizing the error rate in various context-aware test cases.
Article
Algorithms for predicting recidivism are commonly used to assess a criminal defendant’s likelihood of committing a crime. These predictions are used in pretrial, parole, and sentencing decisions. Proponents of these systems argue that big data and advanced machine learning make these analyses more accurate and less biased than humans. We show, however, that the widely used commercial risk assessment software COMPAS is no more accurate or fair than predictions made by people with little or no criminal justice expertise. We further show that a simple linear predictor provided with only two features is nearly equivalent to COMPAS with its 137 features.
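The two features in the study's simple linear predictor were the defendant's age and number of prior convictions. A sketch with illustrative weights (the coefficients below are assumptions, not the fitted values from the paper):

```python
# Two-feature linear risk score of the kind the study found to rival
# COMPAS and its 137 features. Weights and threshold are illustrative.

def recidivism_score(age, priors, w_age=-0.05, w_priors=0.3, bias=0.0):
    """Linear score; higher values indicate higher predicted risk."""
    return bias + w_age * age + w_priors * priors

predicted_high_risk = recidivism_score(age=24, priors=4) > 0.0
print(predicted_high_risk)
```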
Chapter
A general theory is testable not directly but through consequences it implies when it is taken together with auxiliary hypotheses. The test can be weaker or stronger depending, in particular, on the extent to which the consequences tested are specifically entailed by the theory (as opposed to being mostly entailed by the auxiliary hypotheses and being equally compatible with other general theories). The earliest experimental work based on Relevance Theory (Jorgensen, Miller and Sperber, 1984; Happé 1993) tested and confirmed Sperber and Wilson’s (1981) echoic account of irony (and much experimental work done since on irony has broadly confirmed it and refined it further). While this account of irony is part and parcel of Relevance Theory, it is nevertheless compatible with different pragmatic approaches. The experimental confirmation of this account, therefore, provides only weak support for Relevance Theory as a whole. More recent experimental work has made explicit, tested and confirmed other and more specific and central consequences of Relevance Theory (e.g. Sperber, Cara and Girotto, 1995; Politzer, 1996; Gibbs and Moise, 1997; Hardman, 1998; Nicolle and Clark, 1999; Matsui, 2000, 2001; Girotto, Kemmelmeir, Sperber and Van der Henst, 2001; Noveck, 2001; Noveck, Bianco and Castry, 2001; Van der Henst, Sperber and Politzer, 2002, Van der Henst, Carles and Sperber, 2002, Noveck and Posada, 2003; Ryder and Leinonen, 2003).
Article
Recent progress in artificial intelligence (AI) has renewed interest in building systems that learn and think like people. Many advances have come from using deep neural networks trained end-to-end in tasks such as object recognition, video games, and board games, achieving performance that equals or even beats humans in some respects. Despite their biological inspiration and performance achievements, these systems differ from human intelligence in crucial ways. We review progress in cognitive science suggesting that truly human-like learning and thinking machines will have to reach beyond current engineering trends in both what they learn, and how they learn it. Specifically, we argue that these machines should (a) build causal models of the world that support explanation and understanding, rather than merely solving pattern recognition problems; (b) ground learning in intuitive theories of physics and psychology, to support and enrich the knowledge that is learned; and (c) harness compositionality and learning-to-learn to rapidly acquire and generalize knowledge to new tasks and situations. We suggest concrete challenges and promising routes towards these goals that can combine the strengths of recent neural network advances with more structured cognitive models.
Article
The study of scientific discovery (where do new ideas come from?) has long been denigrated by philosophers as irrelevant to analyzing the growth of scientific knowledge. In particular, little is known about how cognitive theories are discovered, and neither the classical accounts of discovery as either probabilistic induction (e.g., H. Reichenbach, 1938) or lucky guesses (e.g., K. Popper, 1959), nor the stock anecdotes about sudden "eureka" moments deepen insight into discovery. A heuristics approach is taken in this review, where heuristics are understood as strategies of discovery less general than a supposed unique logic of discovery but more general than lucky guesses. This article deals with how scientists' tools shape theories of mind, in particular with how methods of statistical inference have turned into metaphors of mind. The tools-to-theories heuristic explains the emergence of a broad range of cognitive theories, from the cognitive revolution of the 1960s up to the present, and it can be used to detect both limitations and new lines of development in current cognitive theories that investigate the mind as an "intuitive statistician."
Article
This article introduces this JBR Special Issue on simple versus complex methods in forecasting. Simplicity in forecasting requires that (1) method, (2) representation of cumulative knowledge, (3) relationships in models, and (4) relationships among models, forecasts, and decisions are all sufficiently uncomplicated as to be easily understood by decision-makers. Our review of studies comparing simple and complex methods - including those in this special issue - found 97 comparisons in 32 papers. None of the papers provide a balance of evidence that complexity improves forecast accuracy. Complexity increases forecast error by 27 percent on average in the 25 papers with quantitative comparisons. The finding is consistent with prior research to identify valid forecasting methods: all 22 previously identified evidence-based forecasting procedures are simple. Nevertheless, complexity remains popular among researchers, forecasters, and clients. Some evidence suggests that the popularity of complexity may be due to incentives: (1) researchers are rewarded for publishing in highly ranked journals, which favor complexity; (2) forecasters can use complex methods to provide forecasts that support decision-makers’ plans; and (3) forecasters’ clients may be reassured by incomprehensibility. Clients who prefer accuracy should accept forecasts only from simple evidence-based procedures. They can rate the simplicity of forecasters’ procedures using the questionnaire at simple-forecasting.com.
Article
A review of the literature indicates that linear models are frequently used in situations in which decisions are made on the basis of multiple codable inputs. These models are sometimes used (a) normatively to aid the decision maker, (b) as a contrast with the decision maker in the clinical vs statistical controversy, (c) to represent the decision maker "paramorphically" and (d) to "bootstrap" the decision maker by replacing him with his representation. Examination of the contexts in which linear models have been successfully employed indicates that the contexts have the following structural characteristics in common: each input variable has a conditionally monotone relationship with the output; there is error of measurement; and deviations from optimal weighting do not make much practical difference. These characteristics ensure the success of linear models, which are so appropriate in such contexts that random linear models (i.e., models whose weights are randomly chosen except for sign) may perform quite well. Four examples involving the prediction of such codable output variables as GPA and psychiatric diagnosis are analyzed in detail. In all four examples, random linear models yield predictions that are superior to those of human judges.
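The "random linear model" idea is simple enough to state in code: weights are drawn at random and constrained only to carry the correct sign for each predictor. A minimal sketch with hypothetical predictors and inputs:

```python
# Random linear model as described above: random weight magnitudes,
# prescribed signs. Signs and input values are hypothetical.

import random

def random_linear_model(signs, seed=None):
    """Return a weight vector with random magnitudes and prescribed signs."""
    rng = random.Random(seed)
    return [sign * rng.random() for sign in signs]

def predict(weights, x):
    return sum(w * xi for w, xi in zip(weights, x))

signs = [+1, +1, -1]  # known directions of three predictors
weights = random_linear_model(signs, seed=42)
print(predict(weights, [0.8, 0.2, 0.5]))
```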
Article
We deal with risk versus uncertainty, a distinction that is of fundamental importance for cognitive neuroscience yet largely neglected. In a world of risk ("small world"), all alternatives, consequences, and probabilities are known. In uncertain ("large") worlds, some of this information is unknown or unknowable. Most cognitive neuroscience studies exclusively examine the neural correlates of decisions under risk (e.g., lotteries), with the tacit implication that understanding these would lead to an understanding of decision making in general. First, we show that normative strategies for decisions under risk do not generalize to uncertain worlds, where simple heuristics are often the more accurate strategies. Second, we argue that the cognitive processes for making decisions in a world of risk are not the same as those for dealing with uncertainty. Because situations with known risks are the exception rather than the rule in human evolution, it is unlikely that our brains are adapted to them. We therefore suggest a paradigm shift toward studying decision processes in uncertain worlds and provide first examples.
Article
Google Flu Trends (GFT) uses anonymized, aggregated internet search activity to provide near-real time estimates of influenza activity. GFT estimates have shown a strong correlation with official influenza surveillance data. The 2009 influenza virus A (H1N1) pandemic [pH1N1] provided the first opportunity to evaluate GFT during a non-seasonal influenza outbreak. In September 2009, an updated United States GFT model was developed using data from the beginning of pH1N1. We evaluated the accuracy of each U.S. GFT model by comparing weekly estimates of ILI (influenza-like illness) activity with the U.S. Outpatient Influenza-like Illness Surveillance Network (ILINet). For each GFT model we calculated the correlation and RMSE (root mean square error) between model estimates and ILINet for four time periods: pre-H1N1, Summer H1N1, Winter H1N1, and H1N1 overall (Mar 2009-Dec 2009). We also compared the number of queries, query volume, and types of queries (e.g., influenza symptoms, influenza complications) in each model. Both models' estimates were highly correlated with ILINet pre-H1N1 and over the entire surveillance period, although the original model underestimated the magnitude of ILI activity during pH1N1. The updated model was more correlated with ILINet than the original model during Summer H1N1 (r = 0.95 and 0.29, respectively). The updated model included more search query terms than the original model, with more queries directly related to influenza infection, whereas the original model contained more queries related to influenza complications. Internet search behavior changed during pH1N1, particularly in the categories "influenza complications" and "term for influenza." The complications associated with pH1N1, the fact that pH1N1 began in the summer rather than winter, and changes in health-seeking behavior each may have played a part. Both GFT models performed well prior to and during pH1N1, although the updated model performed better during pH1N1, especially during the summer months.
Article
Scientific discovery has long been explained in terms of theory, data, and little else. We propose a new approach to scientific discovery in which tools play a central role by suggesting themselves as scientific theories, by way of what we call the tools-to-theories heuristic of scientific discovery. In this article, we extend our previous analysis of statistical tools that became theories of mind to the computer and its impact on psychological theorizing. We first show how a conceptual separation of intelligence and calculation in the early 19th century made mechanical computation, and later the electronic computer, conceivable. We next show how in this century, when computers finally became standard laboratory tools, the computer was proposed, and eventually adopted, as a model of mind. Thus, we travel the full circle from mind to computer and back.
Article
Seasonal influenza epidemics are a major public health concern, causing tens of millions of respiratory illnesses and 250,000 to 500,000 deaths worldwide each year. In addition to seasonal influenza, a new strain of influenza virus against which no previous immunity exists and that demonstrates human-to-human transmission could result in a pandemic with millions of fatalities. Early detection of disease activity, when followed by a rapid response, can reduce the impact of both seasonal and pandemic influenza. One way to improve early detection is to monitor health-seeking behaviour in the form of queries to online search engines, which are submitted by millions of users around the world each day. Here we present a method of analysing large numbers of Google search queries to track influenza-like illness in a population. Because the relative frequency of certain queries is highly correlated with the percentage of physician visits in which a patient presents with influenza-like symptoms, we can accurately estimate the current level of weekly influenza activity in each region of the United States, with a reporting lag of about one day. This approach may make it possible to use search queries to detect influenza epidemics in areas with a large population of web search users.
Article
Some theorists, ranging from W. James (1890) to contemporary psychologists, have argued that forgetting is the key to proper functioning of memory. The authors elaborate on the notion of beneficial forgetting by proposing that loss of information aids inference heuristics that exploit mnemonic information. To this end, the authors bring together 2 research programs that take an ecological approach to studying cognition. Specifically, they implement fast and frugal heuristics within the ACT-R cognitive architecture. Simulations of the recognition heuristic, which relies on systematic failures of recognition to infer which of 2 objects scores higher on a criterion value, demonstrate that forgetting can boost accuracy by increasing the chances that only 1 object is recognized. Simulations of the fluency heuristic, which arrives at the same inference on the basis of the speed with which objects are recognized, indicate that forgetting aids the discrimination between the objects' recognition speeds.
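A minimal sketch of the recognition heuristic as described: if exactly one of two objects is recognized, infer that it scores higher on the criterion; otherwise fall back (here: guess). The recognition set is hypothetical.

```python
# Recognition heuristic: systematic failures of recognition carry
# information. Forgetting helps by increasing the chance that exactly
# one object is recognized, so the heuristic can discriminate.

import random

def recognition_heuristic(a, b, recognized):
    """Return the object inferred to score higher on the criterion."""
    a_known, b_known = a in recognized, b in recognized
    if a_known and not b_known:
        return a
    if b_known and not a_known:
        return b
    return random.choice([a, b])  # both or neither recognized: guess

recognized_cities = {"Berlin", "Munich"}  # hypothetical recognition memory
print(recognition_heuristic("Berlin", "Bielefeld", recognized_cities))
```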
Article
The assumption that people possess a repertoire of strategies to solve the inference problems they face has been raised repeatedly. However, a computational model specifying how people select strategies from their repertoire is still lacking. The proposed strategy selection learning (SSL) theory predicts a strategy selection process on the basis of reinforcement learning. The theory assumes that individuals develop subjective expectations for the strategies they have and select strategies proportional to their expectations, which are then updated on the basis of subsequent experience. The learning assumption was supported in 4 experimental studies. Participants substantially improved their inferences through feedback. In all 4 studies, the best-performing strategy from the participants' repertoires most accurately predicted the inferences after sufficient learning opportunities. When testing SSL against 3 models representing extensions of SSL and against an exemplar model assuming a memory-based inference process, the authors found that SSL predicted the inferences most accurately.
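The selection mechanism SSL proposes can be sketched directly: strategies are chosen with probability proportional to their expectancies, which are then reinforced by feedback. The payoffs and update scheme below are an illustrative simplification of the theory, not the authors' parameterization.

```python
# Simplified SSL-style strategy selection: proportional choice plus
# reinforcement from feedback. Payoffs are hypothetical.

import random

def choose_strategy(expectancies):
    """Sample a strategy index with probability proportional to expectancy."""
    r = random.uniform(0, sum(expectancies))
    cumulative = 0.0
    for i, e in enumerate(expectancies):
        cumulative += e
        if r <= cumulative:
            return i
    return len(expectancies) - 1

expectancies = [1.0, 1.0]            # two strategies, initially equal
for _ in range(100):
    i = choose_strategy(expectancies)
    payoff = 1.0 if i == 0 else 0.2  # strategy 0 performs better (assumed)
    expectancies[i] += payoff        # reinforce the chosen strategy
print(expectancies)                  # strategy 0 should come to dominate
```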
Article
Heuristics are fast, frugal, and accurate strategies that enable rather than limit decision making under uncertainty. Uncertainty, as opposed to calculable risk, is characteristic of most organizational contexts. We review existing research and offer a descriptive and prescriptive theoretical framework to integrate the current patchwork of heuristics scattered across various areas of organizational studies. Research on the adaptive toolbox is descriptive, identifying the repertoire of heuristics on which individuals, teams, and organizations rely. Research on ecological rationality is prescriptive, specifying the conditions under which a given heuristic performs well, that is, when it is smart. Our review finds a relatively small but rapidly developing field. We identify promising future research directions, including research on how culture shapes the use of heuristics and how heuristics shape organizational culture. We also outline an educational program for managers and leaders that follows the general approach of “Don't avoid heuristics—learn how to use them.”
Article
We structure this response to the commentaries to our article “Transparent modeling of influenza incidence: Big data or a single data point from psychological theory?” around the concept of psychological AI, Herbert Simon’s classic idea of using insights from how people make decisions to make computers smart. The recency heuristic in Katsikopoulos, Şimşek, Buckmann, and Gigerenzer (2021) is one example of psychological AI. Here we develop another: the trend-recency heuristic. While the recency heuristic predicts that the next observation will equal the most recent observation, the trend-recency heuristic predicts that the next trend will equal the most recent trend. We compare the performance of these two recency heuristics with forecasting models that use trend damping for predicting flu incidence. Psychological AI prioritizes ecological rationality and transparency, and we provide a roadmap of how to study such issues. We also discuss how this transparency differs from explainable AI and how ecological rationality focuses on the comparative empirical study and theoretical analysis of different types of models.
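The trend-recency heuristic admits a one-line implementation: the forecast is the most recent value plus the most recent change. A sketch with hypothetical weekly data:

```python
# Trend-recency heuristic: the next trend is predicted to equal the most
# recent trend, so the forecast extrapolates the last observed change.

def trend_recency_forecast(history):
    """One-step-ahead forecast: last value plus the most recent trend."""
    if len(history) < 2:
        raise ValueError("need at least two observations to extract a trend")
    return history[-1] + (history[-1] - history[-2])

weekly_ili = [1.8, 2.1, 2.6, 3.4]          # hypothetical values
print(trend_recency_forecast(weekly_ili))  # -> roughly 4.2 (3.4 + 0.8)
```

Compare this with the plain recency heuristic sketched after the abstract above, which would forecast 3.4 for the same series.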
Article
Those designing autonomous systems that interact with humans will invariably face questions about how humans think and make decisions. Fortunately, computational cognitive science offers insight into human decision-making using tools that will be familiar to those with backgrounds in optimization and control (e.g., probability theory, statistical machine learning, and reinforcement learning). Here, we review some of this work, focusing on how cognitive science can provide forward models of human decision-making and inverse models of how humans think about others’ decision-making. We highlight relevant recent developments, including approaches that synthesize black box and theory-driven modeling, accounts that recast heuristics and biases as forms of bounded optimality, and models that characterize human theory of mind and communication in decision-theoretic terms. In doing so, we aim to provide readers with a glimpse of the range of frameworks, methodologies, and actionable insights that lie at the intersection of cognitive science and control research.
Article
Simple, transparent rules are often frowned upon, while complex, black-box models are seen as holding greater promise. Yet in quickly changing situations, simple rules can protect against overfitting and adapt quickly. We show that the surprisingly simple recency heuristic forecasts more accurately than Google Flu Trends, which used big-data analytics and a black-box algorithm. This heuristic predicts that “this week’s proportion of flu-related doctor visits equals the proportion from the most recent week.” It is based on psychological theory about how people deal with rapidly changing situations. Other theory-inspired heuristics have outperformed big-data models in predicting outcomes such as U.S. presidential elections, or uncertain events such as consumer purchases, patient hospitalizations, and terrorist attacks. Heuristics are transparent, clearly communicating the underlying rationale for their predictions. We advocate taking into account psychological principles that have evolved over millennia and using these as a benchmark when testing big-data models.
Article
Convolutional neural networks (CNNs) were inspired by early findings in the study of biological vision. They have since become successful tools in computer vision and state-of-the-art models of both neural activity and behavior on visual tasks. This review highlights what, in the context of CNNs, it means to be a good model in computational neuroscience and the various ways models can provide insight. Specifically, it covers the origins of CNNs and the methods by which we validate them as models of biological vision. It then goes on to elaborate on what we can learn about biological vision by understanding and experimenting on CNNs and discusses emerging opportunities for the use of CNNs in vision research beyond basic object recognition.
Article
Black box machine learning models are currently being used for high-stakes decision making throughout society, causing problems in healthcare, criminal justice and other domains. Some people hope that creating methods for explaining these black box models will alleviate some of the problems, but trying to explain black box models, rather than creating models that are interpretable in the first place, is likely to perpetuate bad practice and can potentially cause great harm to society. The way forward is to design models that are inherently interpretable. This Perspective clarifies the chasm between explaining black boxes and using inherently interpretable models, outlines several key reasons why explainable black boxes should be avoided in high-stakes decisions, identifies challenges to interpretable machine learning, and provides several example applications where interpretable models could potentially replace black box models in criminal justice, healthcare and computer vision.
Article
A curious fact about the communication of numerical information is that speakers often choose to use approximate or rounded expressions, even when more precise information is available (for instance, reporting the time as 'three thirty' when one's watch reads 3:27). It has been proposed that this tendency towards rounding is driven by a desire to reduce hearers' processing costs, a specific claim being that rounded values produce the same cognitive effect at less cognitive effort than non-round values (Van der Henst, Carles and Sperber, 2002). To date, however, the posited processing advantage for roundness has not been experimentally substantiated. Focusing on the domain of temporal expressions, we report on two experiments that demonstrate that rounded clock times are easier to remember and manipulate than their non-round counterparts, a finding that provides evidence for the influence of processing considerations on numerical expression choice. We further find a role for domain-specific granularity of measurement.
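The rounding manipulation itself is straightforward to state in code: a clock time such as 3:27 is mapped onto the nearest coarse-grained value. The five-minute granularity below is an assumption for illustration.

```python
# Rounding a clock time to a coarser granularity, the manipulation
# whose memory advantage the experiments above demonstrate.

def round_minutes(hour, minute, granularity=5):
    """Round a clock time to the nearest multiple of `granularity` minutes."""
    total = hour * 60 + minute
    rounded = granularity * round(total / granularity)
    return divmod(rounded % (24 * 60), 60)  # wrap past midnight

print(round_minutes(3, 27))  # -> (3, 25), i.e. report 3:27 as 'three twenty-five'
```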
Book
How do people make decisions when time is limited, information unreliable, and the future uncertain? Based on the work of Herbert A. Simon and with the help of colleagues around the world, the Adaptive Behavior and Cognition (ABC) Group at the Max Planck Institute for Human Development in Berlin has developed a research program on simple heuristics, also known as fast and frugal heuristics. These heuristics are efficient cognitive processes that ignore information and exploit the structure of the environment. In contrast to the widely held view that less complex processing necessarily reduces accuracy, the analytical and empirical analyses of fast and frugal heuristics demonstrate that less information and computation can in fact improve accuracy. These results represent an existence proof that cognitive processes capable of successful performance in the real world do not need to satisfy the classical norms of rationality. Thus, simple heuristics embody ecological rather than logical rationality. By providing a fresh look at how the mind works as well as the nature of rational behavior, the simple heuristics approach has stimulated a large body of research, led to fascinating applications in diverse fields from law to medicine to business to sports, and instigated controversial debates in psychology, philosophy, and economics. This book contains key chapters that have been previously published in journals across many disciplines. These chapters present theory, real-world applications, and a sample of the large number of existing experimental studies that provide evidence for people's adaptive use of simple heuristics.
Article
Large errors in flu prediction were largely avoidable, which offers lessons for the use of big data.
Article
The general problem of forming composite variables from components is prevalent in many types of research. A major aspect of this problem is the weighting of components. Assuming that composites are a linear function of their components, composites formed by using standard linear regression are compared to those formed by simple unit weighting schemes, i.e., where predictor variables are weighted by 1.0. The degree of similarity between the two composites, expressed as the minimum possible correlation between them, is derived. This minimum correlation is found to be an increasing function of the intercorrelation of the components and a decreasing function of the number of predictors. Moreover, the minimum is fairly high for most applied situations. The predictive ability of the two methods is compared. For predictive purposes, unit weighting is a viable alternative to standard regression methods because unit weights: (1) are not estimated from the data and therefore do not “consume” degrees of freedom; (2) are “estimated” without error (i.e., they have no standard errors); (3) cannot reverse the “true” relative weights of the variables. Predictive ability of the two methods is examined as a function of sample size and number of predictors. It is shown that unit weighting will be superior to regression in certain situations and not greatly inferior in others. Various implications for using unit weighting are discussed and applications to several decision making situations are illustrated.
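A short numerical sketch of the comparison analyzed above, contrasting unit weights on standardized predictors with regression weights estimated from a training sample. The synthetic data, coefficients, and split are assumptions for illustration, not the article's analysis.

```python
# Unit weighting vs. estimated regression weights on synthetic data:
# both composites are scored by their holdout correlation with the criterion.

import numpy as np

rng = np.random.default_rng(0)
n, k = 40, 3
X = rng.normal(size=(n, k))
y = X @ np.array([0.6, 0.5, 0.4]) + rng.normal(scale=1.0, size=n)

# Regression weights estimated from a small training sample
X_train, y_train = X[:20], y[:20]
beta, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)

# Unit weights: every (standardized) predictor counts equally
z = (X - X.mean(axis=0)) / X.std(axis=0)
unit_pred = z.sum(axis=1)
reg_pred = X @ beta

test = slice(20, None)
print(np.corrcoef(unit_pred[test], y[test])[0, 1])  # unit-weight validity
print(np.corrcoef(reg_pred[test], y[test])[0, 1])   # regression validity
```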
Article
Address at the banquet of the Twelfth National Meeting of the Operations Research Society of America, Pittsburgh, Pennsylvania, November 14, 1957. Mr. Simon presented the paper; its content is a joint product of the authors. In this, they rely on the precedent of Genesis 27:22, “The voice is Jacob's voice, but the hands are the hands of Esau.”
Article
US outbreak foxes a leading web-based method for tracking seasonal flu.
Article
Experimental decision-making research often uses a task in which participants are presented with alternatives from which they must choose. Although tasks of this type may be useful in determining measures (e.g., preference) related to explicitly stated alternatives, they neglect an important aspect of many real-world decision-making environments—namely, the option-generation process. The goal of the present research is to extend previous literature that fills this void by presenting a model that attempts to describe the link between the use of different strategies and the subsequent option-generation process, as well as the resulting choice characteristics. Specifically, we examine the relationship between strategy use, number and order of generated options, choice quality, and dynamic inconsistency. “Take The First” is presented as a heuristic that operates in ill-defined tasks, based on our model assumptions. An experiment involving a realistic (sports) situation was conducted on suitable participants (athletes) to test the predictions of the model. Initial results support the model’s key predictions: strategies producing fewer generated options result in better and more consistent decisions.
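"Take The First" can be sketched as a rule that accepts the first option generated and ignores later ones. The option generator below is a hypothetical stand-in for an athlete's generation order, not material from the study.

```python
# Take The First: options come to mind in order of learned validity,
# and the first option generated is taken.

def take_the_first(generate_options):
    """Return the first option that comes to mind, ignoring later ones."""
    for option in generate_options():
        return option
    raise ValueError("no options were generated")

def handball_options():
    # Hypothetical generation order for an attacking player
    yield "pass to the wing"
    yield "shoot at goal"
    yield "dribble past the defender"

print(take_the_first(handball_options))  # -> "pass to the wing"
```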
Article
The human visual system can rapidly and accurately derive the three-dimensional orientation of surfaces by using variations in image intensity alone. This ability to perceive shape from shading is one of the most important yet poorly understood aspects of human vision. Here we present several findings which may help reveal computational mechanisms underlying this ability. First, we find that perception of shape from shading is a global operation which assumes that there is only one light source illuminating the entire visual image. This implies that if two identical objects are viewed simultaneously and illuminated from different angles, then we would be able to perceive three-dimensional shape accurately in only one of them at a time. Second, three-dimensional shapes that are defined exclusively by shading can provide tokens for the perception of apparent motion, suggesting that the motion mechanism is remarkably versatile in the kinds of inputs it can use. Lastly, the occluding edges which delineate an object from its background can also powerfully influence the perception of three-dimensional shape from shading.
Google disease trends: An update
P. Copeland, R. Romano, T. Zhang, G. Hecht, D. Zigmond, C. Stefansen