Article

Empirical evidence for resource-rational anchoring and adjustment

Authors: Lieder, Griffiths, Huys, and Goodman

Abstract

People's estimates of numerical quantities are systematically biased towards their initial guess. This anchoring bias is usually interpreted as a sign of human irrationality, but it has recently been suggested that the anchoring bias instead results from people's rational use of their finite time and limited cognitive resources. If this were true, then adjustment should decrease with the relative cost of time. To test this hypothesis, we designed a new numerical estimation paradigm that controls people's knowledge and varies the cost of time and error independently, while allowing people to invest as much or as little time and effort into refining their estimate as they wish. Two experiments confirmed the prediction that adjustment decreases with time cost but increases with error cost, regardless of whether the anchor was self-generated or provided. These results support the hypothesis that people rationally adapt their number of adjustments to achieve a near-optimal speed-accuracy tradeoff. This suggests that the anchoring bias might be a signature of the rational use of finite time and limited cognitive resources rather than a sign of human irrationality.
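The computational model behind this prediction is developed in the authors' companion modeling work; the sketch below is only a hypothetical illustration of the speed-accuracy tradeoff the abstract describes. It assumes, purely for illustration, that adjustment is a sequence of small noisy steps that drift from the anchor toward the true value, and that the number of steps is chosen to minimize expected error cost plus time cost; all function names and parameter values are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

def expected_error(n_adjustments, anchor, truth, step_sd=1.0, n_sim=2000):
    """Monte Carlo estimate of the expected absolute error after a given number
    of noisy adjustments that drift from the anchor toward the true value."""
    estimates = np.full(n_sim, float(anchor))
    for _ in range(n_adjustments):
        proposals = estimates + rng.normal(0.0, step_sd, n_sim)
        # an adjustment is kept only if it moves the estimate closer to the truth
        closer = np.abs(proposals - truth) < np.abs(estimates - truth)
        estimates = np.where(closer, proposals, estimates)
    return float(np.mean(np.abs(estimates - truth)))

def optimal_adjustments(anchor, truth, time_cost, error_cost, max_steps=80):
    """Number of adjustments that minimizes expected error cost plus time cost."""
    total = [error_cost * expected_error(t, anchor, truth) + time_cost * t
             for t in range(max_steps + 1)]
    return int(np.argmin(total))

# Fewer adjustments when time is expensive, more when errors are expensive.
print(optimal_adjustments(anchor=10, truth=25, time_cost=0.05, error_cost=1.0))
print(optimal_adjustments(anchor=10, truth=25, time_cost=1.00, error_cost=1.0))
print(optimal_adjustments(anchor=10, truth=25, time_cost=0.05, error_cost=5.0))
```

Under these toy assumptions, the chosen number of adjustments drops when time becomes more expensive and rises when errors become more expensive, which is the qualitative pattern the two experiments test.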


... First, we used this model to simulate people's judgments in previously conducted anchoring experiments and found that it captured a wide range of anchoring biases and how they are affected by numerous factors, including time pressure, the extremity of the anchor, uncertainty, knowledge, and people's motivation to be accurate. Second, we designed two experiments specifically to test the model's prediction that the anchoring bias should increase with time pressure but decrease with error cost (Lieder et al., 2018c). The first experiment confirmed this prediction in a task where people generated their own anchors (see Figure 4.7), and the second experiment confirmed it in a task where people's anchors were provided by leading questions. ...
... Second, the optimal incentives may change what the resource-rational heuristic is in the first place because they change the environment from D to D ⊕ i (Equation 8.4). If so, people will likely adapt their strategy accordingly (Krueger et al., 2024; Lieder et al., 2018c; Lieder and Griffiths, 2017). For instance, when the stakes are higher, people think harder (Lieder et al., 2018c) and may consider some information they would otherwise ignore (Krueger et al., 2024). ...
... If so, people will likely adapt their strategy accordingly (Krueger et al., 2024; Lieder et al., 2018c; Lieder and Griffiths, 2017). For instance, when the stakes are higher, people think harder (Lieder et al., 2018c) and may consider some information they would otherwise ignore (Krueger et al., 2024). Critically, the predictions of this resource-rational theory of the effects of incentives may be substantially different from, and more accurate than, the predictions of the homo economicus model. ...
Preprint
Full-text available
A new approach to understanding irrational behavior that provides a framework for deriving new models of human cognition. What does it mean to act rationally? Mathematicians, economists, and statisticians have argued that a rational actor chooses actions that maximize their expected utility. And yet people routinely act in ways that violate this prescription. Our limited time and computational resources mean that it is often unrealistic to consider all options in order to choose the one that has the greatest utility. This book suggests a different approach to understanding irrational behavior: resource-rational analysis. By reframing questions of rational action in terms of how we should make the best use of our limited resources, the book offers a new take on fundamental questions at the heart of cognitive psychology, behavioral economics, and the design of artificial intelligence systems. The book presents a formal framework for applying resource-rational analysis to understand and improve human behavior, a set of tools developed by the authors to make this easier, and examples of how they have used this approach to revisit classic questions about human cognition, pose new ones, and enhance human rationality. The book will be a valuable resource for psychologists, economists, and philosophers as well as neuroscientists studying human brains and minds and computer scientists working to reproduce such systems in machines.
... For instance, predictions tend to exhibit optimism, while probability judgments are often neutral or vary in direction. Critically, past work has generally not considered the time course of these biases: While these are often characterised as static effects, recent work on sequential sampling models has proposed potential temporal dynamics in mental processes, meaning early responses may systematically differ from later ones (Lieder, Griffiths, M. Huys, & Goodman, 2018; Zhu et al., 2024). This particularly applies to the first value considered: research on anchoring effects (Tversky & Kahneman, 1974) has suggested the starting point of the sampling process may be influenced by especially salient external values (Lieder, Griffiths, M. Huys, & Goodman, 2018; Spicer et al., 2022) and so may be particularly prone to utility biases. ...
... Critically, past work has generally not considered the time course of these biases: While these are often characterised as static effects, recent work on sequential sampling models has proposed potential temporal dynamics in mental processes, meaning early responses may systematically differ from later ones (Lieder, Griffiths, M. Huys, & Goodman, 2018; Zhu et al., 2024). This particularly applies to the first value considered: research on anchoring effects (Tversky & Kahneman, 1974) has suggested the starting point of the sampling process may be influenced by especially salient external values (Lieder, Griffiths, M. Huys, & Goodman, 2018; Spicer et al., 2022) and so may be particularly prone to utility biases. Furthermore, individual differences have received limited attention. ...
... This classification occurred because the distribution failed to account for the asymmetry in overestimation between the gain and loss domains, highlighting the requirement of the quantitative distributions for other hypotheses. The analysis of starting point biases and their interaction with domains revealed variability across experiments. For uniformly distributed risky events (Experiments 1, 3, and 4), we found decisive evidence of a starting point effect on the values, aligning with empirical findings on the anchoring effect (Lieder, Griffiths, M. Huys, & Goodman, 2018). Regarding the interaction with domains, Experiment 3 provided strong evidence for an interaction, while Experiments 1 and 4 showed anecdotal or substantial evidence against such an effect. However, further analysis of Experiment 3, focusing exclusively on participants who were uninfluenced by the domains, revealed only anecdotal evidence supporting a utility-dependent starting point bias (the BF of domain × starting points is 1.64 in ...
Preprint
Full-text available
Does the utility of an outcome influence people’s assessment of risk and uncertainty? Growing evidence suggests that people often rely on mental simulations to evaluate probability and risky events. However, prior experimental findings offer conflicting predictions about how utility biases this mental sampling process. Across four experiments (total N=206, with Experiment 4 pre-registered), we investigated the influence of utility using a random generation paradigm. These responses were then compared to probability judgments and predictions. While we identified individual differences, the majority of participants exhibited neutrality, with no systematic impact of utility on their sampling distributions. Nevertheless, biases emerged under specific conditions, including a preference for smaller or more probable outcomes as the starting point of simulations and optimism in single-response predictions. Additionally, we found evidence suggesting that probability judgments, predictions, and random generation tasks may rely on a shared underlying mental process. Our findings suggest that models of judgment and decision-making should account for individual differences in utility influences, particularly distinguishing between unbiased sampling and optimistic sampling—the selective over-representation of high-utility outcomes.
... Some more recent anchoring models rely on Bayesian updating processes, including the anchor integration model (Turner & Schley, 2016) and the resource-rational anchoring and adjustment model (Lieder et al., 2018a, 2018b). These models are particularly promising because they make quantitative predictions about anchoring effects, extending the qualitative predictions of prior theories. ...
... The more recent models of anchoring based on Bayesian inference were also developed to account for results from single-anchor paradigms (Lieder et al., 2018a, 2018b; Turner & Schley, 2016). Traditionally, Bayesian updating predicts no influence of information order (Slovic & Lichtenstein, 1971). ...
... Some recent anchoring models rely on Bayesian updating processes, including the anchor integration model (Turner & Schley, 2016) and the resource-rational anchoring and adjustment model (Lieder et al., 2018a, 2018b). Like previous accounts of anchoring effects described above, these recent models were developed to account for results from single-anchor paradigms. ...
... 312-313) but smaller anchoring effects according to the Selective Accessibility Model (Englich & Soder, 2009). Previous findings either supported the Insufficient Adjustment Model (Lieder et al., 2018b; Yik et al., 2019) or spoke against both accounts (Chaxel, 2014). Second, anchoring extremeness is related to anchoring effects in a positive way, according to the Insufficient Adjustment Model, but based on the Selective Accessibility Model, anchors that are too high or too low should lead to smaller anchoring effects (i.e., an inverted u-shaped relationship between anchor extremeness and anchoring effect size). ...
... Although the use of anchors as estimates can be deemed rational in settings with limited resources (e.g., Lieder et al., 2018a; Meub et al., 2013), overcoming the influence of anchors leads to more accurate estimates in most paradigms (e.g., Lieder et al., 2018b; Mussweiler et al., 2000). In order to motivate participants not to let themselves be biased by anchors in their estimates, researchers have rewarded participants for giving accurate estimates. ...
... 312-313), but according to the Selective Accessibility Model, more time allows for more thorough processing of the anchor, which should increase the size of anchoring effects (Englich & Soder, 2009). Past findings were either in favor of insufficient adjustment (Lieder et al., 2018b; Yik et al., 2019), indicating that anchoring effects are stronger under time pressure, or they were null findings (Chaxel, 2014). We could not consider this moderator in our meta-analysis because there were too few instances of anchoring under time pressure. ...
Preprint
Anchoring effects are among the largest and easiest to replicate in social psychology. However, the ship on which anchoring research is floating has become brittle with respect to three domains: (a) the relationship between paradigm features (e.g., choice of anchor or type of scale) and effect sizes is mostly unknown, (b) there are numerous contradictory findings on moderators (e.g., role of incentives for accurate estimates), and (c) it is unclear which of the many theories on anchoring effects is best supported by the evidence overall. Method: To increase clarity and transparency in the field of anchoring effects research, we meta-analyzed the most comprehensive collection of openly available anchoring effects data in existence today. Results: We found large but also heterogeneous anchoring effects, g = 0.689, 95% CI [0.589, 0.789], σ² = 0.21, Ntotal = 17,708, k = 393. (a) Anchors communicated as randomly determined did not have effects, and some effects reversed when visual response scales were used. (b) For many previously contradictory results, we found null effects (e.g., incentives did not affect the magnitude of anchoring effects). (c) Overall, theories of anchoring that stress rationality and nonlinearity were best supported by the results. Discussion: By applying an open and robust empirical procedure, we provide anchoring researchers with a new ship. We recommend that future research increase trust in findings through preregistration, reduce heterogeneity between anchoring effects through standardization, and better explain the impact of moderators. Data and analyses are available online (https://osf.io/4t5mu/).
... Our efforts are simultaneously guided by two well-supported observations about human judgment and decision-making under risk: (a) Mounting evidence suggests that people often use very few samples in probabilistic judgments and reasoning (e.g., Battaglia et al., 2013; Bonawitz et al., 2014; Gershman, Horvitz, & Tenenbaum, 2015; Gershman, Vul, & Tenenbaum, 2012; Griffiths et al., 2012; Hertwig & Pleskac, 2010; Lake et al., 2017; Lieder, Griffiths, Huys, & Goodman, 2018; Vul et al., 2014), and (b) people overestimate the probability of extreme events in their judgments (e.g., Barberis, 2013; Burns, Chiu, & Wu, 2010; Tversky & Kahneman, 1973; Ungemach, Chater, & Stewart, 2009). Unlike the model proposed here, previous explanations of the St. Petersburg paradox fail to respect at least one of these observations (see Section 2). ...
... Recent work has provided mounting evidence suggesting that people often use very few samples in probabilistic judgments and reasoning (e.g., Battaglia et al., 2013; Bonawitz et al., 2014; Gershman, Horvitz, & Tenenbaum, 2015; Gershman, Vul, & Tenenbaum, 2012; Griffiths et al., 2012; Hertwig & Pleskac, 2010; Lake et al., 2017; Lieder, Griffiths, Huys, & Goodman, 2018; Vul et al., 2014). Consistent with this finding, in the present study we assume that bidders draw only one sample (s = 1; see Eqs.). Sampling models have also been used to account for decision-making under uncertainty (Gershman et al., 2012; Moreno-Bote et al., 2011), developmental changes in cognition (Bonawitz, Denison, Griffiths, & Gopnik, 2014), category learning (Sanborn et al., 2010), example generation (Nobandegani & Shultz, 2017), and many cognitive biases (Dasgupta et al., 2016). ...
... Crucially, our explanation retains the well-supported assumption that people overestimate the probability of extreme events in their judgment and decision-making (Lieder et al., 2018b; Tversky & Kahneman, 1973), and it is in accordance with mounting evidence suggesting that people use only a few samples in probabilistic judgments and reasoning (e.g., Battaglia et al., 2013; Bonawitz et al., 2014; Gershman, Horvitz, & Tenenbaum, 2015; Gershman, Vul, & Tenenbaum, 2012; Griffiths et al., 2012; Hertwig & Pleskac, 2010; Lake et al., 2017; Lieder, Griffiths, Huys, & Goodman, 2018; Vul et al., 2014). There have been several recent studies (see , for a review) attempting to show that many well-known (purportedly irrational) behavioral effects and cognitive biases can be understood as optimal behavior subject to computational and cognitive limitations (rational minimalist program, Nobandegani, 2017; Griffiths, Lieder, & Goodman, 2015). ...
Article
The St. Petersburg paradox is a centuries‐old philosophical puzzle concerning a lottery with infinite expected payoff for which people are only willing to pay a small amount to play. Despite many attempts and several proposals, no generally accepted resolution is yet at hand. In this work, we present the first resource‐rational, process‐level explanation of this paradox, demonstrating that it can be accounted for by a variant of normative expected utility valuation which acknowledges cognitive limitations. Specifically, we show that Nobandegani et al.'s (2018) metacognitively rational model, sample‐based expected utility (SbEU), can account for major experimental findings on this paradox. Crucially, our resolution is consistent with two empirically well‐supported assumptions: (a) People use only a few samples in probabilistic judgments and decision‐making, and (b) people tend to overestimate the probability of extreme events in their judgment. Our work seeks to understand the St. Petersburg gamble as a particularly risky gamble whose process‐level explanation is consistent with a broader process‐level model of human decision‐making under risk.
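The sample-based expected utility model itself is specified in Nobandegani et al.'s work; the toy simulation below is not that model, but it makes assumption (a) concrete: if the gamble is valued by averaging only a handful of sampled payoffs, the valuation stays small even though the expectation is infinite. SbEU additionally biases its sampling toward extreme outcomes, which this sketch omits; all names and parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def st_petersburg_payoffs(n):
    """Sample payoffs 2^k, where k is the number of flips until the first heads."""
    k = rng.geometric(0.5, size=n)
    return 2.0 ** k

def few_sample_valuation(n_samples, n_agents=20000):
    """Each simulated agent values the gamble as the average of a handful of
    sampled payoffs; the reported valuation is the median across agents."""
    payoffs = st_petersburg_payoffs(n_samples * n_agents).reshape(n_agents, n_samples)
    return float(np.median(payoffs.mean(axis=1)))

print(few_sample_valuation(1))    # a few dollars, despite the infinite expectation
print(few_sample_valuation(10))   # grows only slowly as more samples are used
```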
... This raises the question why this system exists at all. Recent theoretical work provided a normative justification for some of the heuristics of System 1 by showing that they are qualitatively consistent with the rational use of limited cognitive resources (Griffiths et al., 2015; Lieder, Griffiths, & Hsu, 2018; Lieder, Griffiths, Huys, & Goodman, 2018a, 2018b), especially when the stakes are low and time is scarce and precious. Thus, System 1 and System 2 appear to be rational for different kinds of situations. ...
... The resulting models have shed new light on the debate about human rationality (Lieder, Griffiths, Huys, & Goodman, 2018a, 2018b; Lieder, Krueger, & Griffiths, 2017; Lieder, Griffiths, & Hsu, 2018; Griffiths et al., 2015). While this approach has so far focused on one individual strategy at a time, the research presented here extends it to deriving optimal cognitive architectures comprising multiple systems or strategies for a wider range of problems. ...
Preprint
Full-text available
Highly influential "dual-process" accounts of human cognition postulate the coexistence of a slow accurate system with a fast error-prone system. But why would there be just two systems rather than, say, one or 93? Here, we argue that a dual-process architecture might be neither arbitrary nor irrational, but might instead reflect a rational tradeoff between the cognitive flexibility afforded by multiple systems and the time and effort required to choose between them. We investigate what the optimal set and number of cognitive systems would be depending on the structure of the environment. We find that the optimal number of systems depends on the variability of the environment and the difficulty of deciding when which system should be used. Furthermore, when having two systems is optimal, then the first system is fast but error-prone and the second system is slow but accurate. Our findings thereby provide a rational reinterpretation of dual-process theories.
... Recent work illustrates that this approach can be used to discover and make sense of people's heuristics for judgment (Lieder, Griffiths, Huys, & Goodman, 2018a), decision-making (Lieder, Griffiths, Huys, & Goodman, 2018a; Lieder, Griffiths, & Hsu, 2018), goal pursuit (Prystawski, Mohnert, Tosic, & Lieder, 2021), and memory and cognitive control (Howes et al., 2016). The resulting models have shed new light on the debate about human rationality (Lieder, Griffiths, Huys, & Goodman, 2018a, 2018b; Lieder, Krueger, & Griffiths, 2017; Lieder, Griffiths, & Hsu, 2018; Griffiths et al., 2015). ...
Article
Full-text available
Highly influential “dual-process” accounts of human cognition postulate the coexistence of a slow accurate system with a fast error-prone system. But why would there be just two systems rather than, say, one or 93? Here, we argue that a dual-process architecture might reflect a rational tradeoff between the cognitive flexibility afforded by multiple systems and the time and effort required to choose between them. We investigate what the optimal set and number of cognitive systems would be depending on the structure of the environment. We find that the optimal number of systems depends on the variability of the environment and the difficulty of deciding when which system should be used. Furthermore, we find that there is a plausible range of conditions under which it is optimal to be equipped with a fast system that performs no deliberation (“System 1”) and a slow system that achieves a higher expected accuracy through deliberation (“System 2”). Our findings thereby suggest a rational reinterpretation of dual-process theories.
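The optimization reported in the article is over cognitive architectures; the following toy calculation is only a hypothetical illustration of the underlying cost-benefit logic, with an invented speed-accuracy curve, an invented stakes distribution, and a per-problem selection cost that grows with the number of systems.

```python
import itertools
import numpy as np

def accuracy(t):
    """Assumed speed-accuracy curve: more deliberation time yields higher accuracy."""
    return 1.0 - np.exp(-t)

def architecture_value(system_times, stakes, time_cost=0.2, selection_cost=0.05):
    """Average payoff of a set of systems: each problem is handled by whichever
    system gives the best stakes-weighted accuracy minus time cost, and having
    more systems to choose between adds a fixed selection cost per problem."""
    overhead = selection_cost * (len(system_times) - 1)
    values = [max(s * accuracy(t) - time_cost * t for t in system_times) - overhead
              for s in stakes]
    return float(np.mean(values))

# An environment that is mostly low stakes with occasional high-stakes problems.
stakes = np.concatenate([np.full(80, 0.5), np.full(20, 10.0)])
candidate_times = [0.0, 0.5, 1.0, 2.0, 4.0]
for k in (1, 2, 3):
    best = max(itertools.combinations(candidate_times, k),
               key=lambda arch: architecture_value(arch, stakes))
    print(k, best, round(architecture_value(best, stakes), 3))
```

In this toy environment, a pair consisting of a faster and a slower system outperforms any single system, while adding a third system is not worth the extra selection cost.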
... Instead, resource rationality is the optimal algorithm under this constraint (Step 3), which then yields testable predictions (Step 4). In this way, resource-rational analysis reinterprets cognitive biases as an optimal (rational) tradeoff between external task demands and internal cognitive constraints (e.g., cost of error in judgment vs. time cost to reduce this error) [18]. This rational interpretation is also consistent with Gigerenzer's criticism of framing the use of heuristics as irrational [8]. ...
... This occurs as participants have zero adjustments and favor their anchor (provided or self-generated) when time costs are critical. To test this model, Lieder et al. [18] developed an empirical experiment on MTurk for estimating bus arrival times under four different scenarios. They find strong evidence for resource-rational adjustment, as the degree of anchoring bias varied with different time and error costs. ...
Preprint
Cognitive biases are systematic errors in judgment. Researchers in data visualization have explored whether cognitive biases transfer to decision-making tasks with interactive data visualizations. At the same time, cognitive scientists have reinterpreted cognitive biases as the product of resource-rational strategies under finite time and computational costs. In this paper, we argue for the integration of resource-rational analysis through constrained Bayesian cognitive modeling to understand cognitive biases in data visualizations. The benefit would be a more realistic "bounded rationality" representation of data visualization users, and the approach provides a research roadmap for studying cognitive biases in data visualizations through a feedback loop between future experiments and theory.
... First, Lieder et al. (2018b) applied this model to simulate people's judgments in previously conducted anchoring experiments and found that it captured a wide range of empirical phenomena, including insufficient adjustment from anchors, an increase in anchoring bias with the extremity of anchors, and the effects of uncertainty and incentives on the magnitude of the bias. Second, Lieder, Griffiths, Huys, and Goodman (2018a) designed two experiments specifically to test the model's prediction that the anchoring bias should increase with time pressure but decrease with error cost. The first experiment confirmed this prediction in a task where people generated their own anchors, and the second experiment confirmed it in a task where people's anchors were provided by leading questions. ...
... Research in decision-making has begun to unravel how cost-benefit computations shape how long information is sampled for (Drugowitsch et al., 2012; Gluth et al., 2012; Kobayashi et al., 2021; Tajima et al., 2016), and satisficing, whereby decision-makers settle for the first option that satisfies a minimum criterion, i.e. is good enough, is a well-established concept in the field of judgment and decision-making (Bruckner et al., 2020). This perspective leads to an anchor-and-adjust framework for learning that has also been linked to counternormative biases in decisions from description (Kahneman and Tversky, 1972), but might also be thought of as a way to rationally allocate resources for effective learning (Lieder et al., 2018). There is also an established distinction between goal-directed decision-making, as we have covered above, and habitual decision-making (Balleine and O'Doherty, 2010; Niv, 2009). ...
... Although there is suggestive evidence that people complete tasks in a resource-rational manner [96][97][98][99], there are few mechanistic accounts of how this may occur. In our proposal, people can flexibly determine how to allot their cognitive resources in response to a task based on both the importance of the task and the time available before a response is required. ...
Article
Full-text available
We propose the “runtime learning” hypothesis which states that people quickly learn to perform unfamiliar tasks as the tasks arise by using task-relevant instances of concepts stored in memory during mental training. To make learning rapid, the hypothesis claims that only a few class instances are used, but these instances are especially valuable for training. The paper motivates the hypothesis by describing related ideas from the cognitive science and machine learning literatures. Using computer simulation, we show that deep neural networks (DNNs) can learn effectively from small, curated training sets, and that valuable training items tend to lie toward the centers of data item clusters in an abstract feature space. In a series of three behavioral experiments, we show that people can also learn effectively from small, curated training sets. Critically, we find that participant reaction times and fitted drift rates are best accounted for by the confidences of DNNs trained on small datasets of highly valuable items. We conclude that the runtime learning hypothesis is a novel conjecture about the relationship between learning and memory with the potential for explaining a wide variety of cognitive phenomena.
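The article's simulations use deep networks; as a much smaller stand-in for the idea that a few items near the centers of concept clusters make disproportionately valuable training data, the sketch below curates a tiny training set with k-means and compares it against an equally small random set. The dataset, model, and all names are invented for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression

# Toy data standing in for instances of two concepts stored in memory.
X, y = make_blobs(n_samples=400, centers=2, cluster_std=2.0, random_state=0)

def curate(X, y, items_per_class=3):
    """Pick training items closest to k-means cluster centers within each class."""
    idx = []
    for label in np.unique(y):
        members = np.flatnonzero(y == label)
        km = KMeans(n_clusters=items_per_class, n_init=10, random_state=0).fit(X[members])
        for center in km.cluster_centers_:
            dists = np.linalg.norm(X[members] - center, axis=1)
            idx.append(members[np.argmin(dists)])
    return np.array(idx)

curated = curate(X, y)
clf_curated = LogisticRegression().fit(X[curated], y[curated])   # ~6 curated items
random_idx = np.random.default_rng(0).choice(len(X), size=len(curated), replace=False)
clf_random = LogisticRegression().fit(X[random_idx], y[random_idx])
print(clf_curated.score(X, y), clf_random.score(X, y))
```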
... In fact, the effect of information endorsement being comparatively small was entirely expected given it is a relatively less important factor, especially when it comes to the correction. Specifically, in the present study, when participants encountered corrections, they had already formed a level of belief in the misinformation, which may have served as an anchor during correction processing (Hogarth & Einhorn, 1992; Lieder et al., 2018). Moreover, the most salient information provided by the fact-check is whether the initial claim was corrected or affirmed, as opposed to the fact-check's level of endorsement. ...
Article
Full-text available
General Audience Summary Misinformation can be created and spread on social media platforms with relative ease. Additionally, unlike traditional media, common cues of information credibility, such as source expertise and trustworthiness, are often unavailable on social media platforms. As such, determining what to believe versus what to disregard can be difficult. To reduce the effort required to assess information credibility, people may rely on mental shortcuts (or heuristics). Specifically, it is well established that people often look to others, or the majority, when making decisions about what to believe and how to behave. As such, people may rely on engagement metrics on social media (e.g., the number of “likes” or “shares” a post has) to gauge information credibility. Across two experiments, we investigated how level of social endorsement (specifically, whether a social media post has a high or low number of “likes”) influenced the extent to which people believed misinformation, and updated their belief based on subsequent fact-checks. We found that people had greater belief in misinformation with a high versus low level of likes, even after they received a fact-check. Further, fact-checks with a high number of likes reduced belief in misinformation more than fact-checks with a low number of likes. However, evidence for the persistence of the effect of fact-check endorsement on belief updating was mixed. These findings suggest that people may rely on engagement metrics, specifically number of likes, when appraising misinformation on social media. This influence of endorsement information may be reason for concern, particularly given that engagement information can be maliciously manipulated, and misleading and conspiratorial claims often have characteristics designed to enhance their level of endorsement.
... In the absence of any better information, the anchor is usually the most recently remembered prices which are likely to be important determinants of prices today. The empirical study of Lieder et al. (2018) suggests that the anchoring bias results from people's rational use of their finite time and limited cognitive resources, rather than human irrationality. Furnham and Boo (2011) provide a detailed discussion of the anchoring effect. ...
Article
Full-text available
Purpose. When many anomalies challenge the efficient market hypothesis and rationality, behavioral finance theories are developed to investigate the psychological effects on human behaviors and how their cognitive biases explain why the market is inefficient and anomalies exist. Behavioral finance is a fast-growing branch of financial economics, making this review paper beneficial to academics for developing leading-edge usages of financial theory that behavioral finance underlies and undertaking empirical studies on behavioral finance models. This review paper indoctrinates readers into the introductory concepts of behavioral finance with their prominent literature and empirical evidence. Design/methodology/approach. In this review paper, we swiftly familiarize readers with the introductory concepts of behavioral finance and their salient readings with some empirical evidence. Findings. This paper lays the solid foundation of behavioral finance theory and is the centerpiece of modern financial economics, which is useful to academics for developing cutting-edge treatments of financial theory that EMH and behavioral finance underpin and for undertaking empirical studies on the behavioral bias in the financial markets. Practical Implications. This paper is furthermore helpful to investors in making investment products and strategy choices that suit their risk preferences and behavioral traits predicted from behavioral models. This paper also provides the recent empirical evidence of behavioral finance in literature. The readers can then follow the research methods to undertake empirical studies on this field.
... In fact, the effect of information endorsement being comparatively small was entirely expected given it is a relatively less important factor, especially when it comes to the correction. Specifically, in the present study, when participants encountered corrections, they had already formed a level of belief in the misinformation, which may have served as an anchor during correction processing (Hogarth & Einhorn, 1992; Lieder et al., 2018). Moreover, the most salient information provided by the fact-check is whether the initial claim was corrected or affirmed, as opposed to the fact-check's level of endorsement. ...
Preprint
Reliance on misinformation often persists in the face of corrections. However, the role of social factors on people’s reliance on corrected misinformation has received little attention. In two experiments, we investigated the extent to which social endorsement of misinformation and corrections influences belief updating. In both experiments misinformation and fact-checks were presented as social-media posts, and social endorsement was manipulated via the number of “likes”. In Experiment 1, social endorsement of the initial misinformation had a significant influence on belief; participants believed misinformation with high social endorsement more than misinformation with low endorsement. This effect was observed pre-fact-check and post-fact-check. High social endorsement of the fact-checks was associated with reduced misinformation belief; however, evidence for the persistence of this effect was mixed. These findings were replicated in Experiment 2. Our findings indicate that social endorsement can moderate our beliefs in misinformation and the fact-checks designed to correct these beliefs.
... Bayesian statistics have been found to reliably model human cognition, and further allow for the principled incorporation of 'irrational' behavior [11,22,23]. Bayesian modelling has been used to measure the change of people's beliefs on visualization viewing [8,21], and has also been extrapolated to define a signal-detection approach to reason about visualization-based inference [14]. ...
Preprint
Full-text available
Alluvial diagrams are a popular technique for visualizing flow and relational data. However, successfully reading and interpreting the data shown in an alluvial diagram is likely influenced by factors such as data volume, complexity, and chart layout. To understand how alluvial diagram consumption is impacted by its visual features, we conduct two crowdsourced user studies with a set of alluvial diagrams of varying complexity, and examine (i) participant performance on analysis tasks, and (ii) the perceived complexity of the charts. Using the study results, we employ Bayesian modelling to predict participant classification of diagram complexity. We find that, while multiple visual features are important in contributing to alluvial diagram complexity, interestingly the importance of features seems to depend on the type of complexity being modeled, i.e. task complexity vs. perceived complexity.
... This belief-updating mechanism can be seen as an instance of an anchoring-and-adjustment process (Chapman & Johnson, 2002; Epley & Gilovich, 2001, 2006; Hogarth & Einhorn, 1992; Tversky & Kahneman, 1974). Anchoring-and-adjustment models have recently been discussed as a resource-efficient way to combine the result of different cognitive functions (Albrecht et al., 2020; Lieder et al., 2018; Millroth et al., 2019). In our model, people average the probabilities of two sequentially presented pieces of evidence (Hogarth & Einhorn, 1992) but then adjust the probability as a result of a similarity bias. ...
Article
Full-text available
People often take nondiagnostic information into account when revising their beliefs. A decrease in probability judgments due to nondiagnostic information represents the well-established “dilution effect” observed in many domains. Surprisingly, the opposite of the dilution effect, called the “confirmation effect”, has also been observed frequently. The present work provides a unified cognitive model that allows both effects to be explained simultaneously. The suggested similarity-updating model incorporates two psychological components: first, a similarity-based judgment inspired by categorization research, and second, a weighting-and-adding process with an adjustment following a similarity-based confirmation mechanism. Four experimental studies demonstrate the model’s predictive accuracy for probability judgments and belief revision. The participants received a sample of information from one of two options and had to judge from which option the information came. The similarity-updating model predicts that the probability judgment is a function of the similarity of the sample to the options. When one is presented with a new sample, the previous probability judgment is updated with a second probability judgment by taking a weighted average of the two and adjusting the result according to a similarity-based confirmation. The model describes people’s probability judgments well and outcompetes a Bayesian cognitive model and an alternative probability-theory-plus-noise model. The similarity-updating model accounts for several qualitative findings, namely, dilution effects, confirmation effects, order effects, and the finding that probability judgments are invariant to sample size. In sum, the similarity-updating model provides a plausible account of human probability judgment and belief revision.
... In Section 2, anchoring bias explains that, if we start with a small anchor, the estimated score is smaller than its actual score, and if we start with a large anchor, the estimated score is larger than its corresponding actual score. Furthermore, several studies in the field of behavioural psychology show that, when the actual value is increasing (or decreasing) with respect to the anchor, the magnitude of anchoring bias will increase (or decrease) as well (Griffiths et al., 2015; Lieder et al., 2018). These scholars argue that anchoring bias is not caused by human irrationality; it is instead due to human resource-rationality, with the bias being a result of a rational trade-off between the time one needs to spend adjusting and the cost of error due to insufficient adjustment. ...
Article
Full-text available
The aim of this study is to look at anchoring bias – one of the main cognitive biases – in two multi-attribute decision-making methods, SMART and Swing. First, the existence of anchoring bias in these two methods for eliciting attribute weights and attribute values is theorised. Then, a special experiment is designed to compare the results estimated by the respondents and the actual results to measure potential anchoring bias. Data were collected from a sample of university students. The statistical analyses indicate the existence of anchoring bias in the two methods. It is also interesting to see that the impact of anchoring bias in estimates provided by the decision-makers on the obtained weights and values depends on the method that is used. These findings have significant implications for the actual decision-makers. Future research may consider the potential existence of cognitive biases in other multi-attribute decision-making methods and focus on developing mitigation strategies.
... Another implication of MCMC, under the assumption that a small number of hypotheses are sampled, is that inferences will tend to show anchoring effects (i.e., a systematic bias towards the initial hypotheses in the Markov chain). Lieder and colleagues have shown how this idea can account for a wide variety of anchoring effects observed in human cognition (Lieder, Griffiths, & Goodman, 2012; Lieder et al., 2017b). For example, priming someone with an arbitrary number (e.g., the last 4 digits of their social security number) will bias a subsequent judgment (e.g., about the birth date of Gandhi), because the arbitrary number influences the initialization of the Markov chain. ...
Preprint
Full-text available
Bayesian models of cognition assume that people compute probability distributions over hypotheses. However, the required computations are frequently intractable or prohibitively expensive. Since people often encounter many closely related distributions, selective reuse of computations (amortized inference) is a computationally efficient use of the brain’s limited resources. We present three experiments that provide evidence for amortization in human probabilistic reasoning. When sequentially answering two related queries about natural scenes, participants’ responses to the second query systematically depend on the structure of the first query. This influence is sensitive to the content of the queries, only appearing when the queries are related. Using a cognitive load manipulation, we find evidence that people amortize summary statistics of previous inferences, rather than storing the entire distribution. These findings support the view that the brain trades off accuracy and computational cost, to make efficient use of its limited cognitive resources to approximate probabilistic inference.
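The anchoring result mentioned in the first excerpt follows from a generic property of Markov chain Monte Carlo rather than from any particular model: a chain stopped after only a few steps has not forgotten its starting point. The sketch below, which is not the authors' code, illustrates this with a plain Metropolis-Hastings sampler initialized at two different anchors; all names and parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(3)

def short_mcmc_estimate(log_post, anchor, n_steps, proposal_sd=1.0):
    """Metropolis-Hastings chain initialized at an anchor value. With few steps
    the chain has not mixed, so the final state stays biased toward the anchor."""
    x = float(anchor)
    for _ in range(n_steps):
        proposal = x + rng.normal(0.0, proposal_sd)
        if np.log(rng.random()) < log_post(proposal) - log_post(x):
            x = proposal
    return x

# Posterior belief about a quantity centered at 50; anchors at 10 vs. 90.
log_post = lambda x: -0.5 * ((x - 50.0) / 5.0) ** 2
for anchor in (10.0, 90.0):
    estimates = [short_mcmc_estimate(log_post, anchor, n_steps=20) for _ in range(500)]
    print(anchor, round(float(np.mean(estimates)), 1))
```

With only 20 steps, the average reported estimate stays far closer to the initializing anchor than to the mean of the target distribution; with enough steps the two averages would converge.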
... Once the anchor value is posited, all following arguments, estimates, and attitudes are determined with the anchor. As Lieder et al. (2018b) report, anchoring occurs when interpreting future information. We were interested in the emotional dynamics involved in the strategic decision to adopt a reference point in the form of an anchor. ...
Article
Full-text available
We studied the effect of two inconsistent emotions, fear and hope, in strategic decision-making during a competition. We sought to examine which emotion would be more related to whether decision makers accurately and objectively estimate their rival. We developed a nuanced perspective on the effects of trait anxiety on rival estimation by integrating it with the competition shadow. Using a competition simulation and based on data from 221 individuals across two countries, we found support for a predicted effect of trait anxiety on rival estimation. Several theoretical implications are discussed.
... (Einhorn & Hogarth, 1985) (Ganzach, 1996) (Joyce & Biddle, 1981a) (Lieder et al., 2018) (McFadden, 1999) (Mussweiler et al., 2004) (Tversky & Kahneman, 1974) ...
Technical Report
Full-text available
Background: To begin our research into the effects of human decision-making bias in cyber security, we explored an extensive list of decision-making biases, focusing on those with rigorous scientific research and robust empirical findings. From this literature review, a list was created of 87 biases along with definitions, study examples, and references for major and related works. These were presented to the cyber security professionals who related cyber examples. These examples are being compiled into a document that will be submitted for peer-review. While the list presented here is not exhaustive, we believe the survey of relevant biases detailed in the spreadsheet can provide utility to the community.
... As such, SbEU has a firm rational basis, which acknowledges the cognitive limitations people are faced with. Our efforts are simultaneously guided by two well-supported observations about judgment and decision-making under risk: (1) mounting evidence suggests that people often use very few samples in probabilistic judgments and reasoning (Vul et al. 2014; Battaglia et al. 2013; Lake et al. 2017; Gershman, Horvitz, and Tenenbaum 2015; Hertwig and Pleskac 2010; Griffiths et al. 2012; Bonawitz et al. 2014; Lieder et al. 2018a), and (2) people overestimate the probability of extreme events in their judgments (Tversky and Kahneman 1973; Ungemach, Chater, and Stewart 2009; Burns, Chiu, and Wu 2010; Barberis 2013; Lieder et al. 2018b). Unlike SbEU, previous explanations of the St. Petersburg paradox fail to respect at least one of these observations. ...
Article
The St. Petersburg paradox is a centuries-old puzzle concerning a lottery with infinite expected payoff on which people are only willing to pay a small amount to play. Despite many attempts and several proposals, no generally-accepted resolution is yet at hand. In a recent paper, we show that this paradox can be understood in terms of the mind optimally using its limited computational resources (Nobandegani et al. 2019). Specifically, we show that the St. Petersburg paradox can be accounted for by a variant of normative expected-utility valuation which acknowledges cognitive limitations: sample-based expected utility (Nobandegani et al. 2018). SbEU provides a unified, algorithmic explanation of major experimental findings on this paradox. We conclude by discussing the implications of our work for algorithmically understanding human cognition and for developing human-like artificial intelligence.
... Similar to the heuristics and biases encountered in perceptual judgment, the reliance on shared community beliefs reflects a need to optimize one's limited resources for individual cognition and reasoning. In a variety of cognitive tasks and situations, the manifestation of biases such as anchoring may reflect the rational use of these resources, accounting for the costs of additional computation against the diminishing improvements in outcome they provide (Lieder, Griffiths, Huys, & Goodman, 2018). The use of comparatively cheap heuristics may predispose humans to systemic biases in certain cases, but the cost of these biases is outweighed by the benefits of saving limited cognitive resources. ...
Article
Although rationalization about one's own beliefs and actions can improve an individual's future decisions, beliefs can provide other benefits unrelated to their epistemic truth value, such as group cohesion and identity. A model of resource-rational cognition that accounts for these benefits may explain unexpected and seemingly irrational thought patterns, such as belief polarization.
... First, although heuristics are often and justifiably discussed in the context of errors and biases, researchers would be remiss to ignore the fact that these mental shortcuts serve people well under many circumstances. Inferring the size of a population based on two samples with a small overlap appears to be one of these circumstances, a possibility that dovetails with recent research suggesting that when time and cognitive resources are limited, reliance on the anchoring heuristic may, in fact, be rational (Lieder, Griffiths, Huys, & Goodman, 2018a, 2018b). Secondly, these findings suggest that different mechanisms underlie successful versus unsuccessful inferences. ...
Article
Success in the physical and social worlds often requires knowledge of population size. However, many populations cannot be observed in their entirety, making direct assessment of their size difficult, if not impossible. Nevertheless, an unobservable population size can be inferred from observable samples. We measured people’s ability to make such inferences and their confidence in these inferences. Contrary to past work suggesting insensitivity to sample size and failures in statistical reasoning, inferences of population size were accurate—but only when observable samples indicated a large underlying population. When observable samples indicated a small underlying population, inferences were systematically biased. This error, which cannot be attributed to a heuristics account, was compounded by a metacognitive failure: Confidence was highest when accuracy was at its worst. This dissociation between accuracy and confidence was confirmed by a manipulation that shifted the magnitude and variability of people’s inferences without impacting their confidence. Together, these results (a) highlight the mental acuity and limits of a fundamental human judgment and (b) demonstrate an inverse relationship between cognition and metacognition.
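The article does not commit to a particular estimator, but the classic capture-recapture (Lincoln-Petersen) formula is a convenient way to make the inference problem concrete: two samples that barely overlap imply a large underlying population, whereas heavily overlapping samples imply a small one. A minimal sketch, purely for illustration:

```python
def lincoln_petersen(n_first, n_second, n_overlap):
    """Classic capture-recapture estimate of an unobserved population size from
    two samples and their overlap: N is approximately n1 * n2 / overlap."""
    if n_overlap == 0:
        raise ValueError("estimate undefined with no overlap between samples")
    return n_first * n_second / n_overlap

# Two samples of 20 sharing 2 members suggest a population of about 200;
# the same samples sharing 10 members suggest a population of about 40.
print(lincoln_petersen(20, 20, 2))
print(lincoln_petersen(20, 20, 10))
```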
... Ideal observer models in vision are one example in which this approach has been applied successfully (Geisler, 2011). More generally, Bayes-optimal models can be further pursued by investigating simpler algorithms that approximate the ideal strategy, or by imposing bounded rationality constraints, such as limited computational resources, on ideal observer agents (Courville and Daw, 2008; Nassar et al., 2010; Collins and Frank, 2012; Daw and Courville, 2007; Lieder et al., 2018). ...
Article
Full-text available
Computational modeling of behavior has revolutionized psychology and neuroscience. By fitting models to experimental data we can probe the algorithms underlying behavior, find neural correlates of computational variables and better understand the effects of drugs, illness and interventions. But with great power comes great responsibility. Here, we offer ten simple rules to ensure that computational modeling is used with care and yields meaningful insights. In particular, we present a beginner-friendly, pragmatic and details-oriented introduction on how to relate models to data. What, exactly, can a model tell us about the mind? To answer this, we apply our rules to the simplest modeling techniques most accessible to beginning modelers and illustrate them with examples and code available online. However, most rules apply to more advanced techniques. Our hope is that by following our guidelines, researchers will avoid many pitfalls and unleash the power of computational modeling on their own data.
... producing sensitivity to the initialization. For example, probability judgments are influenced by different ways of unpacking the sub-hypotheses of a disjunctive query (Dasgupta et al., 2017) or providing incidental information that serves as an "anchor" (Lieder, Griffiths, Huys, & Goodman, 2018a, 2018b). ...
Preprint
Full-text available
Bayesian theories of cognition assume that people can integrate probabilities rationally. However, several empirical findings contradict this proposition: human probabilistic inferences are prone to systematic deviations from optimality. Puzzlingly, these deviations sometimes go in opposite directions. Whereas some studies suggest that people under-react to prior probabilities (base rate neglect), other studies find that people under-react to the likelihood of the data (conservatism). We argue that these deviations arise because the human brain does not rely solely on a general-purpose mechanism for approximating Bayesian inference that is invariant across queries. Instead, the brain is equipped with a recognition model that maps queries to probability distributions. The parameters of this recognition model are optimized to get the output as close as possible, on average, to the true posterior. Because of our limited computational resources, the recognition model will allocate its resources so as to be more accurate for high probability queries than for low probability queries. By adapting to the query distribution, the recognition model ''learns to infer''. We show that this theory can explain why and when people under-react to the data or the prior, and a new experiment demonstrates that these two forms of under-reaction can be systematically controlled by manipulating the query distribution. The theory also explains a range of related phenomena: memory effects, belief bias, and the structure of response variability in probabilistic reasoning. We also discuss how the theory can be integrated with prior sampling-based accounts of approximate inference.
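The recognition model proposed here is learned from experience with a particular query distribution; the toy calculation below is only a loose analogue of that idea, not the authors' model. It fits a linear map from (prior, likelihood) log-odds to the Bayesian answer, with a ridge penalty standing in for limited representational resources, and shows that an input that rarely varies across queries (here, the prior) ends up under-weighted, resembling base rate neglect. All names and numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(2)

# Queries are (prior log-odds, likelihood log-odds); the exact Bayesian answer is
# their sum. The "recognition model" is a linear map fit to minimize average error
# over the query distribution, with a ridge penalty as a crude resource constraint.
n = 500
prior = rng.normal(0.0, 0.3, n)        # priors vary little across typical queries
likelihood = rng.normal(0.0, 3.0, n)   # likelihoods vary a lot
target = prior + likelihood

X = np.column_stack([prior, likelihood])
lam = 100.0
w = np.linalg.solve(X.T @ X + lam * np.eye(2), X.T @ target)
print(w)  # the weight on the rarely varying prior is shrunk far below 1
```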
... Recently, there has been a renewed interest in modelling decision-making with computational constraints [59, 60], both in the computer science and the neuroscience literature, where there is growing evidence that the human brain might exploit sampling [22, 61-65] for approximate inference and decision-making [66, 67]. Such sampling models have been used for example to explain anchoring biases in choice tasks, because MCMC has finite mixing times and therefore exhibits a dependence on the prior distribution [68, 69]. In particular, the idea of using the (expected) relative entropy or the mutual information as a computational cost has been suggested several times in the literature [2, 3, 23, 33, 70-72]. ...
Article
Full-text available
Living organisms from single cells to humans need to adapt continuously to respond to changes in their environment. The process of behavioural adaptation can be thought of as improving decision-making performance according to some utility function. Here, we consider an abstract model of organisms as decision-makers with limited information-processing resources that trade off between maximization of utility and computational costs measured by a relative entropy, in a similar fashion to thermodynamic systems undergoing isothermal transformations. Such systems minimize the free energy to reach equilibrium states that balance internal energy and entropic cost. When there is a fast change in the environment, these systems evolve in a non-equilibrium fashion because they are unable to follow the path of equilibrium distributions. Here, we apply concepts from non-equilibrium thermodynamics to characterize decision-makers that adapt to changing environments under the assumption that the temporal evolution of the utility function is externally driven and does not depend on the decision-maker’s action. This allows one to quantify performance loss due to imperfect adaptation in a general manner and, additionally, to find relations for decision-making similar to Crooks’ fluctuation theorem and Jarzynski’s equality. We provide simulations of several exemplary decision and inference problems in the discrete and continuous domains to illustrate the new relations.
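A standard result in this free-energy literature is that the optimal policy under a relative-entropy cost is a softmax that interpolates between the prior behavior (when computation is expensive) and pure utility maximization (when it is cheap). The sketch below shows that form; the variable names and numbers are illustrative, not taken from the article.

```python
import numpy as np

def bounded_rational_policy(utilities, prior, beta):
    """Optimal tradeoff between expected utility and the KL-divergence from a
    prior policy: p(a) is proportional to prior(a) * exp(beta * U(a)). As beta -> 0
    the prior is kept (cheap adaptation); as beta grows, the policy concentrates
    on the utility-maximizing action (expensive adaptation)."""
    logits = np.log(np.asarray(prior, dtype=float)) + beta * np.asarray(utilities, dtype=float)
    p = np.exp(logits - logits.max())
    return p / p.sum()

utilities = np.array([1.0, 2.0, 5.0])
prior = np.array([0.5, 0.3, 0.2])
for beta in (0.0, 0.5, 5.0):
    print(beta, np.round(bounded_rational_policy(utilities, prior, beta), 3))
```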
Chapter
Full-text available
Summary: First, an overview of behavioral-economic influencing factors is provided, after which risk attitude and the anchoring effect are examined in more detail. The decision model is then analyzed with respect to risk. A detailed literature review of anchoring behavior follows.
Article
Almaatouq et al.'s prescription for more integrative experimental designs is welcome but does not address an equally important problem: Lack of adequate theories. We highlight two features theories ought to satisfy: “Well-specified” and “grounded.” We discuss the importance of these features, some positive exemplars, and the complementarity between the target article's prescriptions and improved theorizing.
Article
In this letter, we argue that an economic perspective on the mind has played—and should continue to play—a central role in the development of cognitive science. Viewing cognition as the productive application of mental resources puts cognitive science and economics on a common conceptual footing, paving the way for closer collaboration between the two disciplines. This will enable cognitive scientists to more readily repurpose economic concepts and analytical tools for the study of mental phenomena, while at the same time, enriching our understanding of the modern economy, which is increasingly driven by mental, rather than physical, production.
Preprint
Full-text available
After considering a more or less random number (i.e., an anchor), people’s subsequent estimates are biased toward that number. Such anchoring phenomena have been explained via an adjustment process that ends too early. We present a formalized version of the insufficient adjustment model, which captures the idea that decreasing the time that people have to adjust from anchors draws their estimates closer to the anchors. In four independent studies (N = 898), we could not confirm this effect of time on anchoring. Moreover, anchoring effects vanished in the two studies that deviated from classical paradigms by using a visual scale or a two-alternative forced-choice paradigm to allow faster responses. Although we propose that the current version of the insufficient adjustment model should be discarded, we believe that adjustment models hold the most potential for the future of anchoring research, and we make suggestions for what these might look like.
Article
The finding that human decision-making is systematically biased continues to have an immense impact on both research and policymaking. Prevailing views ascribe biases to limited computational resources, which require humans to resort to less costly resource-rational heuristics. Here, we propose that many biases in fact arise due to a computationally costly way of coping with uncertainty—namely, hierarchical inference—which by nature incorporates information that can seem irrelevant. We show how, in uncertain situations, Bayesian inference may avail of the environment’s hierarchical structure to reduce uncertainty at the cost of introducing bias. We illustrate how this account can explain a range of familiar biases, focusing in detail on the halo effect and on the neglect of base rates. In each case, we show how a hierarchical-inference account takes the characterization of a bias beyond phenomenological description by revealing the computations and assumptions it might reflect. Furthermore, we highlight new predictions entailed by our account concerning factors that could mitigate or exacerbate bias, some of which have already garnered empirical support. We conclude that a hierarchical inference account may inform scientists and policy makers with a richer understanding of the adaptive and maladaptive aspects of human decision-making.
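A minimal instance of the article's argument is the conjugate normal case: when an individual's trait is estimated from a noisy observation together with a group-level prior, the optimal (posterior-mean) estimate is shrunk toward the group mean, reducing expected error while introducing a systematic bias. The sketch below shows this textbook calculation; the numbers are invented.

```python
def hierarchical_estimate(observation, group_mean, obs_var, group_var):
    """Posterior mean of an individual's latent trait when the prior is the
    group-level distribution: a precision-weighted average that is less variable
    than the raw observation but biased toward the group mean."""
    w = group_var / (group_var + obs_var)   # weight given to the observation
    return w * observation + (1 - w) * group_mean

# An individual scoring 80 on a noisy test, drawn from a group that averages 60,
# is estimated at 64: lower expected error, but a systematic pull toward the group.
print(hierarchical_estimate(80.0, group_mean=60.0, obs_var=100.0, group_var=25.0))
```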
Article
Electronic health record (EHR) systems allow physicians to automate the process of entering patient data relative to manual entry in traditional paper-based records. However, such automated data entry can lead to increased reimbursement requests by hospitals from Medicare by overstating the complexity of patients. The EHR module that has been alleged to increase reimbursements is the Computerized Physician Order Entry (CPOE) system, which populates patient charts with default templates and allows physicians to copy and paste data from previous charts of the patient and other patients' records. To combat increased reimbursements by hospitals from Medicare, the Centers for Medicare & Medicaid Services implemented the Recovery Audit Program, first as a pilot in six states between 2005 and 2009 and then nationwide across the United States in 2010. We examine whether the adoption of CPOE systems by hospitals is associated with an increase in reported patient complexity and whether the Recovery Audit Program helped to attenuate this relationship. We find that the adoption of CPOE systems significantly increases the patient complexity reported by hospitals, corresponding to an estimated $1 billion increase in Medicare reimbursements per year. This increase was attenuated when hospitals were regulated by the Recovery Audit Program. Notably, those recovery auditors who developed the ability to identify the use of default templates, copied and pasted data, and cloned records were the most effective in reducing increased reimbursements. These findings have implications for how the Recovery Audit Program can combat improper Medicare reimbursements paid with taxpayer dollars and for how this information technology (IT) audit can prevent hospitals from misusing information systems to create artificial IT business value. Contributions to information systems and healthcare research, practice, and public policy are discussed. This paper was accepted by Chris Forman, information systems.
Article
Full-text available
Bayesian theories of cognition assume that people can integrate probabilities rationally. However, several empirical findings contradict this proposition: human probabilistic inferences are prone to systematic deviations from optimality. Puzzlingly, these deviations sometimes go in opposite directions. Whereas some studies suggest that people underreact to prior probabilities (base rate neglect), other studies find that people underreact to the likelihood of the data (conservatism). We argue that these deviations arise because the human brain does not rely solely on a general-purpose mechanism for approximating Bayesian inference that is invariant across queries. Instead, the brain is equipped with a recognition model that maps queries to probability distributions. The parameters of this recognition model are optimized to get the output as close as possible, on average, to the true posterior. Because of our limited computational resources, the recognition model will allocate its resources so as to be more accurate for high probability queries than for low probability queries. By adapting to the query distribution, the recognition model learns to infer. We show that this theory can explain why and when people underreact to the data or the prior, and a new experiment demonstrates that these two forms of underreaction can be systematically controlled by manipulating the query distribution. The theory also explains a range of related phenomena: memory effects, belief bias, and the structure of response variability in probabilistic reasoning. We also discuss how the theory can be integrated with prior sampling-based accounts of approximate inference.
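As a rough illustration of this query-sensitivity, the sketch below fits a deliberately capacity-limited "recognition model": a ridge-regularized linear map from query features to posterior log-odds. The regularizer, sample sizes, and query distributions are illustrative assumptions, not the authors' model; the point is only that the feature that varies little in the training queries ends up with a weight below the Bayes-optimal value of 1, mimicking base-rate neglect or conservatism depending on the query distribution.

```python
# Hedged sketch (assumed settings, not the paper's implementation): a regularized
# "recognition model" trained to approximate Bayesian posteriors under a given
# query distribution underweights whichever query feature varies little there.
import numpy as np

rng = np.random.default_rng(1)

def fit_recognition_model(prior_sd, llr_sd, ridge=50.0, n=2000):
    """Ridge-regress the true posterior log-odds on the query features."""
    prior_logodds = rng.normal(0, prior_sd, n)   # log-odds of the prior in each query
    llr = rng.normal(0, llr_sd, n)               # log-likelihood ratio of the data
    y = prior_logodds + llr                      # exact Bayesian posterior log-odds
    X = np.column_stack([prior_logodds, llr])
    w = np.linalg.solve(X.T @ X + ridge * np.eye(2), X.T @ y)
    return w  # weights on (prior, likelihood); Bayes-optimal weights are (1, 1)

# Queries where the prior rarely varies -> the prior is underweighted (base-rate neglect).
print("prior weight, llr weight:", np.round(fit_recognition_model(prior_sd=0.3, llr_sd=2.0), 2))
# Queries where the data rarely vary -> the likelihood is underweighted (conservatism).
print("prior weight, llr weight:", np.round(fit_recognition_model(prior_sd=2.0, llr_sd=0.3), 2))
```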
Article
Humans frequently make inferences about uncertain future events with limited data. A growing body of work suggests that infants and other primates make surprisingly sophisticated inferences under uncertainty. First, we ask what underlying cognitive mechanisms allow young learners to make such sophisticated inferences under uncertainty. We outline three possibilities, the logic, probabilistic, and heuristics views, and assess the empirical evidence for each. We argue that the weight of the empirical work favors the probabilistic view, in which early reasoning under uncertainty is grounded in inferences about the relationship between samples and populations as opposed to being grounded in simple heuristics. Second, we discuss the apparent contradiction between this early-emerging sensitivity to probabilities and the decades of literature suggesting that adults show limited use of base-rate and sampling principles in their inductive inferences. Third, we ask how these early inductive abilities can be harnessed to improve later mathematics education and inductive inference. We make several suggestions for future empirical work that should go a long way in addressing the many remaining open questions in this growing research area.
Article
Bayesian models of cognition assume that people compute probability distributions over hypotheses. However, the required computations are frequently intractable or prohibitively expensive. Since people often encounter many closely related distributions, selective reuse of computations (amortized inference) is a computationally efficient use of the brain's limited resources. We present three experiments that provide evidence for amortization in human probabilistic reasoning. When sequentially answering two related queries about natural scenes, participants' responses to the second query systematically depend on the structure of the first query. This influence is sensitive to the content of the queries, only appearing when the queries are related. Using a cognitive load manipulation, we find evidence that people amortize summary statistics of previous inferences, rather than storing the entire distribution. These findings support the view that the brain trades off accuracy and computational cost, to make efficient use of its limited cognitive resources to approximate probabilistic inference.
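A toy version of this reuse idea (hypothetical scene categories and noise levels, not the authors' stimuli or task) is sketched below: a second, related query reuses a cached summary statistic from the first query rather than re-running inference, so its answer depends on, and inherits the sampling error of, the earlier computation.

```python
# Minimal, hypothetical sketch of amortization via cached summary statistics
# (not the authors' experimental model).
import numpy as np

rng = np.random.default_rng(2)

TRUE_MEANS = {"cars": 4.0, "trees": 10.0}   # assumed scene statistics

def noisy_estimate(category, n_samples=20):
    """Monte-Carlo estimate of the expected count for one category."""
    return rng.normal(TRUE_MEANS[category], 2.0, n_samples).mean()

cache = {}

def expected_count(category):
    if category not in cache:                # amortization: compute once, reuse later
        cache[category] = noisy_estimate(category)
    return cache[category]

q1 = expected_count("cars")                                 # first query
q2 = expected_count("cars") + expected_count("trees")       # related composite query
print(f"Q1 (cars)        : {q1:.2f}")
print(f"Q2 (cars + trees): {q2:.2f}  <- reuses Q1's cached estimate, error and all")
```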
Article
Full-text available
The human brain has the impressive capacity to adapt how it processes information to high-level goals. While it is known that these cognitive control skills are malleable and can be improved through training, the underlying plasticity mechanisms are not well understood. Here, we develop and evaluate a model of how people learn when to exert cognitive control, which controlled process to use, and how much effort to exert. We derive this model from a general theory according to which the function of cognitive control is to select and configure neural pathways so as to make optimal use of finite time and limited computational resources. The central idea of our Learned Value of Control model is that people use reinforcement learning to predict the value of candidate control signals of different types and intensities based on stimulus features. This model correctly predicts the learning and transfer effects underlying the adaptive control-demanding behavior observed in an experiment on visual attention and four experiments on interference control in Stroop and Flanker paradigms. Moreover, our model explained these findings significantly better than an associative learning model and a Win-Stay Lose-Shift model. Our findings elucidate how learning and experience might shape people’s ability and propensity to adaptively control their minds and behavior. We conclude by predicting under which circumstances these learning mechanisms might lead to self-control failure.
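A heavily simplified sketch in the spirit of the model described above follows (hypothetical task, features, and parameters; not the authors' Learned Value of Control model): linear features of the stimulus predict the value of each candidate control intensity, the agent picks the intensity with the best predicted value net of an assumed effort cost, and the predictions are updated from the observed reward.

```python
# Toy value-of-control learner (assumed task and parameters, for illustration only).
import numpy as np

rng = np.random.default_rng(3)

intensities = np.array([0.0, 1.0])        # candidate control-signal intensities
effort_cost = 0.2                         # assumed cost per unit of control
weights = np.zeros((len(intensities), 2)) # value-prediction weights per intensity
alpha, epsilon = 0.05, 0.1                # learning and exploration rates (assumptions)

def features(conflict):
    return np.array([1.0, conflict])      # bias term + stimulus conflict level

def trial(conflict):
    phi = features(conflict)
    net_value = weights @ phi - effort_cost * intensities
    choice = rng.integers(len(intensities)) if rng.random() < epsilon else int(np.argmax(net_value))
    # Assumed task structure: exerting control pays off only on high-conflict trials.
    p_correct = 0.55 + 0.4 * intensities[choice] * conflict
    reward = float(rng.random() < p_correct)
    weights[choice] += alpha * (reward - weights[choice] @ phi) * phi   # delta-rule update

for _ in range(4000):
    trial(conflict=float(rng.choice([0.0, 1.0])))

for conflict in (0.0, 1.0):
    net = weights @ features(conflict) - effort_cost * intensities
    print(f"conflict={conflict:.0f}: learned net value of intensities {intensities} = {np.round(net, 2)}")
```

After learning, the predicted net value of high control exceeds that of no control only on high-conflict trials, i.e., the agent learns when exerting control is worth its cost.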
Technical Report
Full-text available
This technical report compares alternative computational models of numerical estimation using Bayesian model selection. We find that people's estimates are best explained by a resource-rational model of anchoring and adjustment according to which the number of adjustments increases with error cost but decreases with time cost so as to achieve an optimal speed-accuracy tradeoff.
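The speed-accuracy tradeoff described in this report can be illustrated with a toy calculation. The geometric error decay and the cost values below are assumptions for illustration, not the report's fitted parameters; they merely show how the cost-minimizing number of adjustments rises with error cost and falls with time cost.

```python
# Toy speed-accuracy tradeoff (assumed error decay and costs, not the fitted model).
import numpy as np

def optimal_num_adjustments(error_cost, time_cost, initial_error=100.0,
                            error_decay=0.9, max_steps=200):
    steps = np.arange(max_steps + 1)
    expected_error = initial_error * error_decay ** steps   # assumed geometric decay
    total_cost = error_cost * expected_error + time_cost * steps
    return int(np.argmin(total_cost))

for error_cost, time_cost in [(1.0, 1.0), (5.0, 1.0), (1.0, 5.0)]:
    n = optimal_num_adjustments(error_cost, time_cost)
    print(f"error cost {error_cost}, time cost {time_cost} -> {n} adjustments")
```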
Article
Full-text available
Cognitive biases, such as the anchoring bias, pose a serious challenge to rational accounts of human cognition. We investigate whether rational theories can meet this challenge by taking into account the mind’s bounded cognitive resources. We asked what reasoning under uncertainty would look like if people made rational use of their finite time and limited cognitive resources. To answer this question, we applied a mathematical theory of bounded rationality to the problem of numerical estimation. Our analysis led to a rational process model that can be interpreted in terms of anchoring-and-adjustment. This model provided a unifying explanation for ten anchoring phenomena including the differential effect of accuracy motivation on the bias towards provided versus self-generated anchors. Our results illustrate the potential of resource-rational analysis to provide formal theories that can unify a wide range of empirical results and reconcile the impressive capacities of the human mind with its apparently irrational cognitive biases.
Article
Full-text available
In spite of its familiar phenomenology, the mechanistic basis for mental effort remains poorly understood. Although most researchers agree that mental effort is aversive and stems from limitations in our capacity to exercise cognitive control, it is unclear what gives rise to those limitations and why they result in an experience of control as costly. The presence of these control costs also raises further questions regarding how best to allocate mental effort to minimize those costs and maximize the attendant benefits. This review explores recent advances in computational modeling and empirical research aimed at addressing these questions at the level of psychological process and neural mechanism, examining both the limitations to mental effort exertion and how we manage those limited cognitive resources. We conclude by identifying remaining challenges for theoretical accounts of mental effort as well as possible applications of the available findings to understanding the causes of and potential solutions for apparent failures to exert the mental effort required of us.
Article
Full-text available
Bayesian inference provides a unifying framework for addressing problems in machine learning, artificial intelligence, and robotics, as well as the problems facing the human mind. Unfortunately, exact Bayesian inference is intractable in all but the simplest models. Therefore minds and machines have to approximate Bayesian inference. Approximate inference algorithms can achieve a wide range of time-accuracy tradeoffs, but what is the optimal tradeoff? We investigate time-accuracy tradeoffs using the Metropolis-Hastings algorithm as a metaphor for the mind's inference algorithm(s). We find that reasonably accurate decisions are possible long before the Markov chain has converged to the posterior distribution, i.e., during the period known as "burn-in". Therefore the strategy that is optimal subject to the mind's bounded processing speed and opportunity costs may perform so few iterations that the resulting samples are biased towards the initial value. The resulting cognitive process model provides a rational basis for the anchoring-and-adjustment heuristic. The model's quantitative predictions are tested against published data on anchoring in numerical estimation tasks.
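A minimal sketch of this mechanism, under an assumed Gaussian belief and proposal distribution (not the paper's fitted model), shows how stopping a Metropolis-Hastings chain after only a few steps leaves the estimate anchored near its starting value, while longer chains converge to the posterior mean.

```python
# Toy Metropolis-Hastings anchoring demo (assumed posterior and step size).
import numpy as np

rng = np.random.default_rng(4)

def log_posterior(x, mean=50.0, sd=5.0):
    return -0.5 * ((x - mean) / sd) ** 2          # assumed Gaussian belief

def mh_estimate(anchor, n_iter, proposal_sd=2.0):
    x = anchor
    for _ in range(n_iter):
        proposal = x + rng.normal(0, proposal_sd)
        if np.log(rng.random()) < log_posterior(proposal) - log_posterior(x):
            x = proposal
    return x

anchor = 90.0                                      # anchor far above the posterior mean of 50
for n_iter in [3, 10, 100, 1000]:
    est = np.mean([mh_estimate(anchor, n_iter) for _ in range(200)])
    print(f"{n_iter:4d} iterations -> mean estimate {est:.1f}")
```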
Article
Full-text available
One way to make judgments under uncertainty is to anchor on information that comes to mind and adjust until a plausible estimate is reached. This anchoring-and-adjustment heuristic is assumed to underlie many intuitive judgments, and insufficient adjustment is commonly invoked to explain judgmental biases. However, despite extensive research on anchoring effects, evidence for adjustment-based anchoring biases has only recently been provided, and the causes of insufficient adjustment remain unclear. This research was designed to identify the origins of insufficient adjustment. The results of two sets of experiments indicate that adjustments from self-generated anchor values tend to be insufficient because they terminate once a plausible value is reached (Studies 1a and 1b) unless one is able and willing to search for a more accurate estimate (Studies 2a-2c). One strategy for estimating unknown quantities is to start with …
Article
Full-text available
Two experiments examined the impact of financial incentives and forewarnings on judgmental anchoring effects, or the tendency for judgments of uncertain qualities to be biased in the direction of salient anchor values. Previous research has found no effect of either manipulation on the magnitude of anchoring effects. We argue, however, that anchoring effects are produced by multiple mechanisms—one involving an effortful process of adjustment from “self-generated” anchors, and another involving the biased recruitment of anchor-consistent information from “externally provided” anchors—and that only the former should be influenced by incentives and forewarning. Two studies confirmed these predictions, showing that responses to “self-generated” anchors are influenced by both incentives and forewarnings whereas responses to “externally provided” anchors are not. Discussion focuses on the implications of these effects for debiasing efforts. Copyright © 2005 John Wiley & Sons, Ltd.
Article
Full-text available
Increasing accuracy motivation (e.g., by providing monetary incentives for accuracy) often fails to increase adjustment away from provided anchors, a result that has led researchers to conclude that people do not effortfully adjust away from such anchors. We challenge this conclusion. First, we show that people are typically uncertain about which way to adjust from provided anchors and that this uncertainty often causes people to believe that they have initially adjusted too far away from such anchors (Studies 1a and 1b). Then, we show that although accuracy motivation fails to increase the gap between anchors and final estimates when people are uncertain about the direction of adjustment, accuracy motivation does increase anchor-estimate gaps when people are certain about the direction of adjustment, and that this is true regardless of whether the anchors are provided or self-generated (Studies 2, 3a, 3b, and 5). These results suggest that people do effortfully adjust away from provided anchors but that uncertainty about the direction of adjustment makes that adjustment harder to detect than previously assumed. This conclusion has important theoretical implications, suggesting that currently emphasized distinctions between anchor types (self-generated vs. provided) are not fundamental and that ostensibly competing theories of anchoring (selective accessibility and anchoring-and-adjustment) are complementary.
Article
Full-text available
One way to make judgments under uncertainty is to anchor on information that comes to mind and adjust until a plausible estimate is reached. This anchoring-and-adjustment heuristic is assumed to underlie many intuitive judgments, and insufficient adjustment is commonly invoked to explain judgmental biases. However, despite extensive research on anchoring effects, evidence for adjustment-based anchoring biases has only recently been provided, and the causes of insufficient adjustment remain unclear. This research was designed to identify the origins of insufficient adjustment. The results of two sets of experiments indicate that adjustments from self-generated anchor values tend to be insufficient because they terminate once a plausible value is reached (Studies 1a and 1b) unless one is able and willing to search for a more accurate estimate (Studies 2a-2c).
Article
Many decisions are based on beliefs concerning the likelihood of uncertain events such as the outcome of an election, the guilt of a defendant, or the future value of the dollar. Occasionally, beliefs concerning uncertain events are expressed in numerical form as odds or subjective probabilities. In general, the heuristics are quite useful, but sometimes they lead to severe and systematic errors. The subjective assessment of probability resembles the subjective assessment of physical quantities such as distance or size. These judgments are all based on data of limited validity, which are processed according to heuristic rules. However, the reliance on this rule leads to systematic errors in the estimation of distance. This chapter describes three heuristics that are employed in making judgments under uncertainty. The first is representativeness, which is usually employed when people are asked to judge the probability that an object or event belongs to a class or event. The second is the availability of instances or scenarios, which is often employed when people are asked to assess the frequency of a class or the plausibility of a particular development, and the third is adjustment from an anchor, which is usually employed in numerical prediction when a relevant value is available.
Article
The role of conversational processes in quantitative judgment is addressed. In three studies, precise numbers (e.g., $29.75) had a stronger influence on subsequent estimates than round numbers (e.g., $30), but only when they were presented by a human communicator whose contributions could be assumed to observe the Gricean maxims of cooperative conversational conduct. Numeric precision exerted no influence when the numbers were presented as the result of an automated procedure that lacks communicative intent (Study 1) or when the level of precision was pragmatically irrelevant for the estimation task (Study 2). (c) 2013 Elsevier Inc. All rights reserved.
Article
The authors describe a method for the quantitative study of anchoring effects in estimation tasks. A calibration group provides estimates of a set of uncertain quantities. Subjects in the anchored condition first judge whether a specified number (the anchor) is higher or lower than the true value before estimating each quantity. The anchors are set at predetermined percentiles of the distribution of estimates in the calibration group (15th and 85th percentiles in this study). This procedure permits the transformation of anchored estimates into percentiles in the calibration group, allows pooling of results across problems, and provides a natural measure of the size of the effect. The authors illustrate the method by a demonstration that the initial judgment of the anchor is susceptible to an anchoring-like bias and by an analysis of the relation between anchoring and subjective confidence.
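A small sketch of the measure this method yields follows (with made-up data standing in for the calibration and anchored groups, not the authors' stimuli): anchored estimates are converted to percentile ranks within the calibration distribution, and the size of the anchoring effect is the percentile shift between the high- and low-anchor conditions.

```python
# Toy percentile-based anchoring measure (illustrative data, not the study's).
import numpy as np

rng = np.random.default_rng(5)

calibration = rng.lognormal(mean=3.0, sigma=0.5, size=200)     # unanchored estimates
low_anchor, high_anchor = np.percentile(calibration, [15, 85])

def to_percentile(estimates, calibration):
    return np.array([np.mean(calibration <= e) * 100 for e in estimates])

# Hypothetical anchored groups whose estimates are pulled toward the anchors.
low_group = 0.6 * calibration[:100] + 0.4 * low_anchor
high_group = 0.6 * calibration[100:] + 0.4 * high_anchor

anchoring_index = (np.median(to_percentile(high_group, calibration))
                   - np.median(to_percentile(low_group, calibration)))
print(f"low anchor = {low_anchor:.1f}, high anchor = {high_anchor:.1f}")
print(f"anchoring index (percentile shift): {anchoring_index:.1f}")
```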
Article
This article described three heuristics that are employed in making judgements under uncertainty: (i) representativeness, which is usually employed when people are asked to judge the probability that an object or event A belongs to class or process B; (ii) availability of instances or scenarios, which is often employed when people are asked to assess the frequency of a class or the plausibility of a particular development; and (iii) adjustment from an anchor, which is usually employed in numerical prediction when a relevant value is available. These heuristics are highly economical and usually effective, but they lead to systematic and predictable errors. A better understanding of these heuristics and of the biases to which they lead could improve judgements and decisions in situations of uncertainty.
Article
This article reviews a diverse set of proposals for dual processing in higher cognition within largely disconnected literatures in cognitive and social psychology. All these theories have in common the distinction between cognitive processes that are fast, automatic, and unconscious and those that are slow, deliberative, and conscious. A number of authors have recently suggested that there may be two architecturally (and evolutionarily) distinct cognitive systems underlying these dual-process accounts. However, it emerges that (a) there are multiple kinds of implicit processes described by different theorists and (b) not all of the proposed attributes of the two kinds of processing can be sensibly mapped on to two systems as currently conceived. It is suggested that while some dual-process theories are concerned with parallel competing processes involving explicit and implicit knowledge systems, others are concerned with the influence of preconscious processes that contextualize and shape deliberative reasoning and decision-making.
Lieder, F., Goodman, N. D., & Griffiths, T. L. (2013). Reverse-engineering resource-efficient algorithms [Paper presented at the NIPS-2013 Workshop on Resource-Efficient ML, Lake Tahoe, USA].
Russo, J. E., & Schoemaker, P. J. H. (1989). Decision traps: Ten barriers to brilliant decision-making and how to overcome them. Simon & Schuster.
Stanovich, K. E. (2009). Decision making and rationality in the modern world. Oxford University Press.