Article

Abstract

Confirmation bias is defined as searching for and assimilating information in a way that favours existing beliefs. We show that confirmation bias emerges as a natural consequence of boundedly rational belief updating by presenting the BIASR model (Bayesian updating with an Independence Approximation and Source Reliability). In this model, an individual’s beliefs about a hypothesis and the source reliability form a Bayesian network. Upon receiving information, an individual simultaneously updates beliefs about the hypothesis in question and the reliability of the information source. If the individual updates rationally then this introduces numerous dependencies between beliefs, the tracking of which represents an unrealistic demand on memory. We propose that human cognition overcomes this memory limitation by assuming independence between beliefs, evidence for which is provided in prior research. We show how a Bayesian belief updating model incorporating this independence approximation generates many types of confirmation bias, including biased evaluation, biased assimilation, attitude polarisation, belief perseverance and confirmation bias in the selection of sources.
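
As a rough illustration of the updating scheme the abstract describes, the Python sketch below (a minimal sketch with invented likelihood values, not the authors' implementation) keeps only the marginal beliefs P(H) and P(R), re-forms their product as the joint prior before each report, and updates both marginals on a source that repeatedly asserts H. Belief in the hypothesis and belief in the source's reliability reinforce one another, which is the dynamic the paper links to confirmation bias.

    # Minimal sketch (not the authors' code): joint update over hypothesis H and
    # source reliability R, keeping only the marginals between reports
    # (the independence approximation). All likelihood values are invented.
    P_H, P_R = 0.6, 0.5  # current marginals: P(H = true), P(R = reliable)

    def likelihood(report_asserts_h, h, r):
        """P(report | H = h, R = r): a reliable source mostly reports the truth,
        an unreliable source reports at random (illustrative numbers)."""
        if r:
            return 0.9 if report_asserts_h == h else 0.1
        return 0.5

    def update(p_h, p_r, report_asserts_h):
        joint = {}
        for h in (True, False):
            for r in (True, False):
                prior = (p_h if h else 1 - p_h) * (p_r if r else 1 - p_r)
                joint[(h, r)] = prior * likelihood(report_asserts_h, h, r)
        z = sum(joint.values())
        new_p_h = (joint[(True, True)] + joint[(True, False)]) / z
        new_p_r = (joint[(True, True)] + joint[(False, True)]) / z
        return new_p_h, new_p_r  # only the marginals are carried forward

    # A source that repeatedly asserts H: belief in H and perceived reliability
    # of the source rise together across reports.
    for _ in range(3):
        P_H, P_R = update(P_H, P_R, report_asserts_h=True)
        print(round(P_H, 3), round(P_R, 3))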


... They are much more common online due, in part, to the ease of finding like-minded people or groups [8] and disconnecting from those with conflicting opinions [9,10]. Once formed, echo chamber members may assume that each piece of information shared within the chamber is reliable when, in fact, it is biased by their selection into the chamber [11,12]. With repeated exposure to seemingly confirmatory information, the views of the echo chamber may be adopted and when these views are incorrect, a false belief may be formed [12][13][14]. ...
Article
Full-text available
The Internet and social media have facilitated the spread of misinformation and the formation of echo chambers online. These echo chambers may facilitate the adoption of false beliefs and associated costs, but the mechanism of their formation remains a matter of debate. Based on Spiral of Silence Theory, sanctions against opposing views in the form of toxic online behaviour may enable not only the suppression of minority views but also the formation of echo chambers, as those with suppressed minority views may attempt to find like-minded individuals with whom they can safely share their opinions while avoiding toxic reprisals from those with an opposing view. In the current paper, we introduce the Pro- and Anti-Science Opinions Model (PASOM), an agent-based model in which agents decide between a pro- or anti-science view on a single science-based topic. PASOM uniquely allows agents to choose whether to interact toxically or persuasively. Initial simulations showed that toxic behaviour in the model could push agents into echo chambers and drive them to adopt strong pro- or anti-science views, with most agents in all simulations finishing in an echo chamber. Subsequent simulations demonstrated the importance of toxic behaviour to these outcomes: reducing agents' propensity to behave toxically and their sensitivity to toxic behaviour resulted in concurrent reductions in echo chamber formation. Finally, simulation outcomes were compared with previously reported social media data and successfully reproduced the outcomes observed in the empirical data. The various results suggest that toxic behaviour, and people's responses to it, may be important factors in the formation of echo chambers and in differences between social media platforms and topics.
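
For readers who want a feel for this class of model, here is a deliberately minimal agent-based sketch in Python of the mechanism described above: agents either persuade (pulling a listener's opinion toward their own) or act toxically (hardening the listener's view and severing the tie). It is not the authors' PASOM implementation, and every parameter value is an invented assumption.

    import random

    # Loose sketch of toxic-vs-persuasive interactions driving opinion clustering.
    random.seed(1)
    N, STEPS = 50, 20000
    TOXIC_PROPENSITY = 0.4  # chance a disagreeing speaker behaves toxically

    opinions = [random.random() for _ in range(N)]  # 0 = anti-science, 1 = pro-science
    ties = {(i, j) for i in range(N) for j in range(i + 1, N)}  # fully connected start

    for _ in range(STEPS):
        if not ties:
            break
        i, j = random.choice(sorted(ties))
        speaker, listener = (i, j) if random.random() < 0.5 else (j, i)
        disagree = abs(opinions[speaker] - opinions[listener]) > 0.3
        if disagree and random.random() < TOXIC_PROPENSITY:
            # Toxic exchange: the listener hardens their view and cuts the tie.
            opinions[listener] += 0.1 * (opinions[listener] - opinions[speaker])
            ties.discard((i, j))
        else:
            # Persuasive exchange: the listener moves a little toward the speaker.
            opinions[listener] += 0.1 * (opinions[speaker] - opinions[listener])
        opinions[listener] = min(1.0, max(0.0, opinions[listener]))

    pro = sum(o > 0.5 for o in opinions)
    print(f"pro: {pro}  anti: {N - pro}  surviving ties: {len(ties)}")
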
... This seems like a reasonable stance. It is also the basis of Bayesian approaches where new data (somehow) emerges and we can update our beliefs and knowledge accordingly, providing us an "optimal way to update beliefs given new evidence" (Pilgrim et al., 2024). This is indeed the implicit stance of cognitive approaches that focus on computational and probabilistic belief updating (e.g., Dasgupta et al., 2020). ...
... We argue that the positive aspects of data-belief asymmetry are also extremely important to understand, where humans rightly fail to update beliefs based on (existing) data, and where their seemingly delusional belief turns out to be correct. Importantly, in this context, the degree or strength of belief does not need to be directly tied to commensurate or symmetric data or evidence, as it is in extant models of cognition (e.g., Pilgrim et al., 2024; Pinker, 2021). Beliefs have a causal role of their own and can be measured by our propensity to act on them (Ramsey, 1931). ...
Preprint
Full-text available
Scholars argue that AI can generate genuine novelty and new knowledge, and in turn, that AI and computational models of cognition will replace human decision making under uncertainty. We disagree. We argue that AI’s data-based prediction is different from human theory-based causal logic and reasoning. We highlight problems with the decades-old analogy between computers and minds as input-output devices, using large language models (LLMs) as an example. Human cognition is better conceptualized as a form of theory-based causal reasoning rather than AI’s emphasis on information processing and data-based prediction. AI uses a probability-based approach to knowledge and is largely backward-looking and imitative, while human cognition is forward-looking and capable of generating genuine novelty. We introduce the idea of “data-belief asymmetries” to highlight the difference between AI and human cognition, using the example of “heavier-than-air flight” to illustrate our arguments. Theory-based causal reasoning provides a cognitive mechanism for humans to “intervene” in the world and to engage in directed experimentation to generate new data. Throughout the article we discuss the implications of our argument for understanding the origins of novelty, new knowledge, and decision making under uncertainty.
... This seems like a reasonable stance. It is also the basis of Bayesian approaches in which new data (somehow) emerges and we can update our beliefs and knowledge accordingly, providing us an "optimal way to update beliefs given new evidence" (Pilgrim et al. 2024). This is indeed the implicit stance of cognitive approaches that focus on computational and probabilistic belief updating (e.g., Dasgupta et al. 2020). ...
Article
Full-text available
Scholars argue that artificial intelligence (AI) can generate genuine novelty and new knowledge and, in turn, that AI and computational models of cognition will replace human decision making under uncertainty. We disagree. We argue that AI’s data-based prediction is different from human theory-based causal logic and reasoning. We highlight problems with the decades-old analogy between computers and minds as input–output devices, using large language models as an example. Human cognition is better conceptualized as a form of theory-based causal reasoning rather than AI’s emphasis on information processing and data-based prediction. AI uses a probability-based approach to knowledge and is largely backward looking and imitative, whereas human cognition is forward-looking and capable of generating genuine novelty. We introduce the idea of data–belief asymmetries to highlight the difference between AI and human cognition, using the example of heavier-than-air flight to illustrate our arguments. Theory-based causal reasoning provides a cognitive mechanism for humans to intervene in the world and to engage in directed experimentation to generate new data. Throughout the article, we discuss the implications of our argument for understanding the origins of novelty, new knowledge, and decision making under uncertainty.
Article
Full-text available
Cognitive dysfunction, and the resulting social behaviours, contribute to major social problems, ranging from polarisation to the spread of conspiracy theories. Most previous studies have explored these problems at a specific scale: individual, group, or societal. This study develops a synthesis that links models of cognitive failures at these three scales. First, cognitive limits and innate drives can lead to dysfunctional cognition in individuals. Second, cognitive biases and social effects further influence group behaviour. Third, social networks cause cascading effects that increase the intensity and scale of dysfunctional group behaviour. Advances in communications and information technology, especially the Internet and AI, have exacerbated established problems by accelerating the spread of false beliefs and false interpretations on an unprecedented scale, and have become an enabler for emergent effects hitherto only seen on a smaller scale. Finally, this study explores mechanisms used to manipulate people's beliefs by exploiting these biases and behaviours, notably gaslighting, propaganda, fake news, and promotion of conspiracy theories.
Article
Full-text available
This research paper examines how disinformation campaigns harm self-determination processes, focusing on the psychological mechanisms by which public opinion can be manipulated, including confirmation bias and emotional manipulation. This body of work takes on the task of understanding how these tactics can affect political choices, from voting behavior to support for a protest, hampering the development of democracy. The paper reveals the real-world consequences of disinformation for self-determination movements through case studies of significant events, such as the Brexit referendum and the 2016 U.S. presidential election. It is vital to have high-level international legal frameworks and accountability mechanisms to address this problem. The proposals include establishing international conventions, increasing transparency in social media, strengthening public media literacy, and establishing partnerships among stakeholders. This study concludes that protecting self-determination rights demands shared work to address misinformation, ensuring that people can engage with accurate information when participating in democratic processes. Addressing these challenges ensures respect for the integrity of public discourse and the fundamental rights of people worldwide in an increasingly complex information landscape.
Preprint
Large language models (LLMs) have been shown to face hallucination issues because the data they are trained on often contain human biases; whether this is reflected in the decision-making process of LLM agents remains under-explored. As LLM agents are increasingly employed in intricate social environments, a pressing and natural question emerges: can LLM agents leverage hallucinations to mirror human cognitive biases, thus exhibiting irrational social intelligence? In this paper, we probe the irrational behavior among contemporary LLM agents by melding practical social science experiments with theoretical insights. Specifically, we propose CogMir, an open-ended Multi-LLM Agents framework that utilizes hallucination properties to assess and enhance LLM agents' social intelligence through cognitive biases. Experimental results on CogMir subsets show that LLM agents and humans exhibit high consistency in irrational and prosocial decision-making under uncertain conditions, underscoring the prosociality of LLM agents as social entities and highlighting the significance of hallucination properties. Additionally, the CogMir framework demonstrates its potential as a valuable platform for encouraging more research into the social intelligence of LLM agents.
Article
Full-text available
Human reasoning in hypothesis-testing tasks like Wason's (1966, 1968) selection task has been depicted as prone to systematic biases. However, performance on this task has been assessed against a now outmoded falsificationist philosophy of science. Therefore, the experimental data is reassessed in the light of a Bayesian model of optimal data selection in inductive hypothesis testing. The model provides a rational analysis (Anderson, 1990) of the selection task that fits well with people's performance on both abstract and thematic versions of the task. The model suggests that reasoning in these tasks may be rational rather than subject to systematic bias.
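
The quantity at the heart of such optimal-data-selection analyses is the expected reduction in uncertainty about the hypotheses from examining a piece of data. The short Python sketch below computes it for a generic two-hypothesis case; the priors and likelihoods are illustrative assumptions, not the parameters of Oaksford and Chater's selection-task model.

    import math

    # Expected information gain of one query: how much, on average, seeing the
    # outcome reduces entropy over the hypotheses. Numbers are illustrative.
    prior = {"H1": 0.5, "H2": 0.5}
    lik = {  # P(outcome | hypothesis) for the query being considered
        "H1": {"confirm": 0.9, "disconfirm": 0.1},
        "H2": {"confirm": 0.3, "disconfirm": 0.7},
    }

    def entropy(dist):
        return -sum(p * math.log2(p) for p in dist.values() if p > 0)

    def expected_gain(prior, lik):
        gain = 0.0
        for outcome in ("confirm", "disconfirm"):
            p_out = sum(prior[h] * lik[h][outcome] for h in prior)
            posterior = {h: prior[h] * lik[h][outcome] / p_out for h in prior}
            gain += p_out * (entropy(prior) - entropy(posterior))
        return gain

    print(round(expected_gain(prior, lik), 3))  # queries with higher gain are preferred
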
Article
Full-text available
After viewing identical samples of major network television coverage of the Beirut massacre, both pro-Israeli and pro-Arab partisans rated these programs, and those responsible for them, as being biased against their side. This hostile media phenomenon appears to involve the operation of two separate mechanisms. First, partisans evaluated the fairness of the media's sample of facts and arguments differently, in light of their own divergent views about the objective merits of each side's case and their corresponding views about the nature of unbiased coverage. Second, partisans reported different perceptions and recollections about the program content itself; that is, each group reported more negative references to their side than positive ones, and each predicted that the coverage would sway nonpartisans in a hostile direction. Within both partisan groups, furthermore, greater knowledge of the crisis was associated with stronger perceptions of media bias. Charges of media bias, we concluded, may reflect more than self-serving attempts to secure preferential treatment. They may result from the operation of basic cognitive and perceptual mechanisms, mechanisms that should prove relevant to perceptions of fairness or objectivity in a wide range of mediation and negotiation contexts.
Article
Full-text available
Proposes that several biases in social judgment result from a failure to consider possibilities at odds with beliefs and perceptions of the moment. Individuals who are induced to consider the opposite position, therefore, should display less bias in social judgment. In 2 experiments, with 150 undergraduates, this reasoning was applied to 2 domains: biased assimilation of new evidence on social issues and biased hypothesis testing of personality impressions. Ss were induced to consider the opposite through explicit instructions to do so and through stimulus materials that made opposite possibilities more salient. In both experiments, the induction of a consider-the-opposite strategy had greater corrective effect than more demand-laden alternative instructions to be as fair and unbiased as possible. Results are consistent with previous research on perseverance, hindsight, and logical problem solving, and they suggest an effective method of retraining social judgment.
Article
Full-text available
Polarization and extremism are often viewed as the product of psychological biases or social influences, yet they still occur in the absence of any bias or irrational thinking. We show that individual decision-makers implementing optimal dynamic decision strategies will become polarized, forming extreme views relative to the true information in their environment by virtue of how they sample new information. Extreme evidence enables decision makers to stop considering new information, whereas weak or moderate evidence is unlikely to trigger a decision and is thus under-sampled. We show that this information polarization effect arises empirically across choice domains including politically-charged, affect-rich and affect-poor, and simple perceptual decisions. However, this effect can be disincentivized by asking participants to make a judgment about the difference between two options (estimation) rather than deciding. We experimentally test this intervention by manipulating participants' inference goals (decision vs. estimation) in an information sampling task. We show that participants in the estimation condition collect more information, hold less extreme views, and are less polarized than those in the decision condition. Estimation goals therefore offer a theoretically motivated intervention that could be used to alleviate polarization and extremism in situations where people traditionally intend to decide.
Article
Full-text available
Psychological studies show that the beliefs of two agents in a hypothesis can diverge even if both agents receive the same evidence. This phenomenon of belief polarisation is often explained by invoking biased assimilation of evidence, where the agents’ prior views about the hypothesis affect the way they process the evidence. We suggest, using a Bayesian model, that even if such influence is excluded, belief polarisation can still arise by another mechanism. This alternative mechanism involves differential weighting of the evidence arising when agents have different initial views about the reliability of their sources of evidence. We provide a systematic exploration of the conditions for belief polarisation in Bayesian models which incorporate opinions about source reliability, and we discuss some implications of our findings for the psychological literature.
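
A small numerical illustration of that mechanism in Python (the likelihoods are invented, not the paper's): two agents share the same prior on the hypothesis and hear the same report asserting it, but one largely trusts the source while the other suspects it is misleading, so their posteriors move in opposite directions.

    def posterior_H(prior_H, p_truthful):
        """P(H | source asserts H), marginalising over whether the source is
        truthful or misleading. A truthful source asserts H with prob 0.9 when H
        is true (0.1 when false); a misleading source does the reverse.
        Illustrative numbers only."""
        def p_assert(h):
            return p_truthful * (0.9 if h else 0.1) + (1 - p_truthful) * (0.1 if h else 0.9)
        num = prior_H * p_assert(True)
        return num / (num + (1 - prior_H) * p_assert(False))

    print(round(posterior_H(0.5, p_truthful=0.9), 2))  # trusting agent:    0.82
    print(round(posterior_H(0.5, p_truthful=0.1), 2))  # distrusting agent: 0.18
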
Article
Full-text available
Confirmation bias is one of the most widely discussed epistemically problematic cognitions, challenging reliable belief formation and the correction of inaccurate views. Given its problematic nature, it remains unclear why the bias evolved and is still with us today. To offer an explanation, several philosophers and scientists have argued that the bias is in fact adaptive. I critically discuss three recent proposals of this kind before developing a novel alternative, what I call the ‘reality-matching account’. According to the account, confirmation bias evolved because it helps us influence people and social structures so that they come to match our beliefs about them. This can result in significant developmental and epistemic benefits for us and other people, ensuring that over time we don’t become epistemically disconnected from social reality but can navigate it more easily. While that might not be the only evolved function of confirmation bias, it is an important one that has so far been neglected in the theorizing on the bias.
Article
Full-text available
Bayesian theories of cognition assume that people can integrate probabilities rationally. However, several empirical findings contradict this proposition: human probabilistic inferences are prone to systematic deviations from optimality. Puzzlingly, these deviations sometimes go in opposite directions. Whereas some studies suggest that people underreact to prior probabilities (base rate neglect), other studies find that people underreact to the likelihood of the data (conservatism). We argue that these deviations arise because the human brain does not rely solely on a general-purpose mechanism for approximating Bayesian inference that is invariant across queries. Instead, the brain is equipped with a recognition model that maps queries to probability distributions. The parameters of this recognition model are optimized to get the output as close as possible, on average, to the true posterior. Because of our limited computational resources, the recognition model will allocate its resources so as to be more accurate for high probability queries than for low probability queries. By adapting to the query distribution, the recognition model learns to infer. We show that this theory can explain why and when people underreact to the data or the prior, and a new experiment demonstrates that these two forms of underreaction can be systematically controlled by manipulating the query distribution. The theory also explains a range of related phenomena: memory effects, belief bias, and the structure of response variability in probabilistic reasoning. We also discuss how the theory can be integrated with prior sampling-based accounts of approximate inference.
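
The toy Python example below caricatures this idea under strong simplifying assumptions (it is not the authors' model): the "recognition model" is so resource-limited that it answers every query with a single learned prior q, fitted to minimise error weighted by how often each query occurs. The fit is accurate for the common queries but effectively ignores the stated base rate of the rare query, producing the kind of underreaction to priors the abstract discusses.

    # Query-specific priors P(H) and how often each query type occurs (invented).
    priors = [0.50, 0.30, 0.05]
    freqs  = [0.70, 0.25, 0.05]

    def post(p, lik_h=0.8, lik_not_h=0.4):  # exact P(H | datum) for prior p
        return p * lik_h / (p * lik_h + (1 - p) * lik_not_h)

    def weighted_error(q):  # average squared error under the query distribution
        return sum(f * (post(q) - post(p)) ** 2 for p, f in zip(priors, freqs))

    q_hat = min((i / 1000 for i in range(1, 1000)), key=weighted_error)
    print(round(q_hat, 2))                               # lands near the common priors
    print(round(post(q_hat), 2), round(post(0.05), 2))   # rare query: answer vs. truth
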
Article
Full-text available
The paper introduces, compares and contrasts formal models of source reliability proposed in the epistemology literature, in particular the prominent models of Bovens and Hartmann (2003) and Olsson (2011). All are Bayesian models seeking to provide normative guidance, yet they differ subtly in assumptions and resulting behavior. Models are evaluated both on conceptual grounds and through simulations, and the relationship between models is clarified. The simulations both show surprising similarities and highlight relevant differences between these models. Most importantly, however, our evaluations reveal that important normative concerns arguably remain unresolved. The philosophical implications of this for testimony are discussed.
Article
Full-text available
The rational status of the Bayesian calculus for revising likelihoods is compromised by the common but still unfamiliar phenomenon of information distortion. This bias is the distortion in the evaluation of a new datum toward favoring the currently preferred option in a decision or judgment. While the Bayesian calculus requires the independent combination of the prior probability and a new datum, information distortion invalidates such independence (because the prior influences the datum). Although widespread, information distortion has not generally been recognized. First, individuals are not aware when they themselves commit this bias. In addition, it is often hidden in more obvious suboptimal phenomena. Finally, the Bayesian calculus is usually explained only with undistortable data like colored balls drawn randomly. Partly because information distortion is unrecognized by the individuals exhibiting it, no way has been devised for eliminating it. Partial reduction is possible in some situations such as presenting all data simultaneously rather than sequentially with revision after each datum. The potential dangers of information distortion are illustrated for three professional revision tasks: forecasting, predicting consumer choices from internet data, and statistical inference from experimental results. The optimality of the Bayesian calculus competes with people's natural desire that their belief systems remain coherent in the face of new data. Information distortion provides this coherence by biasing those data toward greater agreement with the currently preferred position—but at the cost of Bayesian optimality.
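
A compact way to see the point about independence: in sequential Bayesian updating each datum's likelihood ratio is combined with the current odds but must not be altered by them, whereas a distorted updater inflates each datum in favour of whichever option currently leads. The Python sketch below contrasts the two on the same equivocal data; the distortion factor and likelihood ratios are invented for illustration.

    def update(p, lr):  # posterior P(A) from prior p and likelihood ratio P(d|A)/P(d|B)
        odds = p / (1 - p) * lr
        return odds / (1 + odds)

    data = [2.0, 0.5, 0.5]   # three equivocal data, given as likelihood ratios
    BIAS = 1.3               # invented distortion factor toward the current leader

    p_bayes = p_distort = 0.5
    for lr in data:
        p_bayes = update(p_bayes, lr)
        tilt = BIAS if p_distort > 0.5 else (1 / BIAS if p_distort < 0.5 else 1.0)
        p_distort = update(p_distort, lr * tilt)  # datum read in light of the leader

    print(round(p_bayes, 3), round(p_distort, 3))  # 0.333 vs. a value pulled toward A
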
Article
Full-text available
This paper examines the basic question of how we can come to form accurate beliefs about the world when we do not fully know how good or bad our evidence is. Here we show, using simulations with otherwise optimal agents, the cost of misjudging the quality of our evidence, and compare different strategies for correctly estimating that quality, such as outcome- and expectation-based updating. We identify conditions under which misjudgment of evidence quality can nevertheless lead to accurate beliefs, as well as those conditions where no strategy will help. These results indicate both where people will nevertheless succeed and where they will fail when information quality is degraded.
Article
Full-text available
Much of what we believe we know, we know through the testimony of others (Coady, 1992). While there has been long-standing evidence that people are sensitive to the characteristics of the sources of testimony, for example in the context of persuasion, researchers have only recently begun to explore the wider implications of source reliability considerations for the nature of our beliefs. Likewise, much remains to be established concerning what factors influence source reliability. In this paper, we examine, both theoretically and empirically, the implications of using message content as a cue to source reliability. We present a set of experiments examining the relationship between source information and message content in people's responses to simple communications. The results show that people spontaneously revise their beliefs in the reliability of the source on the basis of the expectedness of a source's claim and, conversely, adjust message impact by perceived reliability; hence source reliability and message content have a bi-directional relationship. The implications are discussed for a variety of psychological, philosophical and political issues such as belief polarization and dual-route models of persuasion.
Article
Full-text available
Why do humans reason? Many animals draw inferences, but reasoning—the tendency to produce and respond to reason-giving performances—is biologically unusual, and demands evolutionary explanation. Mercier and Sperber (Behav Brain Sci 34:57–111, 2011) advance our understanding of reason’s adaptive function with their argumentative theory of reason (ATR). On this account, the “function of reason is argumentative… to devise and evaluate arguments intended to persuade.” ATR, they argue, helps to explain several well-known cognitive biases. In this paper, I develop a neighboring hypothesis called the intention alignment model (IAM) and contrast it with ATR. I conjecture that reasoning evolved primarily because it helped social hominins more readily and fully align their intentions. We use reasons to advance various proximal ends, but in the main, we do it to overwrite the beliefs and desires of others: to get others to think like us. Reason afforded our ancestors a powerful way to build and maintain the shared outlooks necessary for a highly collaborative existence. Yes, we sometimes argue so as to gain argumentative advantage over others, or otherwise advantage ourselves at the expense of those we argue with, but more often, we reason in ways that are mutually advantageous. In fact, there are excellent reasons for thinking this must be so. IAM, I suggest, neatly explains the available evidence, while also providing a more coherent account of reason’s origins.
Article
Full-text available
Confirmation bias, as the term is typically used in the psychological literature, connotes the seeking or interpreting of evidence in ways that are partial to existing beliefs, expectations, or a hypothesis in hand. The author reviews evidence of such a bias in a variety of guises and gives examples of its operation in several practical contexts. Possible explanations are considered, and the question of its utility or disutility is discussed.
Article
Full-text available
A basic challenge for probabilistic models of cognition is explaining how probabilistically correct solutions are approximated by the limited brain, and how to explain mismatches with human behavior. An emerging approach to solving this problem is to use the same approximation algorithms that have been developed in computer science and statistics for working with complex probabilistic models. Two types of approximation algorithms have been used for this purpose: sampling algorithms, such as importance sampling and Markov chain Monte Carlo, and variational algorithms, such as mean-field approximations and assumed density filtering. Here I briefly review this work, outlining how the algorithms work, how they can explain behavioral biases, and how they might be implemented in the brain. There are characteristic differences between how these two types of approximation are applied in brain and behavior, which points to how they could be combined in future research.
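
As a concrete instance of the sampling family the review covers, the few lines of Python below estimate a posterior mean by self-normalised importance sampling, using the prior as the proposal; the model (a uniform prior on a coin's bias with 7 successes in 10 trials) is chosen only for illustration.

    import random

    random.seed(0)
    k, n, n_samples = 7, 10, 5000  # 7 successes out of 10; sample budget

    samples = [random.random() for _ in range(n_samples)]     # proposal = Uniform(0, 1) prior
    weights = [t ** k * (1 - t) ** (n - k) for t in samples]  # Bernoulli likelihood weights
    estimate = sum(w * t for w, t in zip(weights, samples)) / sum(weights)

    print(round(estimate, 3))  # exact posterior mean is (k + 1) / (n + 2) ≈ 0.667
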
Article
Full-text available
Belief polarization occurs when 2 people with opposing prior beliefs both strengthen their beliefs after observing the same data. Many authors have cited belief polarization as evidence of irrational behavior. We show, however, that some instances of polarization are consistent with a normative account of belief revision. Our analysis uses Bayesian networks to characterize different kinds of relationships between hypotheses and data, and distinguishes between cases in which normative reasoners with opposing beliefs should both strengthen their beliefs, cases in which both should weaken their beliefs, and cases in which one should strengthen and the other should weaken his or her belief. We apply our analysis to several previous studies of belief polarization and present a new experiment that suggests that people tend to update their beliefs in the directions predicted by our normative account. (PsycINFO Database Record (c) 2014 APA, all rights reserved).
Article
Full-text available
Two studies examined (a) whether biased assimilation and attitude polarization occur in the processing of stereotype-relevant scientific information and (b) the role of affect in these processes. In Study 1, individuals high or low in prejudice toward homosexuals read two fictitious studies, one confirming and one disconfirming the stereotype of homosexuality. Study 2 replicated Study 1 using a sample including individuals with moderate attitudes about homosexuality. Evidence of biased assimilation was found. Participants perceived research consistent with their attitude about homosexuality as more convincing than research inconsistent with their attitude. Evidence of attitude polarization was also found but was restricted to measures of perceived attitude change. Finally, participants reported more negative affective reactions after attitude-inconsistent than attitude-consistent information, and evidence was found that these affective reactions mediated biased processing. Implications of the results for biased assimilation, attitude polarization, and the resiliency of prejudicial attitudes are discussed.
Article
Full-text available
The deficit-model of science communication assumes increased communication about science issues will move public opinion toward the scientific consensus. However, in the case of climate change, public polarization about the issue has increased in recent years, not diminished. In this study, we draw from theories of motivated reasoning, social identity, and persuasion to examine how science-based messages may increase public polarization on controversial science issues such as climate change. Exposing 240 adults to simulated news stories about possible climate change health impacts on different groups, we found the influence of identification with potential victims was contingent on participants’ political partisanship. This partisanship increased the degree of political polarization on support for climate mitigation policies and resulted in a boomerang effect among Republican participants. Implications for understanding the role of motivated reasoning within the context of science communication are discussed.
Article
Full-text available
Perhaps the simplest and the most basic qualitative law of probability is the conjunction rule: The probability of a conjunction, P(A&B), cannot exceed the probabilities of its constituents, P(A) and P(B), because the extension (or the possibility set) of the conjunction is included in the extension of its constituents. Judgments under uncertainty, however, are often mediated by intuitive heuristics that are not bound by the conjunction rule. A conjunction can be more representative than one of its constituents, and instances of a specific category can be easier to imagine or to retrieve than instances of a more inclusive category. The representativeness and availability heuristics therefore can make a conjunction appear more probable than one of its constituents. This phenomenon is demonstrated in a variety of contexts, including estimation of word frequency, personality judgment, medical prognosis, decision under risk, suspicion of criminal acts, and political forecasting. Systematic violations of the conjunction rule are observed in judgments of lay people and of experts in both between- and within-Ss comparisons. Alternative interpretations of the conjunction fallacy are discussed, and attempts to combat it are explored. (48 ref) (PsycINFO Database Record (c) 2012 APA, all rights reserved)
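
The rule itself follows in one line from the chain rule of probability (written here in LaTeX notation), and symmetrically for P(B):

    P(A \wedge B) = P(A)\,P(B \mid A) \le P(A), \qquad \text{since } 0 \le P(B \mid A) \le 1 .
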
Article
Full-text available
Cognitive dissonance theory assumes that man is a rationalizing animal, actively defending himself by means of distortion and denial against information which contradicts deeply held beliefs. In contrast, recent critiques of dissonance theory by D. J. Bem (1967) and others picture man as a rational, if fallible, information processor. A study is reported in which 50 adolescent female high school students were given a chance to commit themselves publicly to a religious belief and were then faced with information which seemed to disconfirm that belief. Consistent with dissonance interpretations of earlier field studies, Ss who both expressed belief and accepted the veracity of the disconfirming information subsequently expressed a significant increase in intensity of belief. This reaction was not found among Ss who either had not expressed initial belief or had not accepted the veracity of the disconfirming information. Possible limitations on the generality of these results are emphasized. (18 ref) (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Article
Full-text available
People who hold strong opinions on complex social issues are likely to examine relevant empirical evidence in a biased manner. They are apt to accept "confirming" evidence at face value while subjecting "disconfirming" evidence to critical evaluation, and, as a result, draw undue support for their initial positions from mixed or random empirical findings. Thus, the result of exposing contending factions in a social dispute to an identical body of relevant empirical evidence may be not a narrowing of disagreement but rather an increase in polarization. To test these assumptions, 48 undergraduates supporting and opposing capital punishment were exposed to 2 purported studies, one seemingly confirming and one seemingly disconfirming their existing beliefs about the deterrent efficacy of the death penalty. As predicted, both proponents and opponents of capital punishment rated those results and procedures that confirmed their own beliefs to be the more convincing and probative ones, and they reported corresponding shifts in their beliefs as the various results and procedures were presented. The net effect of such evaluations and opinion shifts was the postulated increase in attitude polarization. (28 ref) (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Article
Full-text available
A number of philosophers and psychologists stress the importance of disconfirmation in reasoning and suggest that people are instead prone to a general deleterious "confirmation bias." In particular, it is suggested that people tend to test those cases that have the best chance of verifying current beliefs rather than those that have the best chance of falsifying them. We show, however, that many phenomena labeled "confirmation bias" are better understood in terms of a general positive test strategy. With this strategy, there is a tendency to test cases that are expected (or known) to have the property of interest rather than those expected (or known) to lack that property. We show that the positive test strategy can be a very good heuristic for determining the truth or falsity of a hypothesis under realistic conditions. It can, however, lead to systematic errors or inefficiencies. (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Article
Full-text available
Explores the potential of Bayesian inference as a theoretical framework for describing how people evaluate hypotheses. First, a set of logically possible forms of non-Bayesian behavior is identified. Second, existing research is reviewed in a variety of areas to see whether these possibilities are ever realized. The analysis shows that in some situations several apparently distinct phenomena are usefully viewed as special cases of the same kind of behavior, whereas in other situations previous investigations have conferred a common label (e.g., confirmation bias) to several distinct phenomena. A number of attributions of judgmental bias are called into question, and it is suggested that in some cases the bias is different than what has previously been claimed, whereas in others there may be no bias at all. (89 ref) (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Article
Full-text available
Investigated the effects of varying distributions of success and failure on attributions of intellectual ability. In Exp. I-IV undergraduate Ss confronted a stimulus person who solved 15 out of 30 problems in a random, descending, or ascending success pattern. In Exp. V only the descending and ascending patterns were compared. Contrary to prediction, the performer who showed improvement (ascending success) was not consistently judged to be more able than the performer with randomly spaced successes. The performer with a descending success rate, however, was consistently judged to be more intelligent and was expected to outperform those with either ascending or random patterns. Memory for past performance was uniformly distorted in favor of recalling more success for the descending performer and less success for the ascending and random performers. Neither this measure nor ratings of intelligence required, for their discriminating effects, that S himself solve the problems in parallel with the person being judged. In the final experiment S himself performed in an improving, deteriorating, or random but stable fashion, and estimated his future performance. Under these circumstances, the ascending performer was more confident about his ability than the descending or random performer, reversing the picture of the 1st 5 experiments. Results are discussed in terms of the salience of early information in attributing ability and the role of social comparison processes. (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Article
Full-text available
Four studies examined the generality of attitude polarization (C. G. Lord et al, 1979). Biased assimilation of essays on 2 controversial issues was substantial and correlated with reported attitude change. Polarization was observed for reported attitude change on capital punishment and generally stronger in Ss with extreme than moderate attitudes. Polarization was not indicated in a pre–post measurement design. For affirmative action, reported polarization was not observed. The hypothesis that Ss reporting polarization would subsequently write particularly strong essays was not supported, although those reporting depolarization wrote relatively weak essays. The results suggest the relevance of individual differences in reported attitude change but do not confirm the powerful inferences frequently drawn regarding the pervasive, undesirable consequences of self-reported attitude polarization. (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Article
Full-text available
A total of 130 Ss in 2 experiments within a debriefing paradigm examined the perseverance of social theories. Ss were initially given 2 case studies suggestive of either a positive or a negative relationship between risk taking and success as a firefighter. Some Ss were asked to provide a written explanation of the relationship; others were not. Experimental Ss were thoroughly debriefed concerning the fictitious nature of the initial case studies; some Ss were not debriefed. Subsequent assessments of Ss' personal beliefs about the relationship indicated that even when initially based on weak data, social theories can survive the total discrediting of that initial evidential base. Correlational and experimental results suggest that such unwarranted theory perseverance may be mediated, in part, by the cognitive process of formulating causal scenarios or explanations. Normative issues and the cognitive processes underlying perseverance are examined, and possible techniques for overcoming unwarranted theory perseverance are discussed. (20 ref) (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Article
Full-text available
Examined the process leading to the confirmation of a perceiver's expectancies about another when the social label that created the expectancy provides poor or tentative evidence about another's true dispositions or capabilities. Ss were 67 undergraduates. One group was led to believe that a child came from a high SES background; the other group, that the child came from a low SES background. Nothing in the SES data conveyed information directly relevant to the child's ability level, and when asked, both groups reluctantly rated the child's ability level to be approximately at grade level. Two other groups received the SES information and then witnessed a videotape of the child taking an academic test. Although the videotaped series was identical for all Ss, those who had information that the child came from a high SES rated her abilities well above grade level, whereas those for whom the child was identified as coming from a lower-class background rated her abilities as below grade level. Both groups cited evidence from the ability test to support their conclusions. Findings are interpreted as suggesting that some "stereotype" information creates not certainties but hypotheses about the stereotyped individual. However, these hypotheses are often tested in a biased fashion that leads to their false confirmation. (33 ref) (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Chapter
The Nature of Insight brings together diverse perspectives, including recent theories and discoveries, to examine the nature and origins of insightful thinking, as well as the history of theory and research on the topic and the methods used to study it. There are chapters by the leading experts in this field, including Mihaly Csikszentmihalyi, Ronald Finke, Howard Gruber, Marcel Just, David Meyer, David Perkins, Dean Simonton, and Robert Weisberg, among others. The Nature of Insight is divided into five main parts. Following an introduction that reviews the history and methods of the field, part II looks at how people solve challenging puzzles whose answers cannot be obtained through ordinary means. Part III focuses on how people come up with ideas for new inventions, while part IV explores the thinking of some of the most insightful people in the history of civilization. Part V considers metaphors such as evolution and investment as bases for understanding insight. An epilogue integrates all these approaches. Contributors R.E. Mayer, R.L. Dominowsk, P. Dallob, C.M. Seifert, D.E. Meyer, N. Davidson, A.J. Patalano, I. Yaniv, J.E. Davidson, R.W. Weisberg, M.L. Gick, R.S. Lockhart, S.M. Smith, R.A. Finke, M.I. Isaak, M.A. Just, M. Csikszentmihalyi, K. Sawyer, K. Dunbar, H.E. Gruber, M.F. Ippolito, R.D. Tweney, D.K. Simonton, D.N. Perkins, R.J. Sternberg, T.I. Lubart Bradford Books imprint
Preprint
Bayesian principles show up across many domains of human cognition, but wishful thinking—where beliefs are updated in the direction of desired outcomes rather than what the evidence implies—seems to threaten the universality of Bayesian approaches to the mind. In this paper, we show that Bayesian optimality and wishful thinking are, despite first appearances, compatible. The setting of opposing goals can cause two groups of people with identical prior beliefs to reach opposite conclusions about the same evidence through fully Bayesian calculations. We show that this is possible because, when people set goals, they receive privileged information in the form of affective experiences, and this information systematically supports goal-consistent conclusions. We ground this idea in a formal, Bayesian model in which affective prediction errors drive wishful thinking. We obtain empirical support for our model across five studies.
Article
Background: In recent history mass vaccination has proved essential to dealing with pandemics. However, the effectiveness of a vaccine depends on the number of people willing to take it. One approach to encouraging uptake is to publish information about safety and effectiveness. But confirmation bias research in other domains suggests that people may evaluate this information through the lens of their existing beliefs. Methods: This study used a simple 2 × 2 design to investigate whether people's (n = 3899) existing beliefs influenced their ability to correctly evaluate data from a fictional trial presented in a frequency table. Treatment groups saw different trial outcomes (intervention effective versus ineffective and trial related versus unrelated to vaccines). Results: Results provided robust evidence for confirmation bias in the domain of vaccines: people made systematic errors (P < 0.01) when evaluating evidence that was inconsistent with their prior beliefs. This pattern emerged among people with both pro-vaccination and anti-vaccination attitudes. Errors were attributed to confirmation bias because no such differences were detected when participants evaluated data unrelated to vaccines. Conclusions: People are prone to misinterpreting evidence about vaccines in ways that reflect their underlying beliefs. Confirmation bias is an important consideration for vaccine communication.
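
To make the evaluation task concrete, here is a hypothetical frequency table of the kind such studies use, with invented numbers (not those of the study), and the rate comparison that constitutes the correct reading, shown in Python.

    #                 improved   did not improve
    # treated             223            75
    # untreated           107            21
    treated_rate = 223 / (223 + 75)        # ~0.75
    untreated_rate = 107 / (107 + 21)      # ~0.84
    # Correct reading: compare rates, not raw counts; here the untreated group
    # did better, so the (hypothetical) intervention looks ineffective.
    print(round(treated_rate, 2), round(untreated_rate, 2))
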
Article
Human beliefs have remarkable robustness in the face of disconfirmation. This robustness is often explained as the product of heuristics or motivated reasoning. However, robustness can also arise from purely rational principles when the reasoner has recourse to ad hoc auxiliary hypotheses. Auxiliary hypotheses primarily function as the linking assumptions connecting different beliefs to one another and to observational data, but they can also function as a “protective belt” that explains away disconfirmation by absorbing some of the blame. The present article traces the role of auxiliary hypotheses from philosophy of science to Bayesian models of cognition and a host of behavioral phenomena, demonstrating their wide-ranging implications.
Article
Participants: Among Australians, consensus information partially neutralized the influence of worldview, with free-market supporters showing a greater increase in acceptance of human-caused global warming relative to free-market opponents. In contrast, while consensus information overall had a positive effect on perceived consensus among U.S. participants, there was a reduction in perceived consensus and acceptance of human-caused global warming for strong supporters of unregulated free markets. Fitting a Bayes net model to the data indicated that under a Bayesian framework, free-market support is a significant driver of beliefs about climate change and trust in climate scientists. Further, active distrust of climate scientists among a small number of U.S. conservatives drives contrary updating in response to consensus information among this particular group.
Chapter
This chapter considers the question of how learning adapts to changing environments, with particular reference to animal studies of operant and classical conditioning. It discusses a variety of probabilistic models, with different assumptions concerning the environment, and contrasts this type of model with a model by Kruschke (2006), which carries out local, approximate Bayesian inference. It further suggests that it may be too early to incorporate mechanistic limitations into models of conditioning: enriching the understanding of the environment, and working with a 'pure' Bayesian rational analysis for that environment, may provide an alternative, and perhaps theoretically more elegant, way forward.
Article
Two consumer choice experiments reveal distortion of product information. When relatively equivocal information about two hypothetical brands is acquired one attribute at a time, the evaluation of a subsequent attribute is distorted to support the brand that emerges as the leader. This distortion in favor of the leading brand occurs in the absence of any prior brand preference and even when no choice is required. In the latter case, brand preference is formed spontaneously and privately. The magnitude of this predecisional information distortion is roughly double the well-known postdecisional distortion due to cognitive dissonance. A second study shows that, even when the product information is diagnostic, substantial distortion remains. Furthermore, when the diagnostic information leads to a reversal of the currently preferred brand, distortion reappears in support of the new leading brand. The implications of predecisional distortion of product information are discussed for the presentation order of brands, the presentation format of product attributes, and the potential bias in preference assessment techniques, such as conjoint measurement, that rely on pairwise choices.
Chapter
In this chapter, we provide a historical overview of research on bias in human cognition, ranging from early work in psychology through the detailed, quantitative examinations of belief revision in the 1960s, the Heuristics and Biases program initiated by Kahneman and Tversky, and bias-focused research in personality and social psychology. Different notions of "bias" are identified and compared with the notion of bias in statistics, machine learning, and signal detection theory. Comparison with normative models then forms the basis for a critical look at the evidence that people succumb to motivated reasoning aimed at enabling them "to believe what they want to believe."
Book
Probabilistic models have much to offer to philosophy. We continually receive information from a variety of sources: from our senses, from witnesses, from scientific instruments. When considering whether we should believe this information, we assess whether the sources are independent, how reliable they are, and how plausible and coherent the information is. Bovens and Hartmann provide a systematic Bayesian account of these features of reasoning. Simple Bayesian networks allow us to model alternative assumptions about the nature of the information sources. Measurement of the coherence of information is a controversial matter: arguably, the more coherent a set of information is, the more confident we may be that its content is true, other things being equal. The authors offer a new treatment of coherence which respects this claim and shows its relevance to scientific theory choice. Bovens and Hartmann apply this methodology to a wide range of much-discussed issues regarding evidence, testimony, scientific theories and voting. "Bayesian Epistemology" is for anyone working on probabilistic methods in philosophy, and has broad implications for many other disciplines.
Article
Different levels of analysis provide different insights into behavior: computational-level analyses determine the problem an organism must solve and algorithmic-level analyses determine the mechanisms that drive behavior. However, many attempts to model behavior are pitched at a single level of analysis. Research into human and animal learning provides a prime example, with some researchers using computational-level models to understand the sensitivity organisms display to environmental statistics but other researchers using algorithmic-level models to understand organisms’ trial order effects, including effects of primacy and recency. Recently, attempts have been made to bridge these two levels of analysis. Locally Bayesian Learning (LBL) creates a bridge by taking a view inspired by evolutionary psychology: Our minds are composed of modules that are each individually Bayesian but communicate with restricted messages. A different inspiration comes from computer science and statistics: Our brains are implementing the algorithms developed for approximating complex probability distributions. We show that these different inspirations for how to bridge levels of analysis are not necessarily in conflict by developing a computational justification for LBL. We demonstrate that a scheme that maximizes computational fidelity while using a restricted factorized representation produces the trial order effects that motivated the development of LBL. This scheme uses the same modular motivation as LBL, passing messages about the attended cues between modules, but does not use the rapid shifts of attention considered key for the LBL approximation. This work illustrates a new way of tying together psychological and computational constraints.
Article
The present experiment examined decision speed, measured without the subject's knowledge, and sequential confidence revision in a two-choice decision task. Subjects were presented with ten sequences of 20 events from one of two data-generating devices. Before each event was presented, the subjects predicted the event outcome, and after each event they decided which of the two data-generating devices was being used and gave a judgment of confidence in this decision. Following events that disconfirmed their favored hypothesis, subjects demonstrated a marked resistance to decreasing their confidence level (Inertia Effect). Two possible explanations of the Inertia Effect were postulated and their predictions were compared with the data. A commitment hypothesis assumed that subjects were unwilling to reduce their stated confidence following commitment to a decision, while a pattern-effect hypothesis was based upon a subject's hypothetical expectancies, reflected in his predictions. Both hypotheses were partially supported; the Inertia Effect was a function of subjects' predictions and was accompanied by a decrease in decision speed.
Article
Subjects for whom a health threat was relevant or irrelevant were recruited and matched on prior beliefs in the health threat. Following exposure to either a low- or a high-threat message, high-relevance subjects were less likely to believe in the threat. Consistent with earlier work, no evidence was found to suggest that defensive inattention to the messages mediated subjects' final beliefs. Instead, processing measures suggested that high-relevance subjects processed threatening parts of both messages in a biased fashion. The relationship between biased judgment and biased processing is discussed, as are the difficulties in documenting the latter.
Article
We propose a model of motivated skepticism that helps explain when and why citizens are biased information processors. Two experimental studies explore how citizens evaluate arguments about affirmative action and gun control, finding strong evidence of a prior attitude effect such that attitudinally congruent arguments are evaluated as stronger than attitudinally incongruent arguments. When reading pro and con arguments, participants (Ps) counterargue the contrary arguments and uncritically accept supporting arguments, evidence of a disconfirmation bias. We also find a confirmation bias—the seeking out of confirmatory evidence—when Ps are free to self-select the source of the arguments they read. Both the confirmation and disconfirmation biases lead to attitude polarization—the strengthening of t2 over t1 attitudes—especially among those with the strongest priors and highest levels of political sophistication. We conclude with a discussion of the normative implications of these findings for rational behavior in a democracy.
Article
Berkowitz and Devine (this issue) cite Lord, Ross, and Lepper's study of attitude polarization as evidence that current researchers are reinventing dissonance results without realizing it. They also cite Cooper and Fazio's review as evidence that current researchers have excessively narrowed the scope of dissonance theory. They regard these tendencies to overlook and narrow dissonance theory as symptomatic of a pervasive analytic, rather than synthetic, approach to studying human social behavior. A review of the evidence cited by Berkowitz and Devine suggests, however, that other interpretations are possible.
Article
This chapter reviews research concerning a variety of confirmation biases and discusses what they have in common and where they differ. The overall picture is one of heterogeneous, complex, and inconsistent phenomena, from which it is nevertheless possible to discern a general direction, namely a general tendency for people to believe too much in their favored hypothesis. The chapter discusses ideas about how to reconcile the apparent heterogeneity and the apparent generality of confirmation biases. There has been considerable interest among cognitive and social psychologists in the idea that people tend to hang on to their favored hypotheses with unwarranted tenacity and confidence. This tendency has been referred to as perseverance of beliefs, hypothesis preservation, and confirmation bias. Research in this area presents a rather heterogeneous collection of findings: a set of confirmation biases, rather than one unified confirmation bias. There are often substantial task-to-task differences in the observed phenomena, their consequences, and the underlying cognitive processes. There is no consensus about such basic questions as what is a favored hypothesis, against what norm is a belief unwarranted, and under what circumstances are people susceptible or not susceptible to a bias.
Article
Do people assimilate new information in an efficient and unbiased manner—that is, do they update prior beliefs in accordance with Bayes' rule? Or are they selective in the way that they gather and absorb new information? Although many classic studies in political science and psychology contend that people resist discordant information, more recent research has tended to call the selective perception hypothesis into question. We synthesize the literatures on biased assimilation and belief polarization using a formal model that encompasses both Bayesian and biased learning. The analysis reveals (a) the conditions under which these phenomena may be consistent with Bayesian learning, (b) the methodological inadequacy of certain research designs that fail to control for preferences or prior information, and (c) the limited support that exists for the more extreme variants of the selective perception hypothesis.
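One way to make the contrast between Bayesian and biased learning in this abstract concrete is a single distortion parameter applied to disconfirming evidence. The sketch below is our own illustration, not the formal model from the cited article; the parameter name and the numerical values are assumptions.

```python
# Illustrative sketch, not the cited authors' model: a single parameter 'bias'
# controls how strongly evidence against a favoured hypothesis is weighted.
# bias = 1.0 recovers standard Bayesian updating; bias < 1.0 discounts
# disconfirming evidence, yielding selective (biased) assimilation.

def update(prior, lik_h, lik_not_h, bias=1.0):
    """Posterior P(H | evidence), with optional discounting of disconfirmation."""
    if prior > 0.5 and lik_not_h > lik_h:        # evidence cuts against a favoured H
        lik_not_h = lik_h + bias * (lik_not_h - lik_h)
    return prior * lik_h / (prior * lik_h + (1.0 - prior) * lik_not_h)

print(update(0.7, 0.2, 0.8, bias=1.0))   # ~0.37: Bayesian, confidence falls sharply
print(update(0.7, 0.2, 0.8, bias=0.3))   # ~0.55: biased, the same evidence moves belief far less
```

Designs that do not control for priors or prior information cannot distinguish these two cases, which is the methodological point the abstract raises.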
Article
This paper reports the results of four experiments designed to test the methodological falsificationist's assumption that replication is sufficient to prevent the possibility of error from being used to immunize hypotheses against disconfirmation. The first three experiments compare the performance of subjects on tasks that simulate scientific reasoning under two conditions: (1) where there is a 0–20% possibility of error in experimental results, but no actual error; and (2) a control condition. All experiments used Wason's 2–4–6 task, in which subjects propose triples and are told whether each corresponds to a rule. In Experiment 1, subjects in the possible-error condition proposed significantly more triples than control subjects. Experiment 2 added colour and letter dimensions to the 2–4–6 task; possible-error subjects proposed significantly more triples and replicated the same triple more often than control subjects. Experiment 3 made replication more difficult by limiting the number of experiments subjects could perform and by altering the rule to make the results of the current trial dependent on previous ones. Control subjects solved this problem significantly more often than possible-error subjects. Experiment 4 was run in a manner very similar to Experiment 1, except that an actual 20% error condition was added. Subjects in this condition solved the rule significantly less often than subjects in other conditions, and also took more time and replicated more often. Implications of these results for the methodological falsificationist's position are discussed.
Article
Vitriolic debate surrounds John F. Kennedy's (JFK's) death more than 30 years after the assassination. Whereas some endorse the official government conclusion that Oswald acted alone, others allege that some form of a conspiracy is responsible for Kennedy's death. The central thesis of this article is that due to the processes of biased assimilation and attitude polarization, personal theories about the perpetrator(s) of the assassination are essentially immutable, and therefore that the debate surrounding JFK's assassination will continue endlessly. Due to the process of biased assimilation, proponents of both the Oswald and conspiracy theories perceive the same body of evidence as supportive of their position. Biased assimilation leads to attitude polarization rather than to a moderation or reversal of existing attitudes. The results of the present study strongly support this line of reasoning. The study also examined the formation of assassination attitudes among subjects with no initial opinion. The majority of these subjects embraced the conspiracy theory at the conclusion of the study. However, authoritarianism was indirectly associated with the development of an Oswald theory stance via an increased endorsement of evidence consistent with the Oswald theory.
Article
In a seminal book, Alvin I. Goldman outlines a theory for how to evaluate social practices with respect to their “veritistic value”, i.e., their tendency to promote the acquisition of true beliefs (and impede the acquisition of false beliefs) in society. In the same work, Goldman raises a number of serious worries for his account. Two of them concern the possibility of determining the veritistic value of a practice in a concrete case because (1) we often don't know what beliefs are actually true, and (2) even if we did, the task of determining the veritistic value would be computationally extremely difficult. Neither problem is specific to Goldman's theory and both can be expected to arise for just about any account of veritistic value. It is argued here that the first problem does not pose a serious threat to large classes of interesting practices. The bulk of the paper is devoted to the computational problem, which, it is submitted, can be addressed in promising terms by means of computer simulation. In an attempt to add vividness to this proposal, an up-and-running simulation environment (Laputa) is presented and put to some preliminary tests.
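Goldman's notion of veritistic value can be operationalised in a simulation as the average change in the credence agents assign to a true proposition before and after a social practice is applied. The fragment below is a minimal sketch of that bookkeeping under assumed names and numbers; it is not code from the Laputa environment.

```python
# Minimal sketch of the bookkeeping behind 'veritistic value': the mean change
# in agents' credence in a true proposition. Names and numbers are assumptions;
# this is not code from the Laputa simulation environment.

def veritistic_value(credences_before, credences_after):
    """Mean change in credence assigned to the true proposition across agents;
    positive values mean the practice promoted true belief on average."""
    gains = [after - before
             for before, after in zip(credences_before, credences_after)]
    return sum(gains) / len(gains)

before = [0.5, 0.4, 0.6, 0.5]            # assumed credences before communication
after  = [0.7, 0.3, 0.8, 0.9]            # assumed credences after the practice
print(veritistic_value(before, after))   # 0.175: the practice was truth-conducive
```

Running many simulated exchanges and averaging scores of this kind is how the abstract proposes to address the computational problem.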
Article
It is a basic and undeniable fact of social life that people form impressions of others whom they encounter in their day-to-day lives. As a direct result of generations of theory and research on impression formation and person perception, investigators have learned a great deal about the way individuals process information to form beliefs and impressions of other people. Accordingly, there exists considerable knowledge about the antecedents of social beliefs. The practical implications of these reality-constructing consequences of social beliefs are considerable, both at the level of individual lives and at the level of society. This chapter highlights that the processes of social thought are intimately woven into the fabric of social interaction and interpersonal relationships. The events of people's lives are very much a reflection of their beliefs about other people in their social worlds. Finally, it is in this sense that beliefs can and do create reality.
Article
Field surveys and anecdotal evidence suggest that supporters and opponents of a given technology tend to draw opposite conclusions from noncatastrophic breakdowns. Three studies confirmed this tendency by presenting supporters and opponents of a particular technology with identical descriptions of various technological breakdowns. As predicted, the results indicated that (a) supporters focused on the fact that the safeguards worked, while opponents focused on the fact that the breakdown occurred in the first place; and (b) after reading about the breakdown, supporters reported feeling that the chances of a catastrophic accident were less than previously assumed, whereas opponents reported feeling that the chances of an accident were greater than previously assumed. The recommendation by Lord, Lepper, and Preston (1984) for partisans to consider opposite outcomes, such as a serious failure in safeguards or the absence of major breakdowns, was discussed as a way of preventing biased assimilation and attitude polarization.