Article

The Effect of Prediction Error on Belief Update Across the Political Spectrum


Abstract

Making predictions is an adaptive feature of the cognitive system, as prediction errors are used to adjust the knowledge from which they stemmed. Here, we investigated the effect of prediction errors on belief update in an ideological context. In Study 1, 704 Cloud Research participants first evaluated a set of beliefs and then either made predictions about evidence associated with the beliefs and received feedback or were simply presented with the evidence. Finally, they reevaluated the initial beliefs. Study 2, which involved a U.S. Census–matched sample of 1,073 Cloud Research participants, was a replication of Study 1. We found that the size of prediction errors linearly predicts belief update and that making large errors leads to more belief update than not engaging in prediction at all. Importantly, the effects held for both Democrats and Republicans across all belief types (Democratic, Republican, neutral). We discuss these findings in the context of the misinformation epidemic.
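The headline result is a linear relation between prediction-error magnitude and belief update. A minimal sketch of that kind of analysis, with simulated data and hypothetical variable names (not the authors' code or materials):

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 704                                    # Study 1 sample size
pe = rng.uniform(0, 100, n)                # |prediction - evidence|, hypothetical scale
update = 0.3 * pe + rng.normal(0, 10, n)   # simulated post- minus pre-test belief change

X = sm.add_constant(pe)
print(sm.OLS(update, X).fit().summary())   # slope estimates the linear PE effect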

... A fundamental property of beliefs is that they are subject to change, given their dynamic nature (Bendixen, 2002). Indeed, prior work has identified several strategies that proved effective at changing beliefs, such as using fictional narratives (Wheeler, Green, & Brock, 1999), nudging accuracy goals (Pennycook et al., 2020), manipulating memory accessibility (Vlasceanu, Morais, Duker, et al., 2020), appending emotionally arousing images (Vlasceanu, Goebel, et al., 2020), triggering prediction errors (Vlasceanu, Morais, & Coman, 2021), or increasing the salience of social norms (Vlasceanu & Coman, 2020b). ...
... We used a set of 8 politically charged statements (Appendix 2), half accurate and half inaccurate, as determined by published scientific papers or other official sources. These statements had been pretested in prior work by Vlasceanu, Morais, and Coman (2021) to ensure that half of them were endorsed more by Democrats than by Republicans (e.g., "Millions of children in the US have witnessed a shooting in the past year") and vice versa (e.g., "Hundreds of thousands of abortions in the US are paid for with public funds each year"). ...
... When testing interactions with identity, we found that the predictive power of beliefs over behaviors holds across identity boundaries, consistent with prior work on belief change mechanisms (Vlasceanu, Morais, & Coman, 2021). However, belief change caused behavioral change only for Democratic participants on Democratic topics, and not for Democratic participants on Republican topics or for Republican participants on either topic. ...
Preprint
Full-text available
Beliefs have long been posited to be a predictor of behavior. However, empirical evidence of the relationship between beliefs and behaviors has been mostly correlational in nature and provided conflicting findings. Here, we investigated the causal impact of beliefs on behaviors across three experiments (N=659). Participants rated the accuracy of a set of health-related statements (belief pre-test) and chose corresponding campaigns to which they could donate funds in an incentivized choice task (behavior pre-test). They were then provided with relevant evidence in favor of the correct statements and against the incorrect statements. Finally, they rated the accuracy of the initial set of statements again (belief post-test) and were given a chance to change their donation choices (behavior post-test). We found that evidence changed beliefs and this, in turn, led to behavioral change. In two pre-registered follow-up experiments, we replicated these findings with politically charged topics, and found a partisan asymmetry in the effect of belief change on behavioral change in Democrats (but not in Republicans). We discuss the implications of this work for interventions aimed at promoting constructive behaviors such as recycling, donating, or employing preventative health measures.
... To minimize PE, people usually employ one of two methods. The first and more common application of PP principles involves updating the prior beliefs driving the prediction, thus improving the correspondence between future predictions and reality (e.g., Friston et al., 2009; Nassar et al., 2010; Sharot and Garrett, 2016; Vlasceanu et al., 2021; Elder et al., 2021). The second method involves changing the way people perceive reality ("active inference" in PP terms; Friston, 2010; Hohwy, 2020; Yon et al., 2021), for example by reinterpreting incoming inputs to better align with their predictions (e.g., motivated reasoning; Kunda, 1990; Epley and Gilovich, 2016). ...
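Under Gaussian assumptions, the first route (updating the prior) reduces to a standard precision-weighted update; this is a textbook form, not any one cited paper's specific model:

\mu_{\text{post}} = \mu_{\text{prior}} + \frac{\pi_e}{\pi_p + \pi_e}\,(x - \mu_{\text{prior}})

where \pi_p and \pi_e are the precisions (inverse variances) of the prior and the evidence, and x - \mu_{\text{prior}} is the prediction error: the more reliable the evidence relative to the prior, the larger the update.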
Article
Full-text available
The predictive processing framework posits that people continuously use predictive principles when interacting with, learning from, and interpreting their surroundings. Here, we suggest that the same framework may help explain how people process self-relevant knowledge and maintain a stable and positive self-concept. Specifically, we recast two prominent self-relevant motivations, self-verification and self-enhancement, in predictive processing (PP) terms. We suggest that these self-relevant motivations interact with the self-concept (i.e., priors) to create strong predictions. These predictions, in turn, influence how people interpret information about themselves. In particular, we argue that these strong self-relevant predictions dictate how prediction error, the deviation from the original prediction, is processed. In contrast to many implementations of the PP framework, we suggest that predictions and priors emanating from stable constructs (such as the self-concept) cultivate belief-maintaining, rather than belief-updating, dynamics. Based on recent findings, we also postulate that evidence supporting a predicted model of the self (or interpreted as such) triggers subjective reward responses, potentially reinforcing existing beliefs. Characterizing the role of rewards in self-belief maintenance and reframing self-relevant motivations and rewards in predictive processing terms offers novel insights into how the self is maintained in neurotypical adults, as well as in pathological populations, potentially pointing to therapeutic implications.
... We also expected ideological differences in knowledge integration from congruent versus incongruent sources based on prior work showing that conservatives are more resistant to change than liberals [34,35] and that Republicans are less concerned about COVID-19 than Democrats [41]. However, in the present work, we did not find ideological differences in knowledge integration, consistent with prior work in which Democrats and Republicans updated their beliefs similarly as a function of evidence [42]. Along the same lines, Pennycook and colleagues found that accurate beliefs about COVID-19 are associated with reasoning skills regardless of political ideology [9]. ...
Article
Full-text available
During a global health crisis, people are exposed to vast amounts of information from a variety of sources. Here, we assessed which information source could increase knowledge about COVID-19 (Study 1) and COVID-19 vaccines (Study 2). In Study 1, a US census matched sample of 1060 participants rated the accuracy of a set of statements and then were randomly assigned to one of 10 between-subjects conditions of varying sources providing belief-relevant information: a political leader (Trump/Biden), a health authority (Fauci/CDC), an anecdote (Democrat/Republican), a large group of prior participants (Democrats/Republicans/Generic), or no source (Control). Finally, they rated the accuracy of the initial set of statements again. Study 2 involved a replication with a sample of 1876 participants and focused on the COVID-19 vaccine. We found that knowledge increased most when the source of information was a generic group of people, irrespective of participants' political affiliation. We also found that while expert communications were most successful at increasing Democrats' vaccination intentions, no source was successful at increasing Republicans' vaccination intention. We discuss these findings in the context of the current misinformation epidemic. Supplementary information: The online version contains supplementary material available at 10.1007/s41060-021-00307-8.
... People function like intuitive scientists when they pursue accuracy goals (Boudry & Vlerick, 2014; De Cruz et al., 2011). In many circumstances (e.g., appraising danger, obtaining nourishment), correct beliefs promote fitness-enhancing decisions, and so people pursue good information and strive to hold accurate beliefs (Anglin, 2019; Tappin et al., 2020; Vlasceanu et al., 2021), especially when accuracy is obtainable and consequential for fitness. Indeed, the scientific enterprise is a testament to humans' commitment to pursue more accurate information. ...
Article
Full-text available
Behavioral scientists enjoy vast methodological freedom in how they operationalize theoretical constructs. This freedom may promote creativity in designing laboratory paradigms that shed light on real-world phenomena, but it also enables questionable research practices that undercut our collective credibility. Open Science norms impose some discipline but cannot constrain cherry-picking operational definitions that insulate preferred theories from rejection. All too often scholars conduct performative research to score points instead of engaging each other’s strongest arguments—a pattern that allows contradictory claims to fester unresolved for decades. Adversarial collaborations, which call on disputants to co-develop tests of competing hypotheses, are an efficient method of improving our science’s capacity for self-correction and of promoting intellectual competition that exposes false claims. Although individual researchers are often initially reluctant to participate, the research community would be better served by institutionalizing adversarial collaboration into its peer review process.
... Encouragingly, however, beliefs are subject to change, given their dynamic nature (Bendixen, 2002). Prior work has identified several strategies that proved effective at changing beliefs, such as using fictional narratives (Wheeler et al., 1999), nudging accuracy goals (Pennycook et al., 2020), manipulating memory accessibility (Vlasceanu, Morais, et al., 2020), appending emotionally arousing images (Vlasceanu, Goebel, & Coman, 2020), and triggering prediction errors (Vlasceanu et al., 2021). However, changing people's beliefs is not a trivial task. ...
Article
Full-text available
People are constantly bombarded with information they could use to adjust their beliefs. Here, we are interested in exploring the impact of social norms on health-related belief update. To investigate, we recruited a sample of 200 Princeton University students, who first rated the accuracy of a set of health statements (pre-test). They were then provided with relevant evidence either in favor or against the initial statements, and were asked to rate how convincing each piece of evidence was. The evidence was randomly assigned to appear as normative or non-normative, and anecdotal or scientific. Finally, participants rated the accuracy of the initial set of statements again (post-test). The results show that participants rationally updated their beliefs more when the evidence was scientific compared to when it was anecdotal. More importantly to our primary inquiry, the results show that participants changed their beliefs more in line with the evidence when the evidence was portrayed as normative compared to when the evidence was portrayed as non-normative, pointing to the impactful influence social norms have on health beliefs. Both effects were mediated by participants' subjective evaluation of the convincingness of the evidence, indicating the mechanism by which evidence is selectively incorporated into belief systems.
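The mediation claim (the evidence effect on belief change running through perceived convincingness) is typically tested as a product of coefficients; a toy sketch with simulated data and hypothetical variable names, not the study's analysis code:

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 200
x = rng.integers(0, 2, n).astype(float)      # 0 = anecdotal, 1 = scientific (simulated)
m = 0.8 * x + rng.normal(0, 1, n)            # convincingness rating (simulated)
y = 0.5 * m + 0.1 * x + rng.normal(0, 1, n)  # belief change (simulated)

a = sm.OLS(m, sm.add_constant(x)).fit().params[1]                        # X -> M
b = sm.OLS(y, sm.add_constant(np.column_stack([x, m]))).fit().params[2]  # M -> Y controlling X
print("indirect effect a*b =", round(a * b, 3))  # bootstrap a*b for a proper CI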
Article
Full-text available
Across two studies with more than 1,700 U.S. adults recruited online, we present evidence that people share false claims about COVID-19 partly because they simply fail to think sufficiently about whether or not the content is accurate when deciding what to share. In Study 1, participants were far worse at discerning between true and false content when deciding what they would share on social media relative to when they were asked directly about accuracy. Furthermore, greater cognitive reflection and science knowledge were associated with stronger discernment. In Study 2, we found that a simple accuracy reminder at the beginning of the study (i.e., judging the accuracy of a non-COVID-19-related headline) nearly tripled the level of truth discernment in participants’ subsequent sharing intentions. Our results, which mirror those found previously for political fake news, suggest that nudging people to think about accuracy is a simple way to improve choices about what to share on social media.
Article
Full-text available
The paper introduces, compares and contrasts formal models of source reliability proposed in the epistemology literature, in particular the prominent models of Bovens and Hartmann (2003) and Olsson (2011). All are Bayesian models seeking to provide normative guidance, yet they differ subtly in assumptions and resulting behavior. Models are evaluated both on conceptual grounds and through simulations, and the relationship between models is clarified. The simulations both show surprising similarities and highlight relevant differences between these models. Most importantly, however, our evaluations reveal that important normative concerns arguably remain unresolved. The philosophical implications of this for testimony are discussed.
Article
Full-text available
The rise of partisan animosity, ideological polarization, and political dogmatism has reignited important questions about the relationship between psychological rigidity and political partisanship. Two competing hypotheses have been proposed: one argues that mental rigidity is related to a conservative political orientation, and the other suggests that it reflects partisan extremity across the political spectrum. In a sample of over 700 U.S. citizens, partisan extremity was related to lower levels of cognitive flexibility, regardless of political orientation, across three independent assessments of cognitive flexibility. This was evident across multiple statistical analyses, including quadratic regressions, Bayes factor analysis, and interrupted regressions. These findings suggest that the rigidity with which individuals process and respond to nonpolitical information may be related to the extremity of their partisan identities. (PsycINFO Database Record (c) 2019 APA, all rights reserved).
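The quadratic-regression logic behind the extremity finding can be sketched as regressing flexibility on ideology and its square; the data and variable names below are illustrative, not the study's:

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
ideology = rng.uniform(-3, 3, 700)           # negative = liberal, positive = conservative
flexibility = -0.4 * ideology**2 + rng.normal(0, 1, 700)  # simulated extremity effect

X = sm.add_constant(np.column_stack([ideology, ideology**2]))
fit = sm.OLS(flexibility, X).fit()
print(fit.params)  # a negative quadratic term = lower flexibility at both extremes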
Article
Full-text available
Widening polarization about political, religious, and scientific issues threatens open societies, leading to entrenchment of beliefs, reduced mutual understanding, and a pervasive negativity surrounding the very idea of consensus [1, 2]. Such radicalization has been linked to systematic differences in the certainty with which people adhere to particular beliefs [3, 4, 5, 6]. However, the drivers of unjustified certainty in radicals are rarely considered from the perspective of models of metacognition, and it remains unknown whether radicals show alterations in confidence bias (a tendency to publicly espouse higher confidence), metacognitive sensitivity (insight into the correctness of one’s beliefs), or both [7]. Within two independent general population samples (n = 381 and n = 417), here we show that individuals holding radical beliefs (as measured by questionnaires about political attitudes) display a specific impairment in metacognitive sensitivity about low-level perceptual discrimination judgments. Specifically, more radical participants displayed less insight into the correctness of their choices and reduced updating of their confidence when presented with post-decision evidence. Our use of a simple perceptual decision task enables us to rule out effects of previous knowledge, task performance, and motivational factors underpinning differences in metacognition. Instead, our findings highlight a generic resistance to recognizing and revising incorrect beliefs as a potential driver of radicalization.
Article
Full-text available
The formation of collective memories, emotions, and beliefs is a fundamental characteristic of human communities. These emergent outcomes are thought to be the result of a dynamical system of communicative interactions among individuals. But despite recent psychological research on collective phenomena, no programmatic framework to explore the processes involved in their formation exists. Here, we propose a social-interactionist approach that bridges cognitive and social psychology to illuminate how microlevel cognitive phenomena give rise to large-scale social outcomes. It involves first establishing the boundary conditions of cognitive phenomena, then investigating how cognition is influenced by the social context in which it is manifested, and finally studying how dyadic-level influences propagate in social networks. This approach has the potential to (a) illuminate the large-scale consequences of well-established cognitive phenomena, (b) lead to interdisciplinary dialogues between psychology and the other social sciences, and (c) be more relevant for public policy than existing approaches.
Article
Full-text available
Both liberals and conservatives accuse their political opponents of partisan bias, but is there empirical evidence that one side of the political aisle is indeed more biased than the other? To address this question, we meta-analyzed the results of 51 experimental studies, involving over 18,000 participants, that examined one form of partisan bias—the tendency to evaluate otherwise identical information more favorably when it supports one’s political beliefs or allegiances than when it challenges those beliefs or allegiances. Two hypotheses based on previous literature were tested: an asymmetry hypothesis (predicting greater partisan bias in conservatives than in liberals) and a symmetry hypothesis (predicting equal levels of partisan bias in liberals and conservatives). Mean overall partisan bias was robust (r = .245), and there was strong support for the symmetry hypothesis: Liberals (r = .235) and conservatives (r = .255) showed no difference in mean levels of bias across studies. Moderator analyses reveal this pattern to be consistent across a number of different methodological variations and political topics. Implications of the current findings for the ongoing ideological symmetry debate and the role of partisan bias in scientific discourse and political conflict are discussed.
Article
Full-text available
Social media for news consumption is a double-edged sword. On the one hand, its low cost, easy access, and rapid dissemination of information lead people to seek out and consume news from social media. On the other hand, it enables the wide spread of "fake news", i.e., low quality news with intentionally false information. The extensive spread of fake news has the potential for extremely negative impacts on individuals and society. Therefore, fake news detection on social media has recently become an emerging research that is attracting tremendous attention. Fake news detection on social media presents unique characteristics and challenges that make existing detection algorithms from traditional news media ineffective or not applicable. First, fake news is intentionally written to mislead readers to believe false information, which makes it difficult and nontrivial to detect based on news content; therefore, we need to include auxiliary information, such as user social engagements on social media, to help make a determination. Second, exploiting this auxiliary information is challenging in and of itself as users' social engagements with fake news produce data that is big, incomplete, unstructured, and noisy. Because the issue of fake news detection on social media is both challenging and relevant, we conducted this survey to further facilitate research on the problem. In this survey, we present a comprehensive review of detecting fake news on social media, including fake news characterizations on psychology and social theories, existing algorithms from a data mining perspective, evaluation metrics and representative datasets. We also discuss related research areas, open problems, and future research directions for fake news detection on social media.
Article
Full-text available
The role of prediction error (PE) in driving learning is well-established in fields such as classical and instrumental conditioning, reward learning and procedural memory; however, its role in human one-shot declarative encoding is less clear. According to one recent hypothesis, PE reflects the divergence between two probability distributions: one reflecting the prior probability (from previous experiences) and the other reflecting the sensory evidence (from the current experience). Assuming unimodal probability distributions, PE can be manipulated in three ways: (1) the distance between the mode of the prior and evidence, (2) the precision of the prior, and (3) the precision of the evidence. We tested these three manipulations across five experiments, in terms of peoples' ability to encode a single presentation of a scene-item pairing as a function of previous exposures to that scene and/or item. Memory was probed by presenting the scene together with three choices for the previously paired item, in which the two foil items were from other pairings within the same condition as the target item. In Experiment 1, we manipulated the evidence to be either consistent or inconsistent with prior expectations, predicting PE to be larger, and hence memory better, when the new pairing was inconsistent. In Experiments 2a-c, we manipulated the precision of the priors, predicting better memory for a new pairing when the (inconsistent) priors were more precise. In Experiment 3, we manipulated both visual noise and prior exposure for unfamiliar faces, before pairing them with scenes, predicting better memory when the sensory evidence was more precise. In all experiments, the PE hypotheses were supported. We discuss alternative explanations of individual experiments, and conclude the Predictive Interactive Multiple Memory Signals (PIMMS) framework provides the most parsimonious account of the full pattern of results.
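One standard way to formalize the divergence this hypothesis describes, assuming a Gaussian prior N(\mu_p, \sigma_p^2) and Gaussian evidence N(\mu_e, \sigma_e^2) (the paper's exact formalization may differ), is the Kullback-Leibler divergence:

D_{KL}\big(N(\mu_e,\sigma_e^2)\,\|\,N(\mu_p,\sigma_p^2)\big) = \log\frac{\sigma_p}{\sigma_e} + \frac{\sigma_e^2 + (\mu_e - \mu_p)^2}{2\sigma_p^2} - \frac{1}{2}

This grows with the prior-evidence distance |\mu_e - \mu_p|, with the precision of an inconsistent prior (smaller \sigma_p when the modes are far apart), and, when the evidence is already more precise than the prior, with the precision of the evidence (smaller \sigma_e), matching the three manipulations.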
Article
Full-text available
In recent years, Mechanical Turk (MTurk) has revolutionized social science by providing a way to collect behavioral data with unprecedented speed and efficiency. However, MTurk was not intended to be a research tool, and many common research tasks are difficult and time-consuming to implement as a result. TurkPrime was designed as a research platform that integrates with MTurk and supports tasks that are common to the social and behavioral sciences. Like MTurk, TurkPrime is an Internet-based platform that runs on any browser and does not require any downloads or installation. Tasks that can be implemented with TurkPrime include: excluding participants on the basis of previous participation, longitudinal studies, making changes to a study while it is running, automating the approval process, increasing the speed of data collection, sending bulk e-mails and bonuses, enhancing communication with participants, monitoring dropout and engagement rates, providing enhanced sampling options, and many others. This article describes how TurkPrime saves time and resources, improves data quality, and allows researchers to design and implement studies that were previously very difficult or impossible to carry out on MTurk. TurkPrime is designed as a research tool whose aim is to improve the quality of the crowdsourcing data collection process. Various features have been and continue to be implemented on the basis of feedback from the research community. TurkPrime is a free research platform.
Article
Full-text available
Over the past decades, delusions have become the subject of growing and productive research spanning clinical and cognitive neurosciences. Despite this, the nature of belief, which underpins the construct of delusions, has received little formal investigation. No account of delusions, however, would be complete without a cognitive-level analysis of belief per se. One reason for this neglect is the assumption that, unlike more established and accessible modular psychological processes (e.g., vision, audition, face-recognition, language-processing, and motor-control systems), beliefs comprise more distributed and therefore less accessible central cognitive processes. In this paper, we suggest some defining characteristics and functions of beliefs. Working back from cognitive accounts of delusions, we consider potential candidate cognitive processes that may be involved in normal belief formation. Finally, we advance a multistage account of the belief process that could provide the basis for a more comprehensive model of belief.
Article
Full-text available
The Bayesian Information Criterion (BIC) is widely used for variable selection in mixed effects models. However, its expression is unclear in typical situations of mixed effects models, where simple definition of the sample size is not meaningful. We derive an appropriate BIC expression that is consistent with the random effect structure of the mixed effects model. We illustrate the behavior of the proposed criterion through a simulation experiment and a case study and we recommend its use as an alternative to various existing BIC versions that are implemented in available software.
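The ambiguity the authors address is which sample size enters the standard criterion; as a gloss on the setup (not their derivation):

\mathrm{BIC} = -2\log\hat{L} + d\log n

In a mixed model with m clusters and N total observations, the effective n lies somewhere between m (relevant for parameters of the random-effects structure) and N (relevant for fixed effects estimated from all data points), so a single \log n penalty is not obviously appropriate.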
Article
Full-text available
Maximum likelihood or restricted maximum likelihood (REML) estimates of the parameters in linear mixed-effects models can be determined using the lmer function in the lme4 package for R. As for most model-fitting functions in R, the model is described in an lmer call by a formula, in this case including both fixed- and random-effects terms. The formula and data together determine a numerical representation of the model from which the profiled deviance or the profiled REML criterion can be evaluated as a function of some of the model parameters. The appropriate criterion is optimized, using one of the constrained optimization functions in R, to provide the parameter estimates. We describe the structure of the model, the steps in evaluating the profiled deviance or REML criterion, and the structure of classes or types that represents such a model. Sufficient detail is included to allow specialization of these structures by users who wish to write functions to fit specialized linear mixed models, such as models incorporating pedigrees or smoothing splines, that are not easily expressible in the formula language used by lmer.
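For readers working outside R, a rough Python analog of lmer(y ~ x + (1 | subject), data) using statsmodels follows; this is not the lme4 implementation the paper describes, and the toy data are invented:

import pandas as pd
import statsmodels.formula.api as smf

data = pd.DataFrame({
    "y":       [2.1, 2.9, 3.2, 4.0, 1.8, 2.5, 3.6, 4.2],
    "x":       [1, 2, 3, 4, 1, 2, 3, 4],
    "subject": ["a", "a", "a", "a", "b", "b", "b", "b"],
})
# Random intercept per subject, fixed slope for x, fit by REML.
model = smf.mixedlm("y ~ x", data, groups=data["subject"])
print(model.fit(reml=True).summary())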
Article
Full-text available
People respond to dissimilar political beliefs in a variety of ways, ranging from openness and acceptance to closed-mindedness and intolerance. While there is reason to believe that uncertainty may influence political tolerance, the direction of this influence remains unclear. We propose that threat moderates the effect of uncertainty on tolerance; when safe, uncertainty leads to greater tolerance, yet when threatened, uncertainty leads to reduced tolerance. Using independent manipulations of threat and uncertainty, we provide support for this hypothesis. This research demonstrates that, although feelings of threat and uncertainty can be independent, it is also important to understand their interaction.
Article
Full-text available
The widespread prevalence and persistence of misinformation in contemporary societies, such as the false belief that there is a link between childhood vaccinations and autism, is a matter of public concern. For example, the myths surrounding vaccinations, which prompted some parents to withhold immunization from their children, have led to a marked increase in vaccine-preventable disease, as well as unnecessary public expenditure on research and public-information campaigns aimed at rectifying the situation. We first examine the mechanisms by which such misinformation is disseminated in society, both inadvertently and purposely. Misinformation can originate from rumors but also from works of fiction, governments and politicians, and vested interests. Moreover, changes in the media landscape, including the arrival of the Internet, have fundamentally influenced the ways in which information is communicated and misinformation is spread. We next move to misinformation at the level of the individual, and review the cognitive factors that often render misinformation resistant to correction. We consider how people assess the truth of statements and what makes people believe certain things but not others. We look at people’s memory for misinformation and answer the questions of why retractions of misinformation are so ineffective in memory updating and why efforts to retract misinformation can even backfire and, ironically, increase misbelief. Though ideology and personal worldviews can be major obstacles for debiasing, there nonetheless are a number of effective techniques for reducing the impact of misinformation, and we pay special attention to these factors that aid in debiasing. We conclude by providing specific recommendations for the debunking of misinformation. These recommendations pertain to the ways in which corrections should be designed, structured, and applied in order to maximize their impact. Grounded in cognitive psychological theory, these recommendations may help practitioners—including journalists, health professionals, educators, and science communicators—design effective misinformation retractions, educational tools, and public-information campaigns.
Article
Full-text available
This study introduced an individual difference construct of willingness to compromise and examined its implications for understanding and predicting career-related decisions in work settings. In Study 1 (N = 53), critical incidents of career decisions were analyzed to identify commonalities across different types of career-related compromises. In Study 2 (N = 171), an initial 17-item scale was developed and revised. In Study 3 (N = 201), the convergent and criterion-related validity of the scale was examined in relation to specific personality traits, regret, dealing with uncertainty, career adaptability, and a situational dilemma task. Willingness to compromise was negatively related to neuroticism, and positively related to dealing with uncertainty, openness to experience, and career adaptability; it also predicted responses to the situational dilemma task. Results provided support for the reliability and validity of the scale.
Article
Full-text available
Prediction errors (PE) are a central notion in theoretical models of reinforcement learning, perceptual inference, decision-making and cognition, and prediction error signals have been reported across a wide range of brain regions and experimental paradigms. Here, we will make an attempt to see the forest for the trees and consider the commonalities and differences of reported PE signals in light of recent suggestions that the computation of PE forms a fundamental mode of brain function. We discuss where different types of PE are encoded, how they are generated, and the different functional roles they fulfill. We suggest that while encoding of PE is a common computation across brain regions, the content and function of these error signals can be very different and are determined by the afferent and efferent connections within the neural circuitry in which they arise.
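The best-known instance, the reward prediction error of reinforcement learning, illustrates the shared computation the review compares across regions (standard textbook notation, not the review's):

\delta_t = r_t + \gamma V(s_{t+1}) - V(s_t)

That is, received reward plus discounted expected future reward, minus the current prediction; the review's argument is that this same error-of-prediction computation recurs with different content depending on each circuit's afferent and efferent connections.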
Article
Full-text available
Two models of belief change, Laroche's (1977) comparative statics model and the single-push with friction dynamic model (Kaplowitz, Fink, & Bauer, 1983), were combined and tested. Beliefs about two issues (criminal sentencing and tuition increase) were measured every 77 ms (N = 95). Eleven time points from each participant's belief trajectory were analyzed. Message discrepancy and source credibility were manipulated. As predicted, belief change monotonically increased over time and the rate of belief change decreased for both issues. For the criminal-sentencing issue, the relationship between message discrepancy and belief change was found to be positive and monotonic for messages from a high-credibility source but nonmonotonic for messages from a low-credibility source. For the criminal-sentencing issue, the predicted over-time increase of the effect of message discrepancy on belief change for a high-credibility source and an over-time increase of the effect of source credibility on belief change were found.
Article
Full-text available
People who hold strong opinions on complex social issues are likely to examine relevant empirical evidence in a biased manner. They are apt to accept "confirming" evidence at face value while subjecting "disconfirming" evidence to critical evaluation, and, as a result, draw undue support for their initial positions from mixed or random empirical findings. Thus, the result of exposing contending factions in a social dispute to an identical body of relevant empirical evidence may be not a narrowing of disagreement but rather an increase in polarization. To test these assumptions, 48 undergraduates supporting and opposing capital punishment were exposed to 2 purported studies, one seemingly confirming and one seemingly disconfirming their existing beliefs about the deterrent efficacy of the death penalty. As predicted, both proponents and opponents of capital punishment rated those results and procedures that confirmed their own beliefs to be the more convincing and probative ones, and they reported corresponding shifts in their beliefs as the various results and procedures were presented. The net effect of such evaluations and opinion shifts was the postulated increase in attitude polarization. (28 ref) (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Article
Full-text available
An extensive literature addresses citizen ignorance, but very little research focuses on misperceptions. Can these false or unsubstantiated beliefs about politics be corrected? Previous studies have not tested the efficacy of corrections in a realistic format. We conducted four experiments in which subjects read mock news articles that included either a misleading claim from a politician, or a misleading claim and a correction. Results indicate that corrections frequently fail to reduce misperceptions among the targeted ideological group. We also document several instances of a "backfire effect" in which corrections actually increase misperceptions among the group in question. Keywords: Misperceptions, Misinformation, Ignorance, Knowledge, Correction, Backfire
Article
Full-text available
The medial prefrontal cortex (mPFC) and especially anterior cingulate cortex is central to higher cognitive function and many clinical disorders, yet its basic function remains in dispute. Various competing theories of mPFC have treated effects of errors, conflict, error likelihood, volatility and reward, using findings from neuroimaging and neurophysiology in humans and monkeys. No single theory has been able to reconcile and account for the variety of findings. Here we show that a simple model based on standard learning rules can simulate and unify an unprecedented range of known effects in mPFC. The model reinterprets many known effects and suggests a new view of mPFC, as a region concerned with learning and predicting the likely outcomes of actions, whether good or bad. Cognitive control at the neural level is then seen as a result of evaluating the probable and actual outcomes of one's actions.
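A minimal delta-rule sketch in the spirit of the "standard learning rules" the model builds on; this is illustrative only, as the paper's actual model of mPFC is more elaborate:

import numpy as np

rng = np.random.default_rng(3)
v = 0.0          # predicted outcome value for an action
alpha = 0.1      # learning rate
for trial in range(200):
    outcome = rng.binomial(1, 0.7)   # action pays off 70% of the time
    pe = outcome - v                 # prediction error
    v += alpha * pe                  # prediction tracks outcome likelihood
print(round(v, 2))                   # converges near 0.7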
Article
Full-text available
Recent lesion studies have implicated the perirhinal cortex in learning that two objects are associated, i.e., visual association learning. In this experiment we tested whether neuronal responses to associated stimuli in perirhinal cortex are altered over the course of learning. Neurons were recorded from monkeys during performance of a visual discrimination task in which a predictor stimulus was followed, after a delay, by a GO or NO-GO choice stimulus. Association learning had two major influences on neuronal responses. First, responses to frequently paired predictor-choice stimuli were more similar to one another than was the case with infrequently paired stimuli. Second, the magnitude of activity during the delay was correlated with the magnitude of responses to both the predictor and choice stimuli. Both of these learning effects were found only for stimulus pairs that had been associated on at least two days of training. Early in training, the delay activity was correlated only with the response to the predictor stimuli. Thus, with long-term training, perirhinal neurons tend to link the representations of temporally associated stimuli.
Article
Full-text available
We report three exact replications of experiments aimed at illuminating how fictional narratives influence beliefs (Prentice, Gerrig, & Bailis, 1997). Students read fictional stories that contained weak, unsupported assertions and which took place either at their home school or at an away school. Prentice et al. found that students were influenced to accept the assertions, even those blatantly false, but that this effect on beliefs was limited to the away-school setting. We questioned the limiting of the narrative effect to remote settings. Our studies consistently reproduced the first finding, heightened acceptance of statements occurring in the conversations of narrative protagonists, but we failed to reproduce the moderating effect of school location. In an attempt to understand these discrepancies, we measured likely moderating factors such as readers' need for cognition and their extent of scrutiny of the narratives.
Article
Full-text available
Actions are guided by prior sensory information [1-10], which is inherently uncertain. However, how the motor system is sculpted by trial-by-trial content of current sensory information remains largely unexplored. Previous work suggests that conditional probabilities, learned under a particular context, can be used preemptively to influence the output of the motor system [11-14]. To test this we used transcranial magnetic stimulation (TMS) to read out corticospinal excitability (CSE) during preparation for action in an instructed delay task [15, 16]. We systematically varied the uncertainty about an impending action by changing the validity of the instructive visual cue. We used two information-theoretic quantities to predict changes in CSE, prior to action, on a trial-by-trial basis: entropy (average uncertainty) and surprise (the stimulus-bound information conveyed by a visual cue) [17-19]. Our data show that during preparation for action, human CSE varies according to the entropy and surprise conveyed by visual events guiding action. CSE increases on trials with low entropy about the impending action and low surprise conveyed by an event. Commensurate effects were observed in reaction times. We suggest that motor output is biased according to contextual probabilities that are represented dynamically in the brain.
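The two quantities carry their standard information-theoretic definitions (notation ours): over possible actions with cue-conditional probabilities p_i, entropy is

H = -\sum_i p_i \log_2 p_i

and the surprise conveyed by an observed event e is I(e) = -\log_2 p(e). The finding is that corticospinal excitability was highest on trials where both quantities were low.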
Preprint
People are constantly bombarded with information they could use to adjust their beliefs. Here, we are interested in exploring the impact of social norms on belief update. To investigate, we recruited a sample of 200 Princeton University students, who first rated the accuracy of a set of statements (pre-test). They were then provided with relevant evidence either in favor or against the initial statements, and they were asked to rate how convincing each piece of evidence was. The evidence was randomly assigned to appear as normative or non-normative, and also randomly assigned to appear as anecdotal or scientific. Finally, participants rated the accuracy of the initial set of statements again (post-test). The results show that participants changed their beliefs more in line with the evidence, when the evidence was scientific compared to when it was anecdotal. More importantly to our primary inquiry, the results show that participants changed their beliefs more in line with the evidence when the evidence was portrayed as normative compared to when the evidence was portrayed as non-normative, pointing to the impactful influence social norms have on beliefs. Both effects were mediated by participants’ subjective evaluation of the convincingness of the evidence, indicating the mechanism by which evidence is selectively incorporated into belief systems.
Preprint
People’s beliefs are influenced by interactions within their communities. The propagation of this influence through conversational social networks should impact the degree to which community members synchronize their beliefs. To investigate, we recruited a sample of 140 participants and constructed fourteen 10-member communities. Participants first rated the accuracy of a set of statements (pre-test) and were then provided with relevant evidence about them. Then, participants discussed the statements in a series of conversational interactions, following pre-determined network structures (clustered/non-clustered). Finally, they rated the accuracy of the statements again (post- test). The results show that belief synchronization, measuring the increase in belief similarity among individuals within a community from pre-test to post-test, is influenced by the community’s conversational network structure. This synchronization is circumscribed by a degree of separation effect and is equivalent in the clustered and non- clustered networks. We also find that conversational content predicts belief change from pre-test to post-test.
Article
Although models of political ideology traditionally focus on the motivations that separate conservatives and liberals, a growing body of research is directly exploring the cognitive factors that vary due to political ideology. Consistent with this emerging literature, the present research proposes that conservatives and liberals excel at tasks of distinct working memory processes (i.e., inhibition and updating, respectively). Consistent with this hypothesis, three studies demonstrate that conservatives are more likely to succeed at response inhibition and liberals are more likely to succeed at response updating. Moreover, this effect is rooted in different levels of cognitive flexibility and independent of respondents’ demographics, intelligence, religiosity, and motivation. Collectively, these findings offer an important perspective on the cognitive factors that delineate conservatism and liberalism, the role of cognitive flexibility in specific working memory processes, and the impact of political ideology on a multitude of behaviors linked to inhibition and updating (e.g., creativity, problem-solving, self-control).
Article
Systems of beliefs organized around religion, politics, and health constitute the building blocks of human communities. One central feature of these collectively held beliefs is their dynamic nature. Here, we study the dynamics of belief endorsement in lab-created 12-member networks using a 2-phase communication model. Individuals first evaluate the believability of a set of beliefs, after which, in Phase 1, some networks listen to a public speaker mentioning a subset of the previously evaluated beliefs while other networks complete a distracter task. In Phase 2, all participants engage in conversations within their network to discuss the initially evaluated beliefs. Believability is then measured both post conversation and after one week. We find that the public speaker impacts the community's beliefs by altering their mnemonic accessibility. This influence is long-lasting and amplified by subsequent conversations, resulting in community-wide belief synchronization. These findings point to optimal sociocognitive strategies for combating misinformation in social networks. (PsycINFO Database Record (c) 2020 APA, all rights reserved).
Article
The motivated social cognition (MSC) model of conservative ideology posits there are two core facets of conservative political ideology—endorsement of hierarchies and resistance to change. The present research tested the validity and reliability of a scale developed to measure resistance to change. Five studies support the validity, reliability, and factor structure of the Resistance to Change-Beliefs (RC-B) scale. Scores on the RC-B scale correlated with social and cognitive motivations as well as self-identified conservatism. RC-B also predicted more conservative stances on political issues and factor analyses supported the predicted internal structure of the RC-B scale. This provides the field with a validated instrument that avoids problems inherent in previous measures, can be used to test predictions from the MSC model, and has potential applications beyond political psychology.
Article
Perception and perceptual decision-making are strongly facilitated by prior knowledge about the probabilistic structure of the world. While the computational benefits of using prior expectation in perception are clear, there are myriad ways in which this computation can be realized. We review here recent advances in our understanding of the neural sources and targets of expectations in perception. Furthermore, we discuss Bayesian theories of perception that prescribe how an agent should integrate prior knowledge and sensory information, and investigate how current and future empirical data can inform and constrain computational frameworks that implement such probabilistic integration in perception.
Article
Of this article's seven experiments, the first five demonstrate that virtually no Americans know the basic global warming mechanism. Fortunately, Experiments 2-5 found that 2-45 min of physical-chemical climate instruction durably increased such understandings. This mechanistic learning, or merely receiving seven highly germane statistical facts (Experiment 6), also increased climate-change acceptance across the liberal-conservative spectrum. However, Experiment 7's misleading statistics decreased such acceptance (and, dramatically, knowledge confidence). These readily available attitudinal and conceptual changes through scientific information disconfirm what we term "stasis theory," which some researchers and many laypeople varyingly maintain. Stasis theory subsumes the claim that informing people (particularly Americans) about climate science may be largely futile or even counterproductive, a view that appears historically naïve, suffers from range restrictions (e.g., near-zero mechanistic knowledge), and/or misinterprets some polarization and (noncausal) correlational data. Our studies evidenced no polarizations. Finally, we introduce HowGlobalWarmingWorks.org, a website designed to directly enhance public "climate-change cognition."
Article
Accusations of entrenched political partisanship have been launched against both conservatives and liberals. But is feeling superior about one's beliefs a partisan issue? Two competing hypotheses exist: the rigidity-of-the-right hypothesis (i.e., conservatives are dogmatic) and the ideological-extremism hypothesis (i.e., extreme views on both sides predict dogmatism). We measured 527 Americans' attitudes about nine contentious political issues, the degree to which they thought their beliefs were superior to other people's, and their level of dogmatism. Dogmatism was higher for people endorsing conservative views than for people endorsing liberal views, which replicates the rigidity-of-the-right hypothesis. However, curvilinear effects of ideological attitude on belief superiority (i.e., belief that one's position is more correct than another's) supported the ideological-extremism hypothesis. Furthermore, responses reflecting the greatest belief superiority were obtained on conservative attitudes for three issues and liberal attitudes for another three issues. These findings capture nuances in the relationship between political beliefs and attitude entrenchment that have not been revealed previously.
Article
Theory: Recent scholarship has emphasized the potential importance of cues, information shortcuts, and statistical aggregation processes in allowing relatively uninformed citizens to act, individually or collectively, as if they were fully informed. Hypotheses: Uninformed voters successfully use cues and information shortcuts to behave as if they were fully informed. Failing that, individual deviations from fully informed voting cancel out in a mass electorate, producing the same aggregate election outcome as if voters were fully informed. Methods: Hypothetical "fully informed" vote choices are imputed to individual voters using the observed relationship between political information and vote choices for voters with similar social and demographic characteristics, estimated by probit analysis of data from National Election Study surveys conducted after the six most recent United States presidential elections. Results: Both hypotheses are clearly disconfirmed. At the individual level, the average deviation of actual vote probabilities from hypothetical "fully informed" vote probabilities was about ten percentage points. In the electorate as a whole, these deviations were significantly diluted by aggregation, but by no means eliminated: incumbent presidents did almost five percentage points better, and Democratic candidates did almost two percentage points better, than they would have if voters had in fact been "fully informed."
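The imputation step can be sketched as fitting a probit of vote choice on information and demographics, then predicting each voter's choice with information set to its maximum; the data, coefficients, and variables below are invented for illustration, not the study's:

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 1000
info = rng.uniform(0, 1, n)                  # political information score
age = rng.normal(45, 15, n)                  # stand-in demographic
p_vote = 1 / (1 + np.exp(-(1.5 * info - 0.02 * (age - 45))))
vote = rng.binomial(1, p_vote)

X = sm.add_constant(np.column_stack([info, age]))
fit = sm.Probit(vote, X).fit(disp=0)

X_full = X.copy()
X_full[:, 1] = 1.0                           # set information to its maximum
gap = fit.predict(X_full) - fit.predict(X)   # "fully informed" minus actual
print(np.abs(gap).mean())                    # cf. the ~10-point average deviation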
Article
This chapter describes a process model of epistemic belief change and its implications for epistemological development. Epistemic beliefs are considered to be an individual's beliefs about the nature of truth and knowledge. The process model of epistemic belief change offered in this chapter contributes to the understanding of epistemological development in several important ways. The crux of the current model is a detailed view into experiences associated with epistemic doubt. Using descriptions of this experience from individual participants, a four-component model of epistemic belief change is offered to represent the experience as a whole. The current model is a significant first step in exploring epistemic doubt and its contribution to epistemic change. The role of epistemic doubt as a mechanism of change is confirmed in the model. The affective side of epistemological development is also illuminated. One of the more unique and compelling findings of this study is the window that was provided into the tumultuous experience of doubting one's epistemic beliefs. An additional contribution of this study was a description of the various strategies employed by individuals to resolve their epistemic doubt successfully. (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Article
Opportunities for communicating psychological findings beyond the discipline are limited and often under-rewarded. In this article, we discuss reasons why psychological research often fails to be communicated beyond the discipline, and we provide suggestions for what needs to be changed in order to bridge this gap. Specifically, we identify barriers to communicating beyond the discipline, and we note that more effectively and broadly disseminating knowledge requires a different style than conveying information within the profession. We further illustrate how psychology offers unique perspectives and information that are of considerable value to lay audiences and policy makers. We conclude by articulating the potential benefits for society and psychology of efforts and venues whose explicit intention is to understand social problems and inform policy through the psychological study of social issues.
Article
People often sincerely assert or judge one thing (for example, that all the races are intellectually equal) while at the same time being disposed to act in a way evidently quite contrary to the espoused attitude (for example, in a way that seems to suggest an implicit assumption of the intellectual superiority of their own race). Such cases should be regarded as ‘in-between’ cases of believing, in which it's neither quite right to ascribe the belief in question nor quite right to say that the person lacks the belief.
Article
Scholars have documented the deficiencies in political knowledge among American citizens. Another problem, misinformation, has received less attention. People are misinformed when they confidently hold wrong beliefs. We present evidence of misinformation about welfare and show that this misinformation acts as an obstacle to educating the public with correct facts. Moreover, widespread misinformation can lead to collective preferences that are far different from those that would exist if people were correctly informed. The misinformation phenomenon has implications for two currently influential scholarly literatures: the study of political heuristics and the study of elite persuasion and issue framing.
Article
The problem of selecting one of a number of models of different dimensions is treated by finding its Bayes solution, and evaluating the leading terms of its asymptotic expansion. These terms are a valid large-sample criterion beyond the Bayesian context, since they do not depend on the a priori distribution.
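The leading terms referred to yield the familiar criterion: for a model M with d free parameters fit to n observations, the log marginal likelihood expands as

\log p(y \mid M) \approx \log \hat{L} - \frac{d}{2}\log n

so selecting the model with the largest approximate posterior probability amounts to minimizing \mathrm{BIC} = -2\log\hat{L} + d\log n.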
Article
In a sentence reading task, words that occurred out of context were associated with specific types of event-related brain potentials. Words that were physically aberrant (larger than normal) elicited a late positive series of potentials, whereas semantically inappropriate words elicited a late negative wave (N400). The N400 wave may be an electrophysiological sign of the "reprocessing" of semantically anomalous information.
Article
Analyzing political conservatism as motivated social cognition integrates theories of personality (authoritarianism, dogmatism-intolerance of ambiguity), epistemic and existential needs (for closure, regulatory focus, terror management), and ideological rationalization (social dominance, system justification). A meta-analysis (88 samples, 12 countries, 22,818 cases) confirms that several psychological variables predict political conservatism: death anxiety (weighted mean r = .50); system instability (.47); dogmatism-intolerance of ambiguity (.34); openness to experience (-.32); uncertainty tolerance (-.27); needs for order, structure, and closure (.26); integrative complexity (-.20); fear of threat and loss (.18); and self-esteem (-.09). The core ideology of conservatism stresses resistance to change and justification of inequality and is motivated by needs that vary situationally and dispositionally to manage uncertainty and threat.
Article
Tested the hypothesis that the greater the inducement offered for performing a counterattitudinal task, the greater the dissonance, if the individuals choose not to comply with the attitude-discrepant request. It was predicted that dissonance aroused by noncompliance would be reduced by a strengthening of the original attitude. 20 undergraduates were offered either a high or a low incentive ($1.50 or $.50) for writing an essay advocating the use of codes of dress in secondary schools. The situation was devised in such a way that all Ss chose not to write the essay. Results of an attitude questionnaire indicate that high-incentive Ss became more strongly opposed to dress code regulations than either the low-incentive group or a control group (n = 10).
Article
Human perception and memory are often explained as optimal statistical inferences that are informed by accurate prior probabilities. In contrast, cognitive judgments are usually viewed as following error-prone heuristics that are insensitive to priors. We examined the optimality of human cognition in a more realistic context than typical laboratory studies, asking people to make predictions about the duration or extent of everyday phenomena such as human life spans and the box-office take of movies. Our results suggest that everyday cognitive judgments follow the same optimal statistical principles as perception and memory, and reveal a close correspondence between people's implicit probabilistic models and the statistics of the world.
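The prediction problem in this study has a standard Bayesian form: observing that a phenomenon has lasted t so far, the total duration t_total is inferred via

p(t_{\text{total}} \mid t) \propto p(t \mid t_{\text{total}})\, p(t_{\text{total}})

with p(t \mid t_{\text{total}}) = 1/t_{\text{total}} for t \le t_{\text{total}} under the usual random-sampling assumption; an optimal guess is then, for example, the posterior median, which the paper compares against people's actual judgments.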