Chapter

The Politically Motivated Reasoning Paradigm, Part 1: What Politically Motivated Reasoning Is and How to Measure It: An Interdisciplinary, Searchable, and Linkable Resource

Authors: Dan M. Kahan

Abstract

Recent research identifies politically motivated reasoning as the source of persistent public conflict over policy-relevant facts. This essay, the first in a two-part set, presents a basic conceptual model—the Politically Motivated Reasoning Paradigm—and an experimental setup—the PMRP design—geared to distinguishing the influence of politically motivated reasoning from a truth-seeking Bayesian process of information processing and from recurring biases understood to be inimical to the same. It also discusses alternative schemes for operationalizing the “motivating” group predispositions and the characteristics of valid study samples for examining this phenomenon.
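To make the design's diagnostic logic concrete, the following sketch (not taken from the paper; all beliefs, odds, and the diagnosticity of the evidence are hypothetical) shows how elicited prior and posterior odds can be converted into the likelihood ratio a subject implicitly assigned to an experimentally manipulated piece of evidence. Under truth-seeking Bayesian updating, that likelihood ratio should depend on the evidence itself, not on whether its conclusion gratifies the evaluator's political identity.

```python
def implied_lr(prior_odds: float, posterior_odds: float) -> float:
    # Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio.
    return posterior_odds / prior_odds

# Hypothetical elicited beliefs (as odds that a factual claim is true) before and
# after reading the same study, whose reported result is experimentally varied:
# the "pro" version supports the claim congenial to the left-leaning group.
elicited = {
    ("left",  "pro"): (1.0, 3.0),  ("left",  "con"): (1.0, 0.8),
    ("right", "pro"): (0.5, 0.45), ("right", "con"): (0.5, 1.4),
}
for (group, version), (prior, post) in elicited.items():
    print(f"{group:5s} {version}: implied LR = {implied_lr(prior, post):.2f}")
# Likelihood ratios that track the identity-congeniality of the evidence (here,
# above 1 for the congenial version and below 1 for the uncongenial one in both
# groups) are the signature of politically motivated reasoning, as opposed to
# Bayesian disagreement rooted in different priors.
```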


... Nam et al., 2013; Shook and Fazio, 2009). The intrinsic thesis has, however, been critiqued based on the validity and relevance of the reasoning measures that conservatives score lower on, as well as on the internal validity of its experiments (Kahan, 2012, 2016). More recent studies have, therefore, relied on more stringent measures and designs. ...
... According to this perspective, both liberals and conservatives are predisposed to reject identity-incongruent information due to a psychological need to protect beliefs that maintain their status in an affinity group (Cohen, 2003). This idea is supported by experiments in which the identity congruence of stimulus material was manipulated, which altered information processing among both liberals and conservatives (Kahan, 2016; Washburn and Skitka, 2018). ...
... The CRONOS-2 panel asked participants about their trust in two conservative-dissonant claims, two liberal-dissonant claims, and two neutral claims. The first conservative-dissonant claim is the notion that climate change is human-made, which may threaten conservatives' ideological identity due to its implications for regulatory policies (Kahan, 2016). In addition, conservatives may take cues from conservative politicians and media that contest science on this issue (Rekker, 2021). ...
Article
Full-text available
Citizens’ trust in science increasingly depends on their political leaning. Structural equation models on survey data from 10 European countries (N = 5,306) demonstrate that this science polarization can be captured by a model with four levels of generalization. Voters of populist parties distrust the system and elite in general, which indirectly fuels a broad science skepticism. At another level, right-wingers have less trust in science as a whole than left-wingers. After accounting for this general skepticism, left-wingers and right-wingers are, however, similarly prone to contest ideology-incongruent research fields and specific claims. These findings have three implications. First, research on science skepticism should carefully consider all four levels and their interplay. Second, the science polarization between populist and non-populist voters has fundamentally different origins than the effect of left-right ideology. Third, a four-level model can expose ideological symmetries in science rejection that have previously remained largely undetected in observational studies.
... There are well-known differences in risk perception and reactions, leading to polarization so strong that it is almost beyond the capacity to communicate (Kahneman [52], Opaluch and Segerson [65], Sunstein [88,89], Sunstein et al. [90], Sunstein [91], Tversky and Kahneman [96,97,98], Tversky et al. [100]). Our current work has been motivated by recent studies (Kahan [50,51]), which describe in detail the Politically Motivated Reasoning Paradigm (PMRP). We aim to create an agent-based model using biased information processing and Bayesian updating. ...
... This is especially so when the consequences of rejection by the group are more immediate and important than the results of an 'erroneous' perception of the world. Kahan [50,51] has provided a very attractive Bayesian framework, allowing one not only to describe the role of various forms of cognitive bias but also to derive empirically distinguishable predictions for different heuristics, such as confirmation bias or political predispositions. The experiments with manipulated 'evidence' described by Kahan are very interesting. ...
... We base our concepts on the Bayesian framework. Figure 1 presents the basic process flow, modelled after Kahan [50]. For simplicity, we shall assume that the belief we are modelling can be described by a single, continuous variable θ, ranging from -1 to +1 (providing a natural space for opinion polarization). ...
Preprint
We present an introduction to a novel model of individual and group opinion dynamics that takes into account the different ways in which different sources of information are filtered by cognitive biases. The agent-based model, which uses Bayesian updating of each individual's belief distribution, is based on recent psychological work by Dan Kahan. The open nature of the model allows study of the effects of both static and time-dependent biases and information-processing filters. In particular, the paper compares the effects of two important psychological mechanisms: confirmation bias and politically motivated reasoning. Depending on the effectiveness of the information filtering (agent bias), agents confronted with an objective information source may either reach a consensus based on the truth or remain divided despite the evidence. More generally, the model may provide insight into increasingly polarized modern societies, especially as it allows the mixing of different types of filters: psychological, social, and algorithmic.
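As a rough, self-contained illustration of the mechanism sketched in these excerpts (not the authors' actual model), the snippet below updates a discretized belief distribution over θ ∈ [−1, +1] by Bayes' rule, with the likelihood attenuated by an identity-protective acceptance weight; the Gaussian likelihood, the form of the filter, and all parameter values are assumptions made for the example.

```python
import numpy as np

# Discretized belief distribution over theta in [-1, 1].
theta = np.linspace(-1.0, 1.0, 201)

def normalize(p):
    return p / p.sum()

def likelihood(signal, noise=0.4):
    # Gaussian likelihood of observing `signal` for each candidate value of theta.
    return np.exp(-0.5 * ((signal - theta) / noise) ** 2)

def acceptance(belief, signal, strength=6.0):
    # Illustrative identity filter: the further a signal sits from the agent's
    # current mean belief, the more heavily it is discounted (strength=0 recovers
    # an unbiased Bayesian agent).
    mean_belief = float((theta * belief).sum())
    return np.exp(-strength * abs(signal - mean_belief))

def update(belief, signal):
    # Bayesian update with the likelihood raised to the acceptance weight.
    return normalize(belief * likelihood(signal) ** acceptance(belief, signal))

rng = np.random.default_rng(0)
agents = {  # two agents with opposed priors observing the same objective source
    "left": normalize(np.exp(-0.5 * ((theta + 0.6) / 0.2) ** 2)),
    "right": normalize(np.exp(-0.5 * ((theta - 0.6) / 0.2) ** 2)),
}
for _ in range(100):
    signal = 0.5 + 0.4 * rng.standard_normal()  # noisy evidence about a true state of +0.5
    agents = {name: update(belief, signal) for name, belief in agents.items()}

for name, belief in agents.items():
    print(name, "posterior mean:", round(float((theta * belief).sum()), 2))
# With a strong enough filter, the agent whose identity the evidence flatters tracks
# it while the other barely moves, so the two remain divided despite shared evidence.
```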
... The observational equivalence between the influence of expectancy and nonexpectancy variables has been difficult to resolve in past research on partisan bias and in studies of bias generally (Ditto, 2009). Providing more negative evaluations of politically uncongenial information could reflect that partisans' preferences, desires, and feelings about a topic are guiding information processing (Kahan, 2016), yet it could just as easily reflect processes by which partisans' expectations are leading to biases (Tappin et al., 2020a, 2021). Analogous to observations of anchoring and insufficient adjustment (Kahneman, 2003), partisans may cling too tightly to their prior beliefs when evaluating new information, causing them to downgrade the quality of new information when it does not align with their expectations. ...
... Specifically, we measured participants' confidence in their prior bias beliefs, their intellectual humility, and their strength of partisan identification to see how those variables may exacerbate (or mitigate) the biasing influence of partisans' prior beliefs. Additionally, we included an exploratory measure of social concern (i.e., how liberal or conservative one's social environment is) to see whether merely inhabiting a more polarized environment was predictive of participants' quality evaluations, which would accord with some motivated accounts of partisan reasoning (Kahan, 2016). Finally, as in our previous studies, we analyzed how the blinding manipulation and participants' prior beliefs influenced their credibility impressions and belief updating. ...
... Namely, it clarifies how it may be rational, at certain levels of analysis, to be biased. In environments where any individual's beliefs and behaviors will have a limited impact on political systems, it may be rational (i.e., sociofunctional) to express group- or identity-consistent beliefs even at the expense of accuracy or impartiality (Kahan, 2016; Pinsof et al., 2023). Where the costs of inaccuracy, incoherence, and partiality are limited, it can be "ecologically rational" and adaptive to exhibit perceptual and evaluative biases (Arkes et al., 2016). ...
Article
Full-text available
Despite decades of research, it has been difficult to resolve debates about the existence and nature of partisan bias—the tendency to evaluate information more positively when it supports, rather than challenges, one’s political views. Whether partisans display partisan biases, and whether any such biases reflect motivated reasoning, remains contested. We conducted four studies (total N = 4,010) in which participants who made unblinded evaluations of politically relevant science were compared to participants who made blinded evaluations of the same study. The blinded evaluations—judgments of a study’s quality given before knowing whether its results were politically congenial—served as impartial benchmarks against which unblinded participants’ potentially biased evaluations were compared. We also modeled the influence of partisans’ preferences and prior beliefs to test accounts of partisan judgment more stringently than past research. Across our studies, we found evidence of politically motivated reasoning, as unblinded partisans’ preferences and prior beliefs independently biased their evaluations. We contend that conceptual confusion between descriptive and normative (e.g., Bayesian) models of political cognition has impeded the resolution of long-standing theoretical debates, and we discuss how our results may help advance more integrative theorizing. We also consider how the blinding paradigm can help researchers address further theoretical disputes (e.g., whether liberals and conservatives are similarly biased), and we discuss the implications of our results for addressing partisan biases within and beyond social science.
... Susceptibility to misinformation and belief polarization is often attributed to people's motivation to protect their valuable identities and affirm their ideologies even in defiance of the truth (Kahan 2016). Although the neurocognitive processes that underlie politically motivated belief formation are still largely unknown, scholars have proposed competing cognitive models of this phenomenon (Hughes and Zaki 2015, Sharot and Garrett 2016, Van Bavel and Pereira 2018). ...
... In this respect, directional deviations from the Bayesian benchmark would indicate that people engage in motivated reasoning and express two distinct biases (Fig. 2b): (i) a "desirability bias" whereby people distort their inference process in ways that confirm their desired beliefs (Kahan 2016, Flynn et al. 2017) and (ii) an "identity bias" whereby people incorporate information from in-group sources and resist influence from out-group sources (Abrams and Hogg 1990, Kahan 2016, Guilbeault et al. 2018, Kim et al. 2020). Our study design provides a clean setting to examine whether brain responses to the messages predict the subsequent expression of desirability and identity biases in polarized settings. ...
Article
Full-text available
Susceptibility to misinformation and belief polarization often reflect people’s tendency to incorporate information in a biased way. Despite the presence of competing theoretical models, the underlying neurocognitive mechanisms of motivated reasoning remain elusive as previous empirical work did not properly track the belief formation process. To address this problem, we employed a design that identifies motivated reasoning as directional deviations from a Bayesian benchmark of unbiased belief updating. We asked members of a pro-immigration or an anti-immigration group how much they endorse factual messages on foreign criminality, a polarizing political topic. Both groups exhibited a desirability bias by over-endorsing attitude-consistent messages and under-endorsing attitude-discrepant messages and an identity bias by over-endorsing messages from ingroup members and under-endorsing messages from outgroup members. In both groups, neural responses to the messages predicted subsequent expression of desirability and identity biases suggesting a common neural basis of motivated reasoning across ideologically opposing groups. Specifically, brain regions implicated in encoding value, error detection, and mentalizing tracked the degree of desirability bias. Less extensive activation in the mentalizing network tracked the degree of identity bias. These findings illustrate the distinct neurocognitive architecture of desirability and identity biases and inform existing cognitive models of politically motivated reasoning.
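The following toy calculation (with made-up numbers and a single assumed diagnosticity for every message, so it is only a caricature of the study's actual benchmark) illustrates how directional deviations from a Bayesian posterior can be folded into separate desirability-bias and identity-bias scores.

```python
import numpy as np

def bayes_posterior(prior, supports, diagnosticity=3.0):
    # Posterior odds = prior odds * likelihood ratio; a message arguing against
    # the claim carries the reciprocal likelihood ratio. Diagnosticity is assumed.
    lr = diagnosticity if supports else 1.0 / diagnosticity
    odds = (prior / (1.0 - prior)) * lr
    return odds / (1.0 + odds)

# Hypothetical trials: prior belief that the claim is true, reported post-message
# belief, whether the message is attitude-consistent, whether the sender is an
# ingroup member, and whether the message argued for the claim.
trials = [
    (0.40, 0.75, True,  True,  True),
    (0.60, 0.55, False, False, True),
    (0.30, 0.20, False, True,  False),
    (0.70, 0.85, True,  False, True),
]

desirability, identity = [], []
for prior, reported, consistent, ingroup, supports in trials:
    deviation = reported - bayes_posterior(prior, supports)
    # Over-endorsing congenial (or ingroup) messages and under-endorsing the rest
    # both push the signed, flipped deviation in the positive direction.
    desirability.append(deviation if consistent else -deviation)
    identity.append(deviation if ingroup else -deviation)

print("desirability bias:", round(float(np.mean(desirability)), 3))
print("identity bias:    ", round(float(np.mean(identity)), 3))
```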
... Motivated reasoning is evident in that, alongside a desire to arrive at accurate beliefs about the state of the world (accuracy goals), reasoning can be driven in a particular direction by ex ante directional goals. These directional goals arise from recipients' desires to protect their salient identities, existing beliefs, or values (Kahan 2016; Kunda 1990; Pennycook 2023; Taber and Lodge 2006). Motivated reasoning is found for citizens (Jilke 2018) and service users (Christensen 2018) as well as public employees (Petersen et al. 2018) and politicians (Baekgaard et al. 2019). ...
... Christensen (2018) identified users' directional goals stemming from avoiding the transaction costs of redoing their choice of service provider. Service settings of ongoing and close interaction between users and providers further fuel motivated reasoning through feelings of attachment and organizational identification with providers (Kahan 2016; Mael and Ashforth 1992). More generally, identity-driven motivated reasoning among formal members of public organizations makes them selectively skeptical toward negative information about their organization (Petersen et al. 2018). ...
... This computation is possible because we elicit participants' prior beliefs and because the performance information in the experiment is assigned according to a probabilistic but known process, essentially analogous to participants drawing a selection of information from an urn containing multiple pieces of information (Möbius et al. 2022; Zimmermann 2020). Our study of motivated reasoning among service users draws on the identity-protective cognition model (Kahan 2013, 2016; Kahan et al. 2017; Pennycook 2023). Users' identification with their service provider makes negative information about the provider seem inconvenient and disturbing to them, causing such information to be treated with asymmetric skepticism compared to positive information (see also Petersen et al. 2018). ...
Article
Full-text available
Although performance information is widely promoted to improve the accountability of public service provision, behavioral research has revealed that motivated reasoning leads recipients to update their beliefs inaccurately. However, the reasoning processes of service users have been largely neglected. We develop a theory of public service users’ motivated reasoning about performance information stemming from their identification with the organization providing their services. We address a significant challenge to studying motivated reasoning—that widely used existing research designs cannot rule out alternative cognitive explanations, especially Bayesian learning, such that existing findings could be driven by strong prior beliefs rather than biased processing of new information. We use a research design incorporating Bayesian learning as a benchmark to identify departures from an accuracy-motivated reasoning process. We assess the empirical implications of the theory using a preregistered information provision experiment among parents with children using public schools. To assess their identity-based motivated reasoning, we provide them with noisy, but true, performance information about their school. Overall, we find no evidence of directionally motivated reasoning. Instead, parents change their beliefs in response to performance feedback in a way that largely reflects conservative Bayesian learning. Performance reporting to service users is less vulnerable to motivational biases in this context than suggested by the general literature on motivated reasoning. Furthermore, exploratory findings show that performance information can correct erroneous beliefs among misinformed service users, suggesting that investment in reporting performance to service users is worthwhile to inform their beliefs and improve accountability.
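A minimal sketch of the kind of urn-style benchmark described above (the quintile scale, the signal accuracy, and the parent's prior are all invented for illustration): the elicited prior is combined with the known signal-generating process to produce the Bayesian posterior against which reported beliefs can be compared.

```python
from fractions import Fraction

# Performance information is modeled as a draw that reports the school's true
# quintile with probability ACCURACY and otherwise one of the other quintiles
# uniformly at random (a stand-in for the experiment's known noisy process).
QUINTILES = [1, 2, 3, 4, 5]
ACCURACY = Fraction(3, 5)

def benchmark_posterior(prior, signal):
    # prior: dict quintile -> elicited prior probability; signal: reported quintile.
    def lik(q):
        return ACCURACY if q == signal else (1 - ACCURACY) / (len(QUINTILES) - 1)
    unnorm = {q: prior[q] * lik(q) for q in QUINTILES}
    total = sum(unnorm.values())
    return {q: p / total for q, p in unnorm.items()}

# A parent's elicited prior, tilted toward believing the school performs well.
prior = {1: Fraction(1, 20), 2: Fraction(1, 10), 3: Fraction(1, 5),
         4: Fraction(3, 10), 5: Fraction(7, 20)}
posterior = benchmark_posterior(prior, signal=2)  # a negative (but true) signal
print({q: round(float(p), 3) for q, p in posterior.items()})
# Reported beliefs that discount this negative signal more than the benchmark does
# would indicate identity-protective (asymmetric) skepticism; beliefs that move
# toward it, but not all the way, look like conservative Bayesian learning.
```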
... For some of these questions there is an unquestionable scientific consensus, while for others it has not been clearly established. What they have in common, however, is that they are linked to personal, social, political, and cultural identities and values that are likely to trigger motivated reasoning driven by directional goals (Kahan, 2016, 2017a; Sherman and Cohen, 2006). Indeed, much of the skepticism toward science is a consequence of motivated cognition: people are less inclined to accept findings that threaten their values and worldviews (e.g., ...
... The motivations behind conflicting beliefs are often grounded in identity protection or self-affirmation (e.g., Kahan, 2016; Sherman and Cohen, 2006). Worldviews, values, ideologies, and group identifications are central to self-concept and identity. ...
... It follows that the findings of much experimental research are nondiagnostic as to whether observed myside biases result from motivated or from "cold," unmotivated cognition. However, some theorists point out that, even if reasoning can be rational in the normative, Bayesian sense within such experiments, the divergent prior beliefs underlying such cognitive processing may themselves be the product of biased cognition, and reasoning that does not, on the whole, converge toward epistemic accuracy is in a global sense less rational (Kahan, 2016; Stanovich, 2021). The experiments, after all, give no insight into the origin of the prior beliefs themselves. ...
Article
Full-text available
In a world flooded with information, our capacity for rational thinking is often put to the test. In this paper we provide a theoretical and empirical review of the literature that clarifies when and why beliefs sometimes diverge from empirically verified knowledge and facts, particularly in the context of worldview polarization. Starting from the framework of dual-process theories for understanding individual differences in rationality, we consider two theses on the role of analytic reasoning in the (non)acceptance of epistemically grounded evidence and facts subject to worldview polarization. Findings to date paint a complex picture and leave room for different theoretical interpretations, pointing to the need for a comprehensive interdisciplinary approach to further deepen the field.
... Many progressive climate activists do not budge on the risks associated with nuclear energy, even in light of evidence that they might be over-estimating those risks. Republicans and Democrats in the USA maintain radically different risk assessments of permissive gun laws (among many other topics) (e.g., Kahan et al. 2011; Kahan 2012; Kahan 2016) even if given the same evidence. Generalizing:
... I have offered two lines of argument for the claim that identity-protective reasoning can be epistemically positive, challenging the received view that it is epistemically vicious and politically dangerous. My arguments imply that we should reject blanket recommendations to reduce identity-protective reasoning, whether by cultivating humility (Carter and McKenna 2020), reducing the salience of social identities in contexts of reasoning or deliberation (Talisse 2019; Klein 2020), or attempting to avoid the connection of empirical beliefs with identities (Kahan 2012; Kahan 2016; Kahan 2017). ...
Article
Full-text available
Identity-protective reasoning – motivated reasoning driven by defending a social identity – is often dismissed as a paradigm of epistemic vice and a key driver of democratic dysfunction. Against this view, I argue that identity-protective reasoning can play a positive epistemic role, both individually and collectively. Collectively, it facilitates an effective division of cognitive labor by enabling groups to test divergent beliefs, serving as an epistemic insurance policy against the possibility that the total evidence is misleading. Individually, it can correct for the distortions that arise from taking ideologically skewed evidence at face value. This is particularly significant for members of marginalized groups, who frequently encounter evidence that diminishes the value of their identities, beliefs, and practices. For them, identity-protective reasoning can counter dominant ideological ignorance and foster resistant standpoint development. While identity-protective reasoning is not without risks, its application from marginalized and counter-hegemonic positions carries epistemic benefits crucial in democracies threatened by elite capture. Against dominant views in contemporary political epistemology and psychology, identity-protective reasoning should be reconceived as a resource to be harnessed and not a problem to be eradicated.
... According to the motivated system 2 reasoning approach (Kahan, 2016), the reason people avoid revisiting certain information may not be a failure to detect conflicts but rather a reluctance to engage further with that specific information. This may account for the changes in overconfidence, but not in fake news vulnerability, observed in the current study. ...
... First, all study materials were presented in Romanian. As a result, cultural cognition, viewed as a form of motivated system 2 reasoning (Kahan, 2016; Mustață et al., 2023) and particular to post-communist countries, may have influenced our outcomes. However, studies from Ukraine, another post-communist country, have shown alignment with the existing literature (Erlich et al., 2022). ...
... Rather than being truth-seekers, people often evaluate the veracity of information in line with non-truth-seeking motives, and their reasoning is directionally motivated towards desired or identity-protective conclusions (Kahan, 2016; Kunda, 1990). In today's political landscape, it is not just opinions that differ along ideological lines, but also beliefs about factual questions (Rekker & Harteveld, 2022; Van Bavel & Pereira, 2018). ...
... (a) Participants' motives on different topics are assessed through pretreatment self-report items (e.g., "There are too many immigrants in the UK"). Participants who agree with this item are classified as holding anti-immigration motives and assumed to be more likely to perceive messages indicating higher immigrant numbers as true, as this would allow them to protect their political identity or advocate for their political party's goals (Kahan, 2016; Williams, 2023). (b) In the Fake News Game, for each topic, participants first provide their median guess to a numerical question, i.e., a guess for which they believe it is equally likely that the correct answer falls above or below. ...
Preprint
Full-text available
People often favour information aligned with their ideological motives. Can our tendency for directional motivated reasoning be overcome with cognitive control? It remains contested whether cognitive control processes, such as cognitive reflection and inhibitory control, are linked to a greater tendency to engage in politically motivated reasoning, as proposed by the “motivated reflection” hypothesis, or can help people overcome it, as suggested by cognitive science research. In this pre-registered study (N = 504 UK participants rating n = 4963 news messages), we first provide evidence for motivated reasoning on multiple political and non-political topics. We then compare the relative evidence for these two competing hypotheses and find that for political topics, it is 20 times more likely that cognitive reflection is associated with less motivated reasoning – in contrast to the prediction from the influential “motivated reflection” hypothesis. Our results highlight the need for more nuanced theories of how different cognitive control processes interact with motivated reasoning.
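The excerpt above describes how pretreatment motives and median guesses are combined to code each message as congenial or uncongenial; the sketch below implements one plausible version of that coding rule and a simple motivated-reasoning score (the rule, the variable names, and the toy data are assumptions, not the study's materials).

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class Rating:
    motive_anti: bool      # pretreatment: agrees that "there are too many immigrants"
    median_guess: float    # participant's median guess for the numeric question
    message_value: float   # figure stated in the news message shown to them
    judged_true: bool      # did the participant rate the message as true?

def congenial(r: Rating) -> bool:
    # A message claiming MORE immigrants than the participant expected is coded as
    # congenial for anti-immigration motives, and vice versa (coding rule assumed).
    claims_more = r.message_value > r.median_guess
    return claims_more == r.motive_anti

def motivated_reasoning_score(ratings):
    # Difference in acceptance rates for motive-congenial vs. uncongenial messages.
    cong = [r.judged_true for r in ratings if congenial(r)]
    uncong = [r.judged_true for r in ratings if not congenial(r)]
    return mean(cong) - mean(uncong)

ratings = [  # toy data for a single participant
    Rating(True, 700_000, 900_000, True),
    Rating(True, 700_000, 500_000, False),
    Rating(True, 300_000, 450_000, True),
    Rating(True, 300_000, 250_000, True),
]
print(motivated_reasoning_score(ratings))  # > 0 suggests directionally motivated acceptance
```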
... In our case, we show that when judges are making high-stakes decisions in Court, they could exhibit polarization in writing. Building on the framework by [12] on politically motivated reasoning, recent work by [13] provides a new design to assess politically motivated reasoning based on trust in news. ...
... Career advancement incentives add another layer: [48] find that judges who appear on shortlists for promotion sometimes alter their judicial behavior, a pattern that parallels evidence of electoral-cycle adjustments in other judicial domains [37,46]. More broadly, judges' concerns about professional standing, institutional legitimacy, and adherence to norms of impartiality [12,22,28] can all contribute to a moderation in overt partisanship near elections, helping explain the decline in polarization we observe in both opinion text and citations leading up to midterm contests. ...
Article
Full-text available
This study explores politically motivated reasoning among U.S. Circuit Court judges over the past 120 years, examining their writing style and use of previous case citations in judicial opinions. Employing natural language processing and supervised machine learning, we scrutinize how judges’ language choices and legal citations reflect partisan slant. Our findings reveal a consistent, albeit modest, polarization in citation practices. More notably, there is a significant increase in polarization within the textual content of opinions, indicating a stronger presence of motivated reasoning in their prose. We also examine the impact of heightened scrutiny on judicial reasoning. On divided panels and as midterm elections draw near, judges show an increase in dissent votes while decreasing in polarization in both writing and citation practices. Furthermore, our study explores polarization dynamics among judges who are potential candidates for Supreme Court promotion. We observe that judges on the shortlist for Supreme Court vacancies demonstrate greater polarization in their selection of precedents. “I pay very little attention to legal rules, statutes, constitutional provisions ... The first thing you do is ask yourself — forget about the law — what is a sensible resolution of this dispute? ... See if a recent Supreme Court precedent or some other legal obstacle stood in the way of ruling in favor of that sensible resolution. ... When you have a Supreme Court case or something similar, they’re often extremely easy to get around.” (An Exit Interview with Richard Posner, The New York Times, Sep. 11, 2017).
... Individuals, however, do not only have truth-seeking motives when evaluating new information: their judgment and communication are also influenced by strategic, social goals (Kunda, 1990), such as signalling commitments to causes and fitting into moral communities (Kahan 2011, 2016; Williams, 2023; Tetlock, 2002; Van Bavel & Pereira, 2018), and trying to influence other people to increase their investment in causes that benefit the agent or the community (Fitouchi & Singh, 2022; Marie & Petersen, 2022; Kurzban et al., 2010; Pinsof, Sears & Haselton, 2023; Tetlock, 2002). In parallel, we expected potential sex differences in evaluations of research documenting sex-based hiring discrimination against women in academia to be largely explained by differences in MCGE. Indeed, Handley et al.'s (2015) observation that men tend to judge evidence of hiring discrimination against women less positively than women might be largely reducible to men being on average lower in MCGE than women due to differences in personal experiences. ...
... However, it is notoriously difficult to manipulate deep and stable moral convictions, hence our resort to observational designs in which only the message presented to respondents was experimentally manipulated, as is standard within research on individual differences and science consumption (e.g., Tappin et al., 2020a, 2020b; Kahan et al., 2011, 2016; Lord et al., 1979). ...
Article
Full-text available
Exploring what modulates people's trust in evidence of hiring discrimination is crucial to the deployment of corrective policies. Here, we explore one powerful source of variation in such judgments: moral commitment to gender equality (MCGE), that is, perceptions of the issue as a moral imperative and as identity‐defining. Across seven experiments (N = 3579), we examined folk evaluations of scientific reports of hiring discrimination in academia. Participants who were more morally committed to gender equality were more likely to trust rigorous, experimental evidence of gender discrimination against women. This association between moral commitment and research evaluations was not reducible to prior beliefs, and largely explained a sex difference in people's evaluations on the issue. On a darker note, however, MCGE was associated with increased chances of fallaciously inferring discrimination against women from contradictory evidence. Overall, our results suggest that moral convictions amplify people's myside bias, bringing about both benefits and costs in the public consumption of science.
... Many experimental designs focus on beliefs about the meaning or reliability of information on variables where subjects have directional motives, which Kahan (2015) calls the Politically Motivated Reasoning Paradigm (see also Taber & Lodge, 2006; Flynn, Nyhan & Reifler, 2017). Although there are variants of this approach, a key component underlying many of them is that if subjects with different directional motives about a fact (say, that global warming is real) provide different assessments of the quality of evidence about that fact (e.g., a study about the severity of global warming), this indicates that they are engaged in motivated reasoning. ...
Article
Full-text available
Can we use the way that people respond to information as evidence that partisan bias or directional motives influence political beliefs? It depends. Using one natural formalization of motivated reasoning as wanting to believe certain things (“once-motivated reasoning”), this is not possible. Anyone exhibiting this kind of motivated reasoning has a “Fully Bayesian Equivalent” with a different prior, who has identical posterior beliefs upon observing any signal. This result clarifies what we can and cannot learn from several prominent research designs and identifies a set of results inconsistent with both Bayesian updating and once-motivated reasoning. An expanded version of the model where subjects sometimes completely reject signals that lead to less pleasant beliefs (“twice-motivated reasoning”) can explain these anomalies. The models clarify which empirical tests can provide evidence for different kinds of motivated beliefs and can be incorporated into decision- and game-theoretic models.
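A small numerical check of the "Fully Bayesian Equivalent" logic, using one common way of formalizing once-motivated reasoning (the functional form and all numbers here are illustrative assumptions, not the article's notation): if the motivated posterior is proportional to prior × likelihood × exp(value of holding each belief), the directional term can be absorbed into the prior, so an ordinary Bayesian with that modified prior reproduces the same posterior after every possible signal.

```python
import numpy as np

theta = ["claim false", "claim true"]
prior = np.array([0.5, 0.5])
value = np.array([1.0, -0.5])   # assumed directional payoff from holding each belief

def likelihood(signal):
    # The signal matches the true state with probability 0.8 (assumed).
    return np.array([0.8, 0.2]) if signal == 0 else np.array([0.2, 0.8])

def normalize(p):
    return p / p.sum()

def motivated_posterior(signal):
    # Once-motivated reasoning: prior * likelihood * exp(directional value).
    return normalize(prior * likelihood(signal) * np.exp(value))

fbe_prior = normalize(prior * np.exp(value))  # the Fully Bayesian Equivalent's prior

def fbe_posterior(signal):
    return normalize(fbe_prior * likelihood(signal))

for s in (0, 1):
    assert np.allclose(motivated_posterior(s), fbe_posterior(s))
    print("signal", s, dict(zip(theta, np.round(motivated_posterior(s), 3))))
# Identical posteriors for every signal: from updating behavior alone, this kind of
# directional motive cannot be distinguished from an unmotivated agent who simply
# holds a different prior.
```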
... Further, the better the subjects' mathematical aptitude, the more likely they were to misread the studies as confirming what they want to believe (Kahan 2016). Exhibiting intergroup bias is perhaps the point of politics for most citizens. As Appiah (2018) summarizes, "People don't vote for what they want. ...
Article
Full-text available
Many businesses police employees’ extramural political speech and beliefs. They refuse to hire potential employees or will fire and blackball current employees for what they say and believe about politics. This paper argues that business managers should, with a few narrow exceptions, forbear from doing so. It grants that some political speech and beliefs, such as racist speech, can indeed be wrongful and presumptive grounds for disassociating with others. However, I argue that we cannot even in principle, even roughly, determine what makes some beliefs bad enough to merit such blackballing. Further, I argue that people in general, and business managers in particular, are likely to be unreliable in practice in assessing which speech or beliefs are unacceptable. Accordingly, businesses should forbear from policing speech because they are likely to be incompetent and unreliable at such policing.
... I focus on pro-immigration respondents in particular because these are the people who should be more responsive to this information due to its ideological congruency (Kahan 2016). Relatedly, I also focus on relatively uncommon sociotropic arguments as opposed to common humanitarian arguments in favor of expanding immigration, because pro-immigration respondents are already more likely to be familiar with and agree with the latter than the former. ...
Article
Full-text available
How can public opinion change in a pro-immigration direction? Recent studies suggest that those who support immigration care less about it than those who oppose it, which may explain why lawmakers do not enact pro-immigration reforms even when voters are pro-immigration. To see if the personal issue importance of immigration can be changed, I conducted a probability-based, nationally representative US survey experiment (N = 3,450) exposing respondents to verifiable arguments about the broad national benefits of expanding legal immigration and the costs of not doing so. Using new measures of issue importance, my descriptive results show that only one-fifth of voters who prioritize the issue have a pro-immigration preference. Furthermore, while anti-immigration respondents prioritize policies regarding law enforcement and (reducing) future immigration, pro-immigration respondents prioritize (helping) immigrants already here. The experimental results confirm that the provided arguments raised immigration’s importance among pro-immigration voters but did not backfire by mobilizing anti-immigration voters. Contrary to expectations, the arguments increased pro-immigration policy preferences, but did not change voters’ subissue priorities within immigration or their willingness to sign a petition. Overall, the treatment was effective beyond changing minds by shifting stated issue positions and priorities in a pro-immigration direction. It can thus be used in a nontargeted information campaign to promote pro-immigration reforms.
... Trump's invocation of shared victimhood has been effective, and people who see him as their ally and champion in the battle against evil forces threatening their security and way of life have a powerful incentive to use one or more modes of motivated reasoning to protect their good opinion of him: avoiding, ignoring, disbelieving, discounting, excusing, or dismissing as irrelevant anything suggesting that he might not deserve their support (Lodge and Taber 2013, Kunda 1990, Kahan 2016). And most do, as evidenced here by their reactions to his indictments, conviction, and civil suit losses. ...
Preprint
The sharp momentum shifts in the presidential contest during the summer of 2024—induced by Joe Biden’s disastrous debate performance, Donald Trump’s attempted assassination, Biden’s withdrawal and the subsequent surge in support for Kamala Harris—have overshadowed the most striking feature of the 2024 election: Donald Trump’s return from exile in Mar-a-Lago to win easy nomination and a serious chance of returning to the White House despite the ignominy of the January 6, 2021, Capitol invasion, felony indictments in four jurisdictions (with convictions on all 34 counts in one of them), and losses in three civil suits since his departure from the White House in 2021. This paper documents and attempts to explain this reality through analysis of hundreds of surveys probing reactions to Trump’s criminal charges and civil suits, with an eye to gauging their potential role in shaping voting choices in 2024.
... One experiment, which inspired the present research program, demonstrated that framing the need for environmental action as patriotic and congruent with the preservation of the status quo made high system-justifying individuals more willing to sign a petition on behalf of the environment, compared to a control condition (8). Although this was a promising result, one methodological critic aptly noted that a small sample of undergraduate students "will not support valid inferences about 'message frames' likely to offset politically motivated reasoning among individuals who are skeptical of climate change" (22). ...
Article
Despite growing scientific alarm about anthropogenic climate change, the world is not on track to solve the crisis. Inaction may be partially explained by skepticism about climate change and resistance to proenvironmental policies from people who are motivated to maintain the status quo (i.e., conservative-rightists). Therefore, practical interventions are needed. In the present research program, we tested an experimental manipulation derived from system justification theory in which proenvironmental initiatives were framed as patriotic and necessary to maintain the American “way of life.” In a large, nationally representative U.S. sample, we found that the system-sanctioned change intervention successfully increased liberal-leftists’ as well as conservative-rightists’ belief in climate change; support for proenvironmental policies; and willingness to share climate information on social media. Similar messages were effective in an aggregated analysis involving 63 countries, although the overall effect sizes were small. More granular exploratory analyses at the country level revealed that while the intervention was moderately successful in some countries (e.g., Brazil, France, Israel), it backfired in others (Germany, Belgium, Russia). Across the three outcome variables, the effects of the intervention were consistent and pronounced in the United States, in support of the hypothesis that system justification motivation can be harnessed on behalf of social change. Potential explanations for divergent country-level effects are discussed. The system-sanctioned change intervention holds considerable promise for policymakers and communicators seeking to increase climate awareness and action.
... Dan Kahan argues that these polarized disagreements about facts reflect politically motivated reasoning: people disagree about facts because they disagree about moral and political values. They evaluate the evidence so as to bring their factual conclusions into line with the values of the group they belong to and to defend that group against criticism from rival groups (Kahan, 2016a, 2016b; Kahan et al., 2017). Consider the example of gun-rights advocates, who are mostly white men. ...
Article
Full-text available
In discussions about justice, different perspectives generally share moral principles but reach distinct conclusions about justice because they disagree about facts. I argue that motivated reasoning, epistemic injustice, and ideologies of injustice support unjust institutions by consolidating distorted representations of the world. Starting from a naturalistic conception of justice as a kind of social contract, I will suggest some strategies for discovering what justice demands while neutralizing these biases. Moral sentiments offer essential resources to this end.
... These results lend support to the core idea of the PMRP framework: that prior attitudes and cultural markers not only directly influence the directional outcome of the post-evidence opinion (the confirmation bias model) but also influence the process by which the incoming message and messenger are elaborated. And this influence runs both ways: it changes direction when (a) the media emphasis frame changes or (b) the prior beliefs or cultural markers of the perceiver change (Kahan, 2012, 2015, 2016; Kahan et al., 2011; Kahan, Peters, et al., 2012). ...
Article
Full-text available
On Chinese social media, the stigmatization of homosexuals is tightly connected to the belief that they have a higher risk of contraction than others. However, scientists’ estimation of such risks is selectively framed on media outlets, and could cause confusion about and even polarization around the topic. In the theoretical framework of motivated reasoning, the current study showcases a cognition-intention link in the processing of scientific information regarding homosexuals’ high HIV/AIDS prevalence in China. An online survey experiment (N = 695) using different emphasis frames of the findings from a scientific report shows that ad hoc identification with homosexuals’ rights, and individualism, strongly moderates the direct effect of exposure to different messages on intention of message forwarding, and also the indirect effect mediated by the perception of scientists’ expertise.
... For example, DEI initiatives and climate change can be perceived as identity-threatening when they make members of structurally advantaged groups feel like perpetrators of injustice and/or that their power and status is likely to be diminished if they address injustice. Similarly, political extremism can root people in protecting their identity and affirming their political ideologies to boost their sense of belonging (Kahan, 2015). This can encourage people to create distinctions and embrace divides between groups to affirm their own identity (e.g., Fiorina et al., 2008). ...
Article
Full-text available
Injustice lies at the heart of many societal challenges. By adopting the lens of injustice, we argue that critical insights and interventions can be illuminated. We highlight the importance of healing for addressing the pain and trauma of injustice as well as the role of justice in the healing process, where it can serve as a motivating force (e.g., when people desire justice), healing salve (e.g., when people “do justice”), and desired end state (e.g., working towards a just society). In doing so, we outline how to facilitate healing from injustice and enable the transition from injustice to justice. We provide an agenda for future research that showcases the importance of further understanding the pain and trauma of injustice. We conclude with a call for scholars and practitioners to engage in courageous action to recognize the toll of injustice, promote healing, and work towards a more just society.
... For the control group, we expect that they will be presented with increasingly extreme and less diverse news over time, in line with Liu et al. (2021). Given that party affiliation is a key predictor of the online content with which people engage (Allen, Martel, and Rand 2022; Törnberg 2022), that partisans are more entrenched in their beliefs (Brewer 2005) and thus more likely to be motivated by their preexisting beliefs (Kahan 2015; Lodge and Taber 2000), and given the connections between partisanship and news extremeness (Tewksbury and Riles 2015; Levendusky and Malhotra 2016), we expect to see the largest shifts for users who consume more moderate news content. Regarding model accuracy, measured through up-vote ratio, we expect our models to improve as additional training data is fed into them, increasing the up-vote ratio over time. ...
Article
While recommendation systems enable users to find articles of interest, they can also create "filter bubbles" by presenting content that reinforces users' pre-existing beliefs. Users are often unaware that the system placed them in a filter bubble and, even when aware, they often lack direct control over it. To address these issues, we first design a political news recommendation system augmented with an enhanced interface that exposes the political and topical interests the system inferred from user behavior. This allows the user to adjust the recommendation system to receive more articles on a particular topic or presenting a particular political stance. We then conduct a user study to compare our system to a traditional interface and found that the transparent approach helped users realize that they were in a filter bubble. Additionally, the enhanced system led to less extreme news for most users but also allowed others to move the system to more extremes. Similarly, while many users moved the system from extreme liberal/conservative to the center, this came at the expense of reducing political diversity of the articles shown. These findings suggest that, while the proposed system increased awareness of the filter bubbles, it had heterogeneous effects on news consumption depending on user preferences.
... If friction is left unmanaged (as it often is in online spaces), it can take a psychological toll, and this often results in negative epistemic outcomes. For example, some people double down on deeply held beliefs they (know they) don't have sufficient evidence for, as an identity-protecting defence mechanism (Kahan 2016; Lewandowsky 2021), and others avoid engaging with certain topics, platforms, or groups, or disengage from (especially online) debate altogether (Syvertsen 2020). The amount of epistemic friction associated with a topic is not fixed, and can vary depending on the social context of discourse. ...
Article
Full-text available
It is widely accepted that public discourse as we know it is less than ideal from an epistemological point of view. In this paper, we develop an underappreciated aspect of the trouble with public discourse: what we call the Listening Problem. The listening problem is the problem that public discourse has in giving appropriate uptake and reception to ideas and concepts from oppressed groups. Drawing on the work of Jürgen Habermas and Nancy Fraser, we develop an institutional response to the listening problem: the establishment of what we call Receptive Publics, discursive spaces designed to improve listening skills and to give space for counterhegemonic ideas.
Article
Full-text available
Research on U.S. political media has demonstrated that mainstream and right-wing news are qualitatively distinct in a variety of ways. However, the dominant paradigm of political polarization and its attendant assumptions have restricted researchers from putting these descriptive insights into new and potentially generative theoretical context. In this article, we propose a way forward, arguing for the merits of conceptualizing right-wing news as a quasi-religious phenomenon. Putting empirical findings in dialogue with core theoretical insights from the sociology of religion, we argue that the right-wing news ecosystem has epistemic, functional, and ecological features that are more characteristic of religion than its mainstream media counterpart. We illustrate the usefulness of these distinctions by applying them to the case of Fox News and their reporting of the 2020 presidential election. Finally, we discuss how our conceptual framework advances current and future research on mis/disinformation, international politics, and the structural causes and consequences of right-wing news media’s ascendance.
Article
Recent research indicates a generally negative relationship between reflection and conspiracy beliefs. However, most of the existing research relies on correlational data on WEIRD (Western, Educated, Industrialized, Rich, Democratic) populations. The few existing experimental studies are limited by weak manipulation techniques that fail to reliably activate cognitive reflection. Hence, questions remain regarding (1) the consistency of the negative relationship between conspiracy beliefs and cognitive reflection, (2) the extent of cross-cultural variation and potential moderating factors, and (3) the presence of a causal link between cognitive reflection and conspiracy beliefs. In two preregistered studies, we investigated the association between cognitive reflection and conspiracy beliefs. First, we studied the correlation between two variables across 48 cultures and investigated whether factors such as WEIRDness and narcissism (personal and collective) moderate this relationship. In the second study, we tested the causal effect of reflection using a reliable and effective manipulation technique—debiasing training—on both generic and specific conspiracy beliefs. The first study confirmed the negative association between reflection and belief in conspiracy theories across cultures, with the association being notably stronger in non-WEIRD societies. Both personal and collective narcissism played significant moderating roles. The second study demonstrated that debiasing training significantly decreases both generic and COVID-19 conspiracy beliefs in a non-WEIRD context, with more pronounced effects for general conspiracy beliefs. Our research supports that reflection is a consistent cross-cultural predictor of conspiracy beliefs and that activating reflection can reduce such beliefs through rigorous experimental interventions.
Article
Advocates of radical realist theories of legitimacy propose that political legitimation narratives are often void where they show signs of motivated reasoning. In a recent critique of the method, example cases have been put forward in which an analysis and critique of flawed justification narratives seems urgently called for, and yet motivated reasoning is absent. This, critics suggest, should deflate the prominence of motivated reasoning within radical realism. I argue here that those cases are misconstrued. Motivated reasoning can either be easily identified therein, or the cases are irrelevant to begin with. The issue with realism’s motivated reasoning connection lies elsewhere: in the explanatory direction of fit between epistemic circularity and motivated reasoning. The former explains the normative salience of the latter. Hence, I hope this intervention clarifies a misunderstood and underexplored aspect of contemporary radical realist theory and adds to the contextualisation of the psychology of motivated reasoning within normative social theory more broadly.
Article
Full-text available
Why do we vote, protest, and boycott? Economists explain partisan actions, despite their costs, by arguing that political irrationality on the part of a single partisan isn’t costly to them as an individual - they can afford the political irrationality, despite the social costs. And some philosophers worry about the moral and epistemic costs of political irrationality. Here I argue that political irrationality has some benefits: it encourages partisans to engage in virtue signaling and rationalization in politics. And while virtue signaling and rationalization are often epistemically and morally bad, they can nonetheless confer benefits too, like facilitating societal and moral progress.
Article
Is it possible to do ideology critique without morality? In recent years a small group of theorists has attempted to develop such an account and, in doing so, makes claim to a certain sort of “radical realism” distinguished by the ambition to ground political judgments and prescriptions in nonmoral values, principles, or concepts. This essay presents a twofold critique of this realist ideology critique (RIC) by first offering an internal critique of the approach and then arguing that the very attempt to do political theory generally—and ideology critique more specifically—in a way that abjures morality is misguided. In doing so, I contribute both to current debates around “new” ideology critiques and to contested questions about what it means to do political theory realistically.
Article
Explanations for ethnic voting have focused primarily on voters’ use of ethnicity as a heuristic for evaluating parties or candidates, or on the expressive benefits voting for coethnics may provide. This article describes and tests a largely overlooked explanation for ethnic voting resulting from group norms and social pressure. Employing a combination of experimental and observational data from Kenya—as well as observational data from three other African countries—it finds evidence that many voters have no intrinsic preference for coethnic candidates, but that their desire to conform to the norms of their ethnic community drives them to vote along ethnic lines. The results have important implications for our understanding of ethnic voting, as well as the conditions under which survey respondents provide truthful answers about group-related preferences.
Article
Satire is often used in science communication, but it is unclear how it influences perceptions of message credibility and reliance on the information. We examine how two satire types (gentle, harsh) influence perceived message credibility and information reliance, which we define as using the information in discussions or for attitudinal and behavioral changes. Using a partial mediation model, we found no effects of gentle satire, but harsh satire negatively influenced message credibility, which was positively linked to information reliance. Contrary to previous research, we found that the satire type matters. Practical implications include being cautious when using harsh satire.
Article
Communicating the “97%” scientific consensus has been the centerpiece of the effort to persuade climate skeptics. Still, this strategy may not work well for those who mistrust climate scientists to begin with. We examine how the American public—Republicans in particular—respond when provided with a relatively detailed causal explanation summarizing why scientists have concluded that human activities are responsible for climate change. Based on a preregistered survey experiment ( N = 3007), we assessed the effectiveness of detailed causal evidence versus traditional consensus messaging. We found that both treatments had noticeable effects on belief in human-caused climate change, with the causal evidence being slightly more effective, though we did not observe equivalent patterns for changes in attitudes toward climate policies. We conclude that conveying scientific information serves more as a remedy than a cure, reducing but not eliminating misperceptions about climate change and opposition to climate policies.
Chapter
Full-text available
This volume offers a variety of research perspectives on political journalism and its coverage. The contributions show different methodological approaches to the analysis. The patterns of political journalism are mainly outlined in the context of hybrid and digital media. One focus is on journalists in social media. Some contributions shed light on mediation constellations and provide information on changes in the relationship to politics and the audience. The volume is aimed at researchers, teachers and students of journalism and political communication. With contributions by Katarina Bader | Kristina Beckmann, M.A.| Roger Blum | Chung-Hong Chan | Hanne Detel | Maximilian Eder | Rainer Freudenthaler | Anna Gaul, M.A. | Michael Graßl | Jörg Haßler | Jakob Henke | Stefanie Holtrup, M.A. | Carolin Jansen | Andreas Jungherr | Niklas Kastor | Korbinian Klinghardt, M.A. | Maike Körner, M.A. | Katharina Ludwig, M.A. | Renée Lugschitz | Peter Maurer | Philipp Müller | Paula Nitschke | Christian Nuernbergk | Nicole Podschuweit | Katharina Pohl | Marlis Prinzing | Günther Rager | Lars Rinsdorf | Thomas Roessing | Elisabeth Schmidbauer, M.A. | Hannah Schmidt, M.A. | Markus Schug, M.A. | Nina Fabiola Schumacher, M.A. | Jonas Schützeneder | Helena Stehle | Michael Steinbrecher | Bernadette Uth | Hartmut Wessler | Claudia Wilhelm | Dominique Wirz | Anna-Katharina Wurst, M.A. | Florin Zai, M.A.
Article
Full-text available
Background During the COVID-19 pandemic, the rapid spread of misinformation on social media created significant public health challenges. Large language models (LLMs), pretrained on extensive textual data, have shown potential in detecting misinformation, but their performance can be influenced by factors such as prompt engineering (ie, modifying LLM requests to assess changes in output). One form of prompt engineering is role-playing, where, upon request, OpenAI’s ChatGPT imitates specific social roles or identities. This research examines how ChatGPT’s accuracy in detecting COVID-19–related misinformation is affected when it is assigned social identities in the request prompt. Understanding how LLMs respond to different identity cues can inform messaging campaigns, ensuring effective use in public health communications. Objective This study investigates the impact of role-playing prompts on ChatGPT’s accuracy in detecting misinformation. This study also assesses differences in performance when misinformation is explicitly stated versus implied, based on contextual knowledge, and examines the reasoning given by ChatGPT for classification decisions. Methods Overall, 36 real-world tweets about COVID-19 collected in September 2021 were categorized into misinformation, sentiment (opinions aligned vs unaligned with public health guidelines), corrections, and neutral reporting. ChatGPT was tested with prompts incorporating different combinations of multiple social identities (ie, political beliefs, education levels, locality, religiosity, and personality traits), resulting in 51,840 runs. Two control conditions were used to compare results: prompts with no identities and those including only political identity. Results The findings reveal that including social identities in prompts reduces average detection accuracy, with a notable drop from 68.1% (SD 41.2%; no identities) to 29.3% (SD 31.6%; all identities included). Prompts with only political identity resulted in the lowest accuracy (19.2%, SD 29.2%). ChatGPT was also able to distinguish between sentiments expressing opinions not aligned with public health guidelines from misinformation making declarative statements. There were no consistent differences in performance between explicit and implicit misinformation requiring contextual knowledge. While the findings show that the inclusion of identities decreased detection accuracy, it remains uncertain whether ChatGPT adopts views aligned with social identities: when assigned a conservative identity, ChatGPT identified misinformation with nearly the same accuracy as it did when assigned a liberal identity. While political identity was mentioned most frequently in ChatGPT’s explanations for its classification decisions, the rationales for classifications were inconsistent across study conditions, and contradictory explanations were provided in some instances. Conclusions These results indicate that ChatGPT’s ability to classify misinformation is negatively impacted when role-playing social identities, highlighting the complexity of integrating human biases and perspectives in LLMs. This points to the need for human oversight in the use of LLMs for misinformation detection. Further research is needed to understand how LLMs weigh social identities in prompt-based tasks and explore their application in different cultural contexts.
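For readers who want to see what such role-playing prompts look like in practice, here is a minimal sketch using the official OpenAI Python SDK (the prompt wording, the identity facets, the model name, and the example tweet are all placeholders rather than the study's materials, and an API key is assumed to be configured in the environment).

```python
from itertools import product
from openai import OpenAI  # official OpenAI Python SDK

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Illustrative identity facets loosely echoing the study's description.
POLITICS = ["liberal", "conservative"]
EDUCATION = ["a high-school education", "a graduate degree"]

def classify(tweet: str, politics: str, education: str, model: str = "gpt-4o-mini") -> str:
    # Role-playing system prompt: the model is asked to adopt the assigned identity
    # before judging the tweet.
    system = (f"You are a {politics} person with {education}. "
              "Decide whether the tweet below contains COVID-19 misinformation. "
              "Answer with exactly one word: MISINFORMATION or NOT.")
    resp = client.chat.completions.create(
        model=model,
        temperature=0,
        messages=[{"role": "system", "content": system},
                  {"role": "user", "content": tweet}],
    )
    return resp.choices[0].message.content.strip()

tweet = "Vaccinated people shed spike proteins that infect everyone around them."
for politics, education in product(POLITICS, EDUCATION):
    print(politics, "|", education, "->", classify(tweet, politics, education))
```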
Article
Recent research has highlighted the role of science education in reducing beliefs in science-related misinformation and stressed its potential positive impact on decision-making and behavior. This study implemented the Elaboration Likelihood Model to explore how individuals' abilities and motivation interact with the type of processing of scientific information in the peripheral vs. central persuasion routes. A representative sample of adults (N = 500) completed an online questionnaire during the second wave of COVID-19 (November 2020) that focused on two COVID-19-related dilemmas involving social distancing recommendations. First, we examined whether relying on misinformation was associated with participants' stances and the complexity of their arguments, and found that relying on misinformation was associated with the intention to reject social distancing recommendations and with the use of simple arguments. Second, we explored how motivation, operationalized as personal relevance, and abilities, operationalized as the highest level of science education, science knowledge, and strategies to identify misinformation, were associated with viewpoints and justifications. We found that personal relevance was associated with the intention to reject the recommendations but also with more complex arguments, suggesting that people did not intend to reject scientific knowledge but rather tended to contextualize it. Abilities were not associated with stance but were positively correlated with argument complexity. Finally, we examined whether motivation and abilities are associated with relying on scientific misinformation when making science-related decisions. Respondents with higher levels of science education and motivation relied less on misinformation, even if they did not necessarily intend to follow the health recommendations. This implies that motivation directs people to greater usage of the central processing route, resulting in more deliberative use of information. Science education, it appears, impacts the information evaluation and decision-making process more than its outcome.
Preprint
Full-text available
All over the world, political parties, politicians, and campaigns explore how Artificial Intelligence (AI) can help them win elections. However, the effects of these activities are unknown. We propose a framework for assessing AI's impact on elections by considering its application in various campaigning tasks. The electoral uses of AI vary widely, carrying different levels of concern and need for regulatory oversight. To account for this diversity, we group AI-enabled campaigning uses into three categories: campaign operations, voter outreach, and deception. Using this framework, we provide the first systematic evidence from a preregistered representative survey and two preregistered experiments (n=7,635) on how Americans think about AI in elections and the effects of specific campaigning choices. We report three main findings: (1) the public distinguishes between different AI uses in elections, viewing AI uses as predominantly negative but objecting most strongly to deceptive uses; (2) deceptive AI practices can have adverse effects on relevant attitudes and strengthen public support for stopping AI development; (3) although deceptive electoral uses of AI are intensely disliked, they do not result in substantial favorability penalties for the parties involved. Incentives to engage in deceptive practices are thus misaligned with the externalities those practices impose: we cannot count on public opinion to provide strong enough incentives for parties to forgo tactical advantages from AI-enabled deception. There is a need for regulatory oversight and systematic outside monitoring of electoral uses of AI. Still, regulators should account for the diversity of AI uses and not completely disincentivize their electoral use.
Article
Full-text available
Objectives: We explored the roles of personal values and value congruence (the alignment between individual and national values) in predicting public support for pandemic restrictions across 20 European countries. Study design: Cross-sectional study. Methods: We analyzed multinational European survey data (N = 34,356) using Schwartz's values theory and person-environment fit theory. Multilevel polynomial regression was employed to assess the linear and curvilinear effects of personal values on policy support. Multilevel Euclidean similarity analysis and response surface analysis were conducted to evaluate the impact of value congruence and delineate nuanced congruence patterns. Results: Findings revealed that extreme levels of security, conformity, stimulation, hedonism, and achievement values were associated with decreased policy support. Value congruence with security, conformity, and benevolence increased support, while congruence with stimulation, hedonism, and achievement reduced it. High congruence between personal and national social focus values significantly boosted policy support. Extreme mismatches in self-direction values amplified support. Societal power exceeding personal power also increased support. Matched levels of hedonism motivated greater support, while stimulation and achievement value (in)congruence showed little impact. Conclusions: We highlight the differential effects of personal values and value congruence on public attitudes toward pandemic restrictions. The findings underscore the importance of considering the interplay between individual and societal values when designing and implementing effective pandemic response strategies.
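As a rough illustration of the analytic strategy (not the authors' code), a multilevel polynomial specification of the kind described could be fit with statsmodels as sketched below; the data file and the column names support, personal, national, and country are hypothetical placeholders.

# Sketch of a multilevel polynomial (response-surface) model of policy support
# as a function of personal and national value scores, with a random intercept
# per country. File and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("values_survey.csv")  # placeholder long-format survey data

model = smf.mixedlm(
    "support ~ personal + national + I(personal**2) + personal:national + I(national**2)",
    data=df,
    groups=df["country"],
)
result = model.fit()
print(result.summary())

# Response-surface quantities can then be derived from the five polynomial
# coefficients, e.g. the slope along the congruence line (personal == national)
# is b1 + b2, and the slope along the incongruence line is b1 - b2.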
Article
Full-text available
One recent debate in political theory centers on the question of whether there is a distinctively political normativity. According to an influential view, there is a distinctive set of norms that applies specifically to political actions and decisions, which are not grounded in moral normativity. On one version of this non-moral view, political theory is grounded in epistemic normativity (and thus epistemic norms). Theorists identifying as “radical realists” insist that political theorists do not need any moral normativity (and thus moral norms), because epistemic normativity may provide action-guidance for political theory. In this article, we take our point of departure in a critical analysis of this epistemic version of the non-moral view, with the overall aim of analyzing the importance and limitation of epistemic norms in political theory. We argue that epistemic norms are necessary—since a political theory should not rely on empirical falsities—but not sufficient for a successful account in the political domain. Two claims are made: moral norms are essential in the process of political theorizing, both in the form of pre-epistemic norms and in the form of post-epistemic norms. More specifically, we contend, first, that we need moral norms to identify and justify which practices to study when conducting political theorizing, and second, that we need moral norms to tell us how to act in light of our investigation of warranted and unwarranted beliefs.
Article
Full-text available
In a series of very influential papers, Dan Kahan argues for “the identity protective cognition thesis”: the claim that politically motivated reasoning is a major factor explaining current levels of polarization over matters of fact, especially in the US. An important part of his case consists of experimental data supporting the claim that ideological polarization is more extreme amongst more numerate individuals. In this paper, we take a close look at how precisely this “numeracy effect” is supposed to come about. Working with Kahan’s own notion of motivated reasoning, we reconstruct the mechanism that according to him produces the effect. Surprisingly, it turns out to involve plenty of motivation to reason, but no motivated reasoning. This undermines the support he takes the numeracy effect to provide for the identity protective cognition hypothesis.
Chapter
The subject of this paper is how individuals' epistemic limitations and reasoning biases affect collective decisions and, in particular, the functioning of democracies. While the cognitive sciences have shown in detail how the imperfections of human rationality shape individual decisions and behaviors, the implications of these imperfections for collective choice and mass behavior have received far less attention. In particular, the link between these imperfections and the emergence of contemporary populisms has not yet been thoroughly explored. The paper addresses this by considering both fundamental dimensions of the political space: the cultural-identitarian dimension and the socio-economic one. Reflection on these points forces a revision of the picture of democracy as a regime whose collective decisions emerge from the interaction of independent individuals who are well aware of their values and interests and who pursue them rationally (in the sense of rational choice theory). This invites a certain skepticism toward the idealization of democracy as human rationality in pursuit of the common good, an idealization that provides cover for those who profit from the distortions and biases in the policy-making processes of actual democracies. A natural conclusion of the paper is that contemporary democracies are quite vulnerable to populist leaders and parties, which systematically try to exploit people's imperfect rationality to their advantage (using "easy arguments", emotions, stereotypes, and the like).
Article
Motivated reasoning posits that people distort how they process information in the direction of beliefs they find attractive. This paper creates a novel experimental design to identify motivated reasoning from Bayesian updating when people have preconceived beliefs. It analyzes how subjects assess the veracity of information sources that tell them the median of their belief distribution is too high or too low. Bayesians infer nothing about the source veracity, but motivated beliefs are evoked. Evidence supports politically motivated reasoning about immigration, income mobility, crime, racial discrimination, gender, climate change, and gun laws. Motivated reasoning helps explain belief biases, polarization, and overconfidence. (JEL C91, D12, D72, D83, D91, L82)
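The claim that Bayesians infer nothing about source veracity from this kind of message can be seen in a small worked example (an illustration constructed here with assumed numbers, not the paper's design or code):

# A source tells the respondent "your median is too high" (or "too low").
# Under the respondent's own prior, the true value lies below the median with
# probability 0.5, so a truthful source and an uninformative source are equally
# likely to send either message. Illustrative numbers only.

prior_truthful = 0.5     # prior probability that the source is truthful
p_msg_if_truthful = 0.5  # truthful source says "too high" iff value < median
p_msg_if_random = 0.5    # uninformative source picks a message at random

posterior_truthful = (p_msg_if_truthful * prior_truthful) / (
    p_msg_if_truthful * prior_truthful + p_msg_if_random * (1 - prior_truthful)
)
print(posterior_truthful)  # 0.5 -- unchanged from the prior

Because the likelihood ratio equals one, a Bayesian's assessment of the source is unchanged by the message; any systematic tendency to rate belief-congruent sources as more truthful therefore identifies motivated reasoning rather than Bayesian updating.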
Article
Full-text available
Information ignorance refers to the act of deliberately avoiding, neglecting, or distorting information to uphold a positive self-image and protect our identity-based beliefs. We apply this framework to household finance and develop a concise 12-item questionnaire measuring individuals' receptiveness to financial information, or the lack thereof: the Financial Homo Ignorans (FHI) Scale. We conduct two studies with samples from the general population in Sweden (total N=2508) and show that the FHI scale has high reliability and is distinct from other commonly used individual-difference measures in behavioral finance. We show that individual heterogeneity as assessed by the FHI scale explains substantial variation in financial behaviors and financial well-being, also when controlling for demographics and financial literacy. These results demonstrate the utility of the FHI scale as a valuable instrument for researchers and practitioners in comprehending and addressing the challenges posed by the omnipresence of financial information in today's world.
Article
Full-text available
This interdisciplinary study, coupling philosophy of law with empirical cognitive science, presents preliminary insight into the role of emotion in criminalization decisions, for both laypeople and legal professionals. While the traditional approach in criminalization theory emphasizes the role of deliberative and reasoned argumentation, this study hypothesizes that affective and emotional processes (i.e., disgust, as indexed by a dispositional proneness to experience disgust) are also associated with the decision to criminalize behavior, in particular virtual child pornography. To test this empirically, an online study (N = 1402) was conducted in which laypeople and legal professionals provided criminalization ratings on four vignettes adapted from criminal law, in which harmfulness and disgustingness were varied orthogonally. They also completed the 25-item Disgust Scale-Revised (DS-R-NL). In line with the hypothesis, (a) the virtual child pornography vignette (characterized as low in harm, high in disgust) was criminalized more readily than the financial harm vignette (high in harm, low in disgust), and (b) disgust sensitivity was associated with the decision to criminalize behavior, especially virtual child pornography, among both lay participants and legal professionals. These findings suggest that emotion can be relevant in shaping criminalization decisions. Exploring this theoretically, the results could serve as a stepping stone towards a new perspective on criminalization, including a “criminalization bias”. Study limitations and implications for legal theory and policymaking are discussed.
Article
Evidence indicates that when people forecast potential social risks, they are guided not only by facts but often also by motivated reasoning. Here I apply a Bayesian decision framework to interpret the role of motivated reasoning during forecasting and assess some of the ensuing predictions. In 2 online studies, for each of a set of potential risky social events (e.g., economic crisis, rise of income inequality, and increase in violent crime), participants expressed judgments about the probability that the event will occur, how negative occurrence of the event would be, and whether society is able to intervene in the event. Supporting predictions of the Bayesian decision model, the analyses revealed that participants who deemed the events as more probable also assessed occurrence of the events as more negative and believed society to be more capable of intervening in the events. Supporting the notion that a social threat is appraised as more probable when an intervention is deemed to be possible, these findings are compatible with a form of intervention bias. These observations are relevant for campaigns aimed at informing the population about potential social risks such as climate change, economic dislocations, and pandemics.
Article
Full-text available
Crowdsourcing has had a dramatic impact on the speed and scale at which scientific research can be conducted. Clinical scientists have particularly benefited from readily available research study participants and streamlined recruiting and payment systems afforded by Amazon Mechanical Turk (MTurk), a popular labor market for crowdsourcing workers. MTurk has been used in this capacity for more than five years. The popularity and novelty of the platform have spurred numerous methodological investigations, making it the most studied nonprobability sample available to researchers. This article summarizes what is known about MTurk sample composition and data quality with an emphasis on findings relevant to clinical psychological research. It then addresses methodological issues with using MTurk (many of which are common to other nonprobability samples but unfamiliar to clinical science researchers) and suggests concrete steps to avoid these issues or minimize their impact.
Article
Full-text available
The cultural cognition thesis posits that individuals rely extensively on cultural meanings in forming perceptions of risk. The logic of the cultural cognition thesis suggests that a two-channel science communication strategy, combining information content (Channel 1) with cultural meanings (Channel 2), could promote open-minded assessment of information across diverse communities. We test this kind of communication strategy in a two-nation (United States, n = 1,500; England, n = 1,500) study, in which scientific information content on climate change was held constant while the cultural meaning of that information was experimentally manipulated. We found that cultural polarization over the validity of climate change science is offset by making citizens aware of the potential contribution of geoengineering as a supplement to restriction of CO2 emissions. We also tested the hypothesis, derived from a competing model of science communication, that exposure to information on geoengineering would lead citizens to discount climate change risks generally. Contrary to this hypothesis, we found that subjects exposed to information about geoengineering were slightly more concerned about climate change risks than those assigned to a control condition.
Article
Full-text available
An important component of political polarization in the United States is the degree to which ordinary people perceive political polarization. We used over 30 years of national survey data from the American National Election Study to examine how the public perceives political polarization between the Democratic and Republican parties and between Democratic and Republican presidential candidates. People in the United States consistently overestimate polarization between the attitudes of Democrats and Republicans. People who perceive the greatest political polarization are most likely to report having been politically active, including voting, trying to sway others' political beliefs, and making campaign contributions. We present a 3-factor framework to understand ordinary people's perceptions of political polarization. We suggest that people perceive greater political polarization when they (a) estimate the attitudes of those categorized as being in the "opposing group"; (b) identify strongly as either Democrat or Republican; and (c) hold relatively extreme partisan attitudes, particularly when those partisan attitudes align with their own partisan political identity. These patterns of polarization perception occur among both Democrats and Republicans.
Article
Full-text available
Some claim that recent advances in neuroscience will revolutionize the way we think about human nature and legal culpability. Empirical support for this proposition is mixed. Two highly-cited empirical studies found that irrelevant neuroscientific explanations and neuroimages were highly persuasive to laypersons. However, attempts to replicate these effects have largely been unsuccessful. Two separate experiments tested the hypothesis that neuroscience is susceptible to motivated reasoning, which refers to the tendency to selectively credit or discredit information in a manner that reinforces preexisting beliefs. Participants read a newspaper article about a cutting-edge neuroscience study. Consistent with the hypothesis, participants deemed the hypothetical study sound and the neuroscience persuasive when the outcome of the study was congruent with their prior beliefs, but gave the identical study and neuroscience negative evaluations when it frustrated their beliefs. Neuroscience, it appears, is subject to the same sort of cognitive dynamics as other types of scientific evidence. These findings qualify claims that neuroscience will play a qualitatively different role in the way in which it shapes people's beliefs and informs issues of social policy.
Article
Full-text available
Extending existing scholarship on the white male effect in risk perception, we examine whether conservative white males (CWMs) are less worried about the risks of environmental problems than are other adults in the US general public. We draw theoretical and analytical guidance from the identity-protective cognition thesis explaining the white male effect and from recent political psychology scholarship documenting the heightened system-justification tendencies of political conservatives. We utilize public opinion data from nine Gallup surveys between 2001 and 2010, focusing on both a single-item indicator and a composite measure of worry about environmental problems. We find that CWMs indeed have significantly lower worry about environmental problems than do other Americans. Furthermore, the results of our multivariate regression models reveal that this CWM effect remains significant when controlling for the direct effects of political ideology, race, and gender and the effects of nine social, demographic, and temporal control variables – as well as the effect of individuals' generalized (nonenvironmental) risk perceptions. We conclude that the white male effect is due largely to CWMs, and that the latter's low level of concern with environmental risks is likely driven by their social commitment to prevent new environmental regulations and repeal existing ones.
Article
Full-text available
Cooperation is central to human societies. Yet relatively little is known about the cognitive underpinnings of cooperative decision making. Does cooperation require deliberate self-restraint? Or is spontaneous prosociality reined in by calculating self-interest? Here we present a theory of why (and for whom) intuition favors cooperation: cooperation is typically advantageous in everyday life, leading to the formation of generalized cooperative intuitions. Deliberation, by contrast, adjusts behaviour towards the optimum for a given situation. Thus, in one-shot anonymous interactions where selfishness is optimal, intuitive responses tend to be more cooperative than deliberative responses. We test this 'social heuristics hypothesis' by aggregating across every cooperation experiment using time pressure that we conducted over a 2-year period (15 studies and 6,910 decisions), as well as performing a novel time pressure experiment. Doing so demonstrates a positive average effect of time pressure on cooperation. We also find substantial variation in this effect, and show that this variation is partly explained by previous experience with one-shot lab experiments.
Article
Full-text available
This study tested the effectiveness of messages designed to reduce vaccine misperceptions and increase vaccination rates for measles-mumps-rubella (MMR). A Web-based, nationally representative, 2-wave survey experiment was conducted in June-July 2011 with 1759 parents age 18 years and older residing in the United States who have children in their household age 17 years or younger. Parents were randomly assigned to receive 1 of 4 interventions: (1) information explaining the lack of evidence that MMR causes autism from the Centers for Disease Control and Prevention; (2) textual information about the dangers of the diseases prevented by MMR from the Vaccine Information Statement; (3) images of children who have diseases prevented by the MMR vaccine; (4) a dramatic narrative about an infant who almost died of measles from a Centers for Disease Control and Prevention fact sheet; or to a control group. None of the interventions increased parental intent to vaccinate a future child. Refuting claims of an MMR/autism link successfully reduced misperceptions that vaccines cause autism but nonetheless decreased intent to vaccinate among parents who had the least favorable vaccine attitudes. In addition, images of sick children increased expressed belief in a vaccine/autism link, and a dramatic narrative about an infant in danger increased self-reported belief in serious vaccine side effects. Current public health communications about vaccines may not be effective. For some parents, they may actually increase misperceptions or reduce vaccination intention. Attempts to increase concerns about communicable diseases or correct false claims about vaccines may be especially likely to be counterproductive. More study of pro-vaccine messaging is needed.
Article
Full-text available
A long acknowledged but seldom addressed problem with political communication experiments concerns the use of captive participants. Study participants rarely have the opportunity to choose information themselves, instead receiving whatever information the experimenter provides. We relax this assumption in the context of an over-time framing experiment focused on opinions about health care policy. Our results dramatically deviate from extant understandings of over-time communication effects. Allowing individuals to choose information themselves—a common situation on many political issues—leads to the preeminence of early frames and the rejection of later frames. Instead of opinion decay, we find dogmatic adherence to opinions formed in response to the first frame to which participants were exposed (i.e., staunch opinion stability). The effects match those that occur when early frames are repeated multiple times. The results suggest that opinion stability may often reflect biased information seeking. Moreover, the findings have implications for a range of topics including the micro–macro disconnect in studies of public opinion, political polarization, normative evaluations of public opinion, the role of inequality considerations in the debate about health care, and, perhaps most importantly, the design of experimental studies of public opinion.
Article
Full-text available
Although participants with psychiatric symptoms, specific risk factors, or rare demographic characteristics can be difficult to identify and recruit for participation in research, participants with these characteristics are crucial for research in the social, behavioral, and clinical sciences. Online research in general and crowdsourcing software in particular may offer a solution. However, no research to date has examined the utility of crowdsourcing software for conducting research on psychopathology. In the current study, we examined the prevalence of several psychiatric disorders and related problems, as well as the reliability and validity of participant reports on these domains, among users of Amazon’s Mechanical Turk. Findings suggest that crowdsourcing software offers several advantages for clinical research while providing insight into potential problems, such as misrepresentation, that researchers should address when collecting data online.
Article
Full-text available
People who hold strong opinions on complex social issues are likely to examine relevant empirical evidence in a biased manner. They are apt to accept "confirming" evidence at face value while subjecting "disconfirming" evidence to critical evaluation, and, as a result, draw undue support for their initial positions from mixed or random empirical findings. Thus, the result of exposing contending factions in a social dispute to an identical body of relevant empirical evidence may be not a narrowing of disagreement but rather an increase in polarization. To test these assumptions, 48 undergraduates supporting and opposing capital punishment were exposed to 2 purported studies, one seemingly confirming and one seemingly disconfirming their existing beliefs about the deterrent efficacy of the death penalty. As predicted, both proponents and opponents of capital punishment rated those results and procedures that confirmed their own beliefs to be the more convincing and probative ones, and they reported corresponding shifts in their beliefs as the various results and procedures were presented. The net effect of such evaluations and opinion shifts was the postulated increase in attitude polarization.
Article
Full-text available
Libertarians are an increasingly prominent ideological group in U.S. politics, yet they have been largely unstudied. Across 16 measures in a large web-based sample that included 11,994 self-identified libertarians, we sought to understand the moral and psychological characteristics of self-described libertarians. Based on an intuitionist view of moral judgment, we focused on the underlying affective and cognitive dispositions that accompany this unique worldview. Compared to self-identified liberals and conservatives, libertarians showed 1) stronger endorsement of individual liberty as their foremost guiding principle, and weaker endorsement of all other moral principles; 2) a relatively cerebral as opposed to emotional cognitive style; and 3) lower interdependence and social relatedness. As predicted by intuitionist theories concerning the origins of moral reasoning, libertarian values showed convergent relationships with libertarian emotional dispositions and social preferences. Our findings add to a growing recognition of the role of personality differences in the organization of political attitudes.
Article
Full-text available
Five studies demonstrated that people selectively use general moral principles to rationalize preferred moral conclusions. In Studies 1a and 1b, college students and community respondents were presented with variations on a traditional moral scenario that asked whether it was permissible to sacrifice one innocent man in order to save a greater number of people. Political liberals, but not relatively more conservative participants, were more likely to endorse consequentialism when the victim had a stereotypically White American name than when the victim had a stereotypically Black American name. Study 2 found evidence suggesting participants believe that the moral principles they are endorsing are general in nature: when presented sequentially with both versions of the scenario, liberals again showed a bias in their judgments to the initial scenario, but demonstrated consistency thereafter. Study 3 found conservatives were more likely to endorse the unintended killing of innocent civilians when Iraqi civilians were killed than when American civilians were killed, while liberals showed no significant effect. In Study 4, participants primed with patriotism were more likely to endorse consequentialism when Iraqi civilians were killed by American forces than were participants primed with multiculturalism. However, this was not the case when American civilians were killed by Iraqi forces. Implications for the role of reason in moral judgment are discussed.
Article
Full-text available
Why, during a decision between new alternatives, do people bias their evaluations of information to support a tentatively preferred option? The authors test the following 3 decision process goals as the potential drivers of such distortion of information: (a) to reduce the effort of evaluating new information, (b) to increase the separation between alternatives, and (c) to achieve consistency between old and new units of information. Two methods, the nonconscious priming of each goal and assessing the ambient activation levels of multiple goals, reveal that the goal of consistency drives information distortion. Results suggest the potential value of combining these methods in studying the dynamics of multiple, simultaneously active goals.
Article
Full-text available
Four studies demonstrated both the power of group influence in persuasion and people's blindness to it. Even under conditions of effortful processing, attitudes toward a social policy depended almost exclusively upon the stated position of one's political party. This effect overwhelmed the impact of both the policy's objective content and participants' ideological beliefs (Studies 1-3), and it was driven by a shift in the assumed factual qualities of the policy and in its perceived moral connotations (Study 4). Nevertheless, participants denied having been influenced by their political group, although they believed that other individuals, especially their ideological adversaries, would be so influenced. The underappreciated role of social identity in persuasion is discussed.
Article
This commentary uses the dynamic of identity-protective cognition to pose a friendly challenge to Jussim (2012). Like other forms of information processing, this one is too readily characterized as a bias. It is no mistake, however, to view identity-protective cognition as generating inaccurate perceptions. The “bounded rationality” paradigm incorrectly equates rationality with forming accurate beliefs. But so does Jussim's critique.
Article
This book attempts to resolve the Great Rationality Debate in cognitive science: the debate about how much irrationality to ascribe to human cognition. It shows how the insights of dual-process theory and evolutionary psychology can be combined to explain why humans are sometimes irrational even though they possess remarkably adaptive cognitive machinery. The book argues that to characterize fully differences in rational thinking, we need to replace dual-process theories with tripartite models of cognition. Using a unique individual differences approach, it shows that the traditional second system (System 2) of dual-process theory must be further divided into the reflective mind and the algorithmic mind. Distinguishing them gives a better appreciation of the significant differences in their key functions: the key function of the reflective mind is to detect the need to interrupt autonomous processing and to begin simulation activities, whereas that of the algorithmic mind is to sustain the processing of decoupled secondary representations in cognitive simulation. The book then uses this algorithmic/reflective distinction to develop a taxonomy of cognitive errors made on tasks in the heuristics and biases literature. It presents the empirical data to show that the tendency to make these thinking errors is not highly related to intelligence. Using a tripartite model of cognition, the book shows how, when both are properly defined, rationality is a more encompassing construct than intelligence, and that IQ tests fail to assess individual differences in rational thought. It then goes on to discuss the types of thinking processes that would be measured if rational thinking were to be assessed as IQ has been.
Article
This article examines the science-of-science-communication measurement problem. In its simplest form, the problem reflects the use of externally invalid measures of the dynamics that generate cultural conflict over risk and other policy-relevant facts. But at a more fundamental level, the science-of-science-communication measurement problem inheres in the phenomena being measured themselves. The “beliefs” individuals form about a societal risk such as climate change are not of a piece; rather they reflect the distinct clusters of inferences that individuals draw as they engage information for two distinct ends: to gain access to the collective knowledge furnished by science and to enjoy the sense of identity enabled by membership in a community defined by particular cultural commitments. The article shows how appropriately designed “science comprehension” tests—one general and one specific to climate change—can be used to measure individuals’ reasoning proficiency as collective-knowledge acquirers independently of their reasoning proficiency as cultural-identity protectors. Doing so reveals that there is in fact little disagreement among culturally diverse citizens on what science knows about climate change. The source of the climate-change controversy and like disputes over societal risks is the contamination of the science-communication environment with forms of cultural status competition that make it impossible for diverse citizens to express their reason as both collective-knowledge acquirers and cultural-identity protectors at the same time.
Article
Numerous factors shape citizens' beliefs about global warming, but there is very little research that compares the views of the public with key actors in the policymaking process. We analyze data from simultaneous and parallel surveys of (1) the U.S. public, (2) scientists who actively publish research on energy technologies in the United States, and (3) congressional policy advisors and find that beliefs about global warming vary markedly among them. Scientists and policy advisors are more likely than the public to express a belief in the existence and anthropogenic nature of global warming. We also find ideological polarization about global warming in all three groups, although scientists are less polarized than the public and policy advisors over whether global warming is actually occurring. Alarmingly, there is evidence that the ideological divide about global warming gets significantly larger according to respondents' knowledge about politics, energy, and science.
Article
Taber and Lodge offer a powerful case for the prevalence of directional reasoning that aims not at truth, but at the vindication of prior opinions. Taber and Lodge's results have far-reaching implications for empirical scholarship and normative theory; indeed, the very citizens often seen as performing “best” on tests of political knowledge, sophistication, and ideological constraint appear to be the ones who are the most susceptible to directional reasoning. However, Taber and Lodge's study, while internally beyond reproach, may substantially overstate the presence of motivated reasoning in political settings. That said, focusing on the accuracy motivation has the potential to bring together two models of opinion formation that many treat as competitors, and to offer a basis for assessing citizen competence.
Article
Experimentation is an increasingly popular method among political scientists. While experiments are highly advantageous for creating internally valid conclusions, they are often criticized for being low on external validity. Critical to questions of external validity are the types of subjects who participate in a given experiment, with scholars typically arguing that samples of adults are more externally valid than student samples. Despite the vociferousness of such arguments, these claims have received little empirical treatment. In this paper we empirically test for key differences between student and adult samples by conducting four parallel experiments on each of the three samples commonly used by political scientists. We find that our student sample and our diverse, national adult sample behave consistently and in line with theoretical predictions once relevant moderators are taken into account. The same is not true for our adult convenience sample.
Article
Political parties play a vital role in democracies by linking citizens to their representatives. Nonetheless, a longstanding concern is that partisan identification slants decision-making. Citizens may support (oppose) policies that they would otherwise oppose (support) in the absence of an endorsement from a political party—this is due in large part to what is called partisan motivated reasoning where individuals interpret information through the lens of their party commitment. We explore partisan motivated reasoning in a survey experiment focusing on support for an energy law. We identify two politically relevant factors that condition partisan motivated reasoning: (1) an explicit inducement to form an “accurate” opinion, and (2) cross-partisan, but not consensus, bipartisan support for the law. We further provide evidence of how partisan motivated reasoning works psychologically and affects opinion strength. We conclude by discussing the implications of our results for understanding opinion formation and the overall quality of citizens’ opinions.
Book
Human beings are consummate rationalizers, but rarely are we rational. Controlled deliberation is a bobbing cork on the currents of unconscious information processing, but we have always the illusion of standing at the helm. This book presents a theory of the architecture and mechanisms that determine when, how, and why unconscious thoughts, the coloration of feelings, the plausibility of goals, and the force of behavioral dispositions change moment-by-moment in response to “priming” events that spontaneously link changes in the environment to changes in beliefs, attitudes, and behavior. Far from the consciously directed decision-making assumed by conventional models, political behavior is the result of innumerable unnoticed forces, with conscious deliberation little more than a rationalization of the outputs of automatic feelings and inclinations.
Article
Do people assimilate new information in an efficient and unbiased manner—that is, do they update prior beliefs in accordance with Bayes' rule? Or are they selective in the way that they gather and absorb new information? Although many classic studies in political science and psychology contend that people resist discordant information, more recent research has tended to call the selective perception hypothesis into question. We synthesize the literatures on biased assimilation and belief polarization using a formal model that encompasses both Bayesian and biased learning. The analysis reveals (a) the conditions under which these phenomena may be consistent with Bayesian learning, (b) the methodological inadequacy of certain research designs that fail to control for preferences or prior information, and (c) the limited support that exists for the more extreme variants of the selective perception hypothesis.
Article
How is public opinion towards nanotechnology likely to evolve? The ‘familiarity hypothesis’ holds that support for nanotechnology will likely grow as awareness of it expands. The basis of this conjecture is opinion polling, which finds that few members of the public claim to know much about nanotechnology, but that those who say they do are substantially more likely to believe its benefits outweigh its risks [1-4]. Some researchers, however, have avoided endorsing the familiarity hypothesis, stressing that cognitive heuristics and biases could create anxiety as the public learns more about this novel science [5,6]. We conducted an experimental study aimed at determining how members of the public would react to balanced information about nanotechnology risks and benefits. Finding no support for the familiarity hypothesis, the study instead yielded strong evidence that public attitudes are likely to be shaped by psychological dynamics associated with cultural cognition.
Article
Bayes’ Theorem is increasingly used as a benchmark against which to judge the quality of citizens’ thinking, but some of its implications are not well understood. A common claim is that Bayesians must agree more as they learn and that the failure of partisans to do the same is evidence of bias in their responses to new information. Formal inspection of Bayesian learning models shows that this is a misunderstanding. Learning need not create agreement among Bayesians. Disagreement among partisans is never clear evidence of bias. And although most partisans are not Bayesians, their reactions to new information are surprisingly consistent with the ideal of Bayesian rationality.
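A toy example (constructed here, not taken from the article) shows how two Bayesians who see the same evidence can nonetheless move further apart when they hold different beliefs about how the evidence was generated:

# Two Bayesians observe the same report that hypothesis H is true, but they
# differ in how reliable they think the reporting source is. Updating by
# Bayes' rule then widens, rather than narrows, the gap between their beliefs.
# Illustrative numbers only.

def posterior(prior_h, p_report_if_h, p_report_if_not_h):
    return (p_report_if_h * prior_h) / (
        p_report_if_h * prior_h + p_report_if_not_h * (1 - prior_h)
    )

# Agent A trusts the source: the report is far likelier if H is true.
a_before, a_after = 0.60, posterior(0.60, 0.9, 0.1)   # 0.60 -> ~0.93
# Agent B distrusts the source: the report is slightly likelier if H is false.
b_before, b_after = 0.40, posterior(0.40, 0.3, 0.4)   # 0.40 -> ~0.33

print(abs(a_after - b_after) > abs(a_before - b_before))  # True: they diverge

Both agents update exactly as Bayes' rule requires; the divergence comes from their differing likelihood models, which is why disagreement after shared evidence is not, by itself, evidence of bias.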
Article
“Cultural cognition” refers to the unconscious influence of individuals’ group commitments on their perceptions of legally consequential facts. We conducted an experiment to assess the impact of cultural cognition on perceptions of facts relevant to distinguishing constitutionally protected “speech” from unprotected “conduct.” Study subjects viewed a video of a political demonstration. Half the subjects believed that the demonstrators were protesting abortion outside of an abortion clinic, and the other half that the demonstrators were protesting the military’s “don’t ask, don’t tell” policy outside a campus recruitment facility. Subjects of opposing cultural outlooks who were assigned to the same experimental condition (and who thus had the same belief about the nature of the protest) disagreed sharply on key “facts” – including whether the protesters obstructed and threatened pedestrians. Subjects also disagreed sharply with those who shared their cultural outlooks but who were assigned to the opposing experimental condition (and hence had a different belief about the nature of the protest). These results supported the study hypotheses about how cultural cognition would affect perceptions pertinent to the “speech”-“conduct” distinction. We discuss the significance of the results for constitutional law and liberal principles of self-governance generally.
Chapter
This chapter provides an overview of self-affirmation theory. Self-affirmation theory asserts that the overall goal of the self-system is to protect an image of its self-integrity, of its moral and adaptive adequacy. When this image of self-integrity is threatened, people respond in such a way as to restore self-worth. The chapter illustrates how self-affirmation affects not only people's cognitive responses to threatening information and events, but also their physiological adaptations and actual behavior. It examines the ways in which self-affirmations reduce threats to the self at the collective level, such as when people confront threatening information about their groups. It reviews factors that qualify or limit the effectiveness of self-affirmations, including situations where affirmations backfire and lead to greater defensiveness and discrimination. The chapter discusses the connection of self-affirmation theory to other motivational theories of self-defense and reviews relevant theoretical and empirical advances. It concludes with a discussion of the implications of self-affirmation theory for interpersonal relationships and coping.
Article
How do individuals form opinions about new technologies? What role does factual information play? We address these questions by incorporating 2 dynamics, typically ignored in extant work: information competition and over-time processes. We present results from experiments on 2 technologies: carbon nanotubes and genetically modified foods. We find that factual information is of limited utility—it does not have a greater impact than other background factors (e.g., values), it adds little power to newly provided arguments/frames (e.g., compared to arguments lacking facts), and it is perceived in biased ways once individuals form clear initial opinions (e.g., motivated reasoning). Our results provide insight into how individuals form opinions over time, and bring together literatures on information, framing, and motivated reasoning.
Article
Despite extensive evidence of climate change and environmental destruction, polls continue to reveal widespread denial and resistance to helping the environment. It is posited here that these responses are linked to the motivational tendency to defend and justify the societal status quo in the face of the threat posed by environmental problems. The present research finds that system justification tendencies are associated with greater denial of environmental realities and less commitment to pro-environmental action. Moreover, the effects of political conservatism, national identification, and gender on denial of environmental problems are explained by variability in system justification tendencies. However, this research finds that it is possible to eliminate the negative effect of system justification on environmentalism by encouraging people to regard pro-environmental change as patriotic and consistent with protecting the status quo (i.e., as a case of "system-sanctioned change"). Theoretical and practical implications of these findings are discussed.
Article
When making judgments, one may encounter not only justifiable factors, i.e., attributes which the judge thinks that he/she should take into consideration, but also unjustifiable factors, i.e., attributes which the judge wants to take into consideration but knows he/she should not. It is proposed that the influence of an unjustifiable factor on one's judgment depends on the presence of elasticity (ambiguity) in justifiable factors; the influence will be greater if there is elasticity than if there is not. Two studies involving different contexts demonstrated the proposed elasticity effect and suggested that the effect could be a result of a self-oriented justification process. Implications of this research for decisions involving a should-vs-want conflict are discussed.
Article
Psychological research indicates that people have a cognitive bias that leads them to misinterpret new information as supporting previously held hypotheses. We show in a simple model that such confirmatory bias induces overconfidence: given any probabilistic assessment by an agent that one of two hypotheses is true, the appropriate beliefs would deem it less likely to be true. Indeed, the hypothesis that the agent believes in may be more likely to be wrong than right. We also show that the agent may come to believe with near certainty in a false hypothesis despite receiving an infinite amount of information.
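A simplified simulation in the spirit of such a model (a sketch under assumed parameter values and a stylized updating rule, not the authors' formal specification) illustrates how misreading disconfirming signals produces overconfidence:

# An agent receives noisy signals about which of two hypotheses (A or B) is
# true, but with some probability misreads any signal that contradicts the
# currently favoured hypothesis as supporting it. The agent then applies
# Bayes' rule to the perceived signals. Parameter values are illustrative.
import random

random.seed(1)

P_SIGNAL_CORRECT = 0.6   # each signal matches the true hypothesis (A) w.p. 0.6
P_MISREAD = 0.3          # prob. of misreading a contradicting signal
N_SIGNALS = 50

def posterior_a(n_a, n_b):
    """Posterior probability of A given signal counts, from a 50/50 prior."""
    lr = (P_SIGNAL_CORRECT / (1 - P_SIGNAL_CORRECT)) ** (n_a - n_b)
    return lr / (1 + lr)

true_a = true_b = perc_a = perc_b = 0
for _ in range(N_SIGNALS):
    signal = "A" if random.random() < P_SIGNAL_CORRECT else "B"
    true_a += signal == "A"
    true_b += signal == "B"
    favoured = "A" if perc_a >= perc_b else "B"
    perceived = signal
    if signal != favoured and random.random() < P_MISREAD:
        perceived = favoured              # confirmatory misreading
    perc_a += perceived == "A"
    perc_b += perceived == "B"

print("belief warranted by the actual signals: ", round(posterior_a(true_a, true_b), 3))
print("belief computed from perceived signals: ", round(posterior_a(perc_a, perc_b), 3))
# The second figure is typically pushed much closer to 0 or 1 than the first:
# the biased agent is more confident than the true signal record justifies.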