Chapter

The Politically Motivated Reasoning Paradigm, Part 2: Unanswered Questions: An Interdisciplinary, Searchable, and Linkable Resource


Abstract

This is the second in a pair of essays on politically motivated reasoning. The first presented a conceptual model of this dynamic: the “Politically Motivated Reasoning Paradigm” (PMRP). This essay uses PMRP to highlight a set of unsettled issues, including the rationality of politically motivated reasoning; its association with ideological conservatism; the power of monetary incentives to neutralize it; and its interaction with expert judgment.


... Fact polarization is "intense, persistent partisan contestation over facts that admit of scientific evidence" (Kahan 2016b, p. 1). ...
... The best test of politically motivated reasoning is whether study subjects alter the weight they assign the same piece of evidence in response to an experimental manipulation of the perceived relationship between that evidence and positions that predominate in their cultural group. This experimental setup can be called the PMRP design (Kahan 2016b, p. 2). ...
... Under these conditions, it is a perfectly rational thing for one to attend to information in a manner that promotes beliefs that express one's identity correctly, regardless of whether such beliefs are factually correct (…). And if one is really good at conscious, effortful information processing, then it pays to apply that reasoning proficiency to give information exactly this effect (Kahan 2016b, p. 4). ...
Article
Full-text available
In a series of very influential papers, Dan Kahan argues for “the identity protective cognition thesis”: the claim that politically motivated reasoning is a major factor explaining current levels of polarization over matters of fact, especially in the US. An important part of his case consists of experimental data supporting the claim that ideological polarization is more extreme amongst more numerate individuals. In this paper, we take a close look at how precisely this “numeracy effect” is supposed to come about. Working with Kahan’s own notion of motivated reasoning, we reconstruct the mechanism that according to him produces the effect. Surprisingly, it turns out to involve plenty of motivation to reason, but no motivated reasoning. This undermines the support he takes the numeracy effect to provide for the identity protective cognition hypothesis.
... In a narrow sense, knowledge resistance thus involves a form of irrational response to empirical evidence that is available to the individual. One key mechanism is motivated reasoning, where people, consciously but more often unconsciously, assess factual information not on the basis of the empirical evidence and its truth value but rather to reach some other goal, such as protecting one's social identity (Kahan, 2016a, 2016b; Kunda, 1990). One example might be someone who resists scientific evidence that human activities are the main cause of global warming because such evidence conflicts with the political or social group that s/he identifies with. ...
... Beyond exposure, confirmation biases also manifest themselves through how people process and interpret information they encounter. They may, for example, weigh evidence or facts that support a certain belief or attitude more heavily than information that runs counter to them, evaluate congruent information as stronger and more compelling than incongruent information, spend more time and cognitive effort counterarguing incongruent facts and arguments than scrutinizing congruent ones, and simply disregard facts and evidence that are attitude-incongruent (Hart et al., 2009; Kahan, 2016a, 2016b; Kunda, 1990; Lilienfeld et al., 2009; Lodge & Taber, 2013; Nickerson, 1998). Such biased processing of information and motivated reasoning are particularly likely when people are motivated by directional goals rather than accuracy goals. ...
... Both the affective and the cognitive reactions are also likely to be more intense than when someone merely holds a different attitude. Thereby, biased processing and factual disputes might contribute to increasing factual belief polarization (Kahan, 2016a, 2016b; Lodge & Taber, 2013; Lord et al., 1979; Rekker, 2021; Taber & Lodge, 2006). ...
... In a narrow sense, knowledge resistance thus involves a form of irrational response to empirical evidence that is available to the individual. One key mechanism is motivated reasoning, where people, consciously but more often unconsciously, assess factual information not on the basis of the empirical evidence and its truth value but rather to reach some other goal, such as protecting one's social identity (Kahan, 2016a, 2016b). One example might be someone who resists scientific evidence that human activities are the main cause of global warming because such evidence conflicts with the political or social group that s/he identifies with. ...
... Beyond exposure, confirmation biases also manifest themselves through how people process and interpret information they encounter. They may, for example, weigh evidence or facts that support a certain belief or attitude more heavily than information that runs counter to them, evaluate congruent information as stronger and more compelling than incongruent information, spend more time and cognitive effort counterarguing incongruent facts and arguments than scrutinizing congruent ones, and simply disregard facts and evidence that are attitude-incongruent (Hart et al., 2009; Kahan, 2016a, 2016b; Lilienfeld et al., 2009). Such biased processing of information and motivated reasoning are particularly likely when people are motivated by directional goals rather than accuracy goals. ...
... Both the affective and the cognitive reactions are also likely to be more intense than when someone merely holds a different attitude. Thereby, biased processing and factual disputes might contribute to increasing factual belief polarization (Kahan, 2016a, 2016b). This holds true not least in political contexts where "[d]efining what is true and false has become a common political strategy, replacing debates on a mutually agreed set of facts" (Vosoughi et al., 2018). ...
... Consistent with all of these psychological differences, research suggests that in the United States, at least, rumors, misinformation, and conspiracy theories spread more rapidly and extensively in the social networks of conservatives, as compared with liberals (Benkler, Faris, Roberts, & Zuckerman, 2017; Guess, Nagler, & Tucker, 2019; Guess, Nyhan, & Reifler, 2020; Jost, van der Linden, Panagopoulos, & Hardin, 2018). This was observed, for instance, in the early days of the SARS-2/COVID-19 pandemic: Right-wing news outlets such as Fox News and Breitbart were much more likely than mainstream news outlets to spread misinformation, including conspiracy theories about the virus, and citizens who consumed more right-wing news held more false beliefs about the pandemic (Motta, Stecula, & Farhart, 2020). Thus, although many perspectives in social science would suggest that motivated reasoning, biased information processing, and conspiratorial thinking should be equally prevalent among leftists and rightists (Ditto et al., 2019; Kahan, 2016; McClosky & Chong, 1985; Moore et al., 2014; Oliver & Wood, 2014; van Prooijen et al., 2015; Sunstein & Vermeule, 2009; Uscinski, Klofstad, & Atkinson, 2016), there are ample empirical reasons to question this assumption (see also Baron & Jost, 2019). The fact that "conspiracy theories are not just for conservatives" (Moore et al., 2014) does not mean that conspiracies are endorsed at the same scale or level of intensity by liberals and conservatives, nor that conspiracy theories on the left and right are equally harmful, fallacious, or driven by paranoid ideation. ...
... Although it may be reasonable to suggest that liberals and conservatives may both be susceptible to conspiratorial forms of thinking under certain circumstances (Moore et al., 2014), the results of our investigation point to meaningful psychological differences, at least in the context of American politics. Although previous accounts have suggested that conspiratorial thinking should be equally prevalent among ideological extremists on the left and right (e.g., Kahan, 2016; McClosky & Chong, 1985; Oliver & Wood, 2014; van Prooijen et al., 2015), with some concluding that "there is no marked ideological asymmetry in conspiracy belief" (Sutton & Douglas, 2020, p. 1), this is not what we find. Consistent with Hofstadter's (1964) historical observations about "the paranoid style in American politics" and the theory of political conservatism as motivated social cognition (Jost, 2006, 2017; Jost et al., 2003, 2018), we observed a replicable ideological asymmetry when it comes to the adoption of a conspiratorial mindset in general. ...
... Second, there is a good deal of evidence linking political conservatism in particular to epistemic, existential, and relational needs (Jost, 2017; Jost et al., 2003, 2009, 2018), which, as noted above, are themselves linked to the endorsement of conspiracy theories (Douglas et al., 2017; Kay et al., 2009; Whitson et al., 2015). Third, although there are alternative theoretical accounts emphasizing ideological symmetry, which would suggest that conspiratorial thinking should be equally prevalent on the left and right (Kahan, 2016; McClosky & Chong, 1985; van Prooijen et al., 2015; Uscinski et al., 2016), we know of no theories in social science that would make the opposite prediction, namely that liberals would be more prone to conspiratorial thinking than conservatives. Nor are we aware of any patterns of data that show an asymmetry in the direction opposite to the one we have observed here. ...
Article
Full-text available
It is often claimed that conspiracy theories are endorsed with the same level of intensity across the left‐right ideological spectrum. But do liberals and conservatives in the United States embrace conspiratorial thinking to an equivalent degree? There are important historical, philosophical, and scientific reasons dating back to Richard Hofstadter's book The Paranoid Style in American Politics to doubt this claim. In four large studies of U.S. adults (total N = 5049)—including national samples—we investigated the relationship between political ideology, measured in both symbolic and operational terms, and conspiratorial thinking in general. Results reveal that conservatives in the United States were not only more likely than liberals to endorse specific conspiracy theories, but they were also more likely to espouse conspiratorial worldviews in general (r = .27, 95% CI: .24, .30). Importantly, extreme conservatives were significantly more likely to engage in conspiratorial thinking than extreme liberals (Hedges' g = .77, SE = .07, p < .001). The relationship between ideology and conspiratorial thinking was mediated by a strong distrust of officialdom and paranoid ideation, both of which were higher among conservatives, consistent with Hofstadter's account of the paranoid style in American politics.
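The effect sizes quoted in this abstract follow from standard formulas. As a minimal sketch of how such numbers are computed (the group means, standard deviations, and per-group sizes below are illustrative placeholders, not the studies' raw data; only r = .27 and the pooled N of 5049 come from the abstract):

```python
import math

def hedges_g(m1, s1, n1, m2, s2, n2):
    """Hedges' g: standardized mean difference with small-sample bias correction."""
    # pooled standard deviation across the two groups
    sp = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp                 # Cohen's d
    j = 1 - 3 / (4 * (n1 + n2) - 9)    # bias-correction factor J
    return j * d

def fisher_ci(r, n, z_crit=1.96):
    """95% confidence interval for a correlation via the Fisher z-transform."""
    z = math.atanh(r)
    se = 1 / math.sqrt(n - 3)
    return math.tanh(z - z_crit * se), math.tanh(z + z_crit * se)
```

Running `fisher_ci(0.27, 5049)` yields an interval of roughly (.24, .30), consistent with the confidence interval reported in the abstract.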
... Given the importance of the Constitution in the American civil religion, policy winners in any given case are motivated to see decisions that favor their side as being clearly right because the Constitution deems it or the neutral arbiters on the Court declare it so. On the other hand, people respond to Court decisions they disagree with by chalking such decisions up to "politics": a neutral, fair reading of the law would not, of course, yield a decision they do not like (Badas 2016; Kahan 2015a, 2015b). Together, this suggests that people want to believe that the Court is apolitical, and that they may punish the Court when it defies this closely held expectation. ...
... As in other areas of political life, when they learn new information about the Court, individuals engage in information processing that allows them to accommodate both sets of beliefs. New political information is not simply accepted on its face; rather, individuals can selectively learn facts, denigrate the source of those facts, and counter-argue uncomfortable facts to stay consistent with their prior beliefs (Jerit and Barabas 2012; Kahan 2015a, 2015b; Lodge and Taber 2000; Redlawsk 2002; Redlawsk, Civettini, and Emmerson 2010). Moreover, polarization has changed the way Americans think about politics (Abramowitz and Webster 2016; Hetherington 2001; Mason 2018), including the ways in which they evaluate political institutions (Donovan et al. 2020). ...
... Even if one holds this concern, it should be noted that the importance of democratic values in models of judicial legitimacy has been the subject of considerable empirical debate. Many recent studies have found that democratic values do not meaningfully moderate the relationship between political attitudes (partisanship, ideology, policy agreement, etc.) and legitimacy (see Bartels and Johnston 2013; Christenson and Glick 2015a, 2015b), and, building on the discussion in the previous paragraph, this is likely the case, at least in part, precisely because democratic values are quite durable. ...
... There are well-known differences in risk perception and reactions, leading to polarization so strong that it almost precludes communication (Kahneman [52], Opaluch and Segerson [65], Sunstein [88,89], Sunstein et al. [90], Sunstein [91], Tversky and Kahneman [96,97,98], Tversky et al. [100]). Our current work has been motivated by recent studies (Kahan [50,51]), which describe in detail the Politically Motivated Reasoning Paradigm (PMRP). We aim to create an agent-based model using biased information processing and Bayesian updating. ...
... This holds especially when the consequences of rejection from the group are more immediate and important than the results of an 'erroneous' perception of the world. Kahan [50,51] has provided a very attractive Bayesian framework, allowing one not only to describe the role of various forms of cognitive bias but also to compare the differing empirical predictions of heuristics such as confirmation bias and political predispositions. The experiments with manipulated 'evidence' described by Kahan are very interesting. ...
Preprint
We present an introduction to a novel model of individual and group opinion dynamics, taking into account the different ways in which different sources of information are filtered due to cognitive biases. The agent-based model, which uses Bayesian updating of each individual's belief distribution, is based on recent psychology work by Dan Kahan. The open nature of the model allows one to study the effects of both static and time-dependent biases and information-processing filters. In particular, the paper compares the effects of two important psychological mechanisms: confirmation bias and politically motivated reasoning. Depending on the effectiveness of the information filtering (agent bias), agents confronted with an objective information source may either reach a consensus based on the truth or remain divided despite the evidence. In general, the model may provide insight into increasingly polarized modern societies, especially as it allows the mixing of different types of filters: psychological, social, and algorithmic.
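The mechanism this abstract describes can be sketched in a few lines. The following is a hypothetical toy version of the general idea, Bayesian updating in which a bias filter dampens evidence that is incongruent with either the agent's own belief (confirmation bias) or the group's position (politically motivated reasoning); the power-law filter form and all parameter values are assumptions, not the authors' actual model:

```python
def update(belief, lr):
    """Bayesian update of P(H), given a signal with likelihood ratio
    lr = P(signal | H) / P(signal | not-H)."""
    odds = belief / (1 - belief) * lr
    return odds / (1 + odds)

def filtered_lr(lr, belief, group_prior, mode, strength):
    """Bias filter: shrink incongruent evidence toward neutrality (lr = 1).
    'confirmation': congruence is judged against the agent's own belief;
    'pmr': congruence is judged against the group position (identity protection)."""
    anchor = belief if mode == "confirmation" else group_prior
    congruent = (lr > 1) == (anchor > 0.5)
    return lr if congruent else lr ** (1 - strength)

def simulate(mode, strength, group_prior, steps=50):
    """An agent repeatedly sees an objective signal favoring H (lr = 2)."""
    belief = group_prior
    for _ in range(steps):
        belief = update(belief, filtered_lr(2.0, belief, group_prior, mode, strength))
    return belief
```

With a fully effective identity filter (strength = 1.0), an agent whose group starts against H never moves; with a weak filter the same agent converges toward the truth, reproducing the abstract's contrast between truth-based consensus and persistent division.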
Article
It is widely agreed that dissatisfaction with Supreme Court decisions harms the Court’s standing among the public. However, we do not yet know how or why Court performance affects legitimacy. We examine the role that mass perceptions of the Supreme Court’s institutional nature—particularly how “political” it is—plays in assessments of its legitimacy. We find that policy disagreement with Supreme Court decisions causes individuals to view that decision, and the Court itself, as being political in nature. We then show that the more political people think the Court is, the less legitimate they consider it to be. In this way, we show that policy disagreement with decisions strongly and directly reduces Court legitimacy.
... That said, a burgeoning chorus of scholars has asserted that the relation between conservatism and rigidity hinges crucially on a host of empirical (e.g., Ditto et al., 2019; Federico & Malka, 2018; Feldman & Johnston, 2014; Kahan, 2016; Malka & Soto, 2015; Zmigrod et al., 2019), methodological (e.g., Malka et al., 2017; Zmigrod, 2020), and metascientific (e.g., Duarte et al., 2015; Jussim et al., 2016) factors, such that the RRH's evidentiary foundation may be grounded in a noisy and contradictory literature. To provide a sense of these prior critiques, consider that many people identify as "socially liberal" and "economically conservative" (or vice versa), suggesting that "liberalism" and "conservatism" may not be psychologically coherent categories (Feldman, 2013; Kerr, 1952). ...
... Still, what might account for such a dramatic split? Notwithstanding the possible influence of method artifacts such as content overlap, common method variance, semantic overlap, and/or acquiescence response bias (e.g., Peabody, 1961; Rokeach, 1967; Rorer, 1965), one reasonably straightforward interpretation has been advanced by Kahan (2016), who noted that idiosyncrasies of information processing are not easily accessible to introspective observation, such that "there is thus little reason to believe a person's own perception of the quality of his reasoning is a valid measure of it" (p. 5). ...
Preprint
Full-text available
The rigidity-of-the-right hypothesis (RRH), which posits that cognitive, motivational, and ideological rigidity resonate with political conservatism, is an influential but controversial psychological account of political ideology. Here, we leverage several methodological and theoretical sources of this controversy to conduct an extensive quantitative review—with the dual aims of probing the RRH’s basic assumptions and parsing the RRH literature’s heterogeneity. Using multi-level meta-analyses of relations between varieties of rigidity and ideology measures alongside a bevy of potential moderators (s = 329, k = 708, N = 187,612), we find that associations between conservatism and rigidity are tremendously heterogeneous, suggesting a complex—yet conceptually fertile—network of relations between these constructs. Most notably, whereas social conservatism was robustly associated with rigidity, associations between economic conservatism and rigidity indicators were inconsistent, small, and not statistically significant outside of the United States. Moderator analyses revealed that non-representative sampling, criterion contamination, and disproportionate use of American samples have yielded over-estimates of associations between rigidity-related constructs and conservatism in past research. We resolve that drilling into this complexity, thereby moving beyond the question of if conservatives are essentially rigid to when and why they might or might not be, will help provide a more realistic account of the psychological underpinnings of political ideology.
... Sood and Khanna (2018) address the problem of politically motivated responding in a different context by providing financial incentives, which are likely to increase the accuracy motivation of respondents (Bullock et al., 2015; Prior et al., 2015). Following the reasoning of previous research (Berinsky, 2018; Kahan, 2016b), we see the use of financial incentives as problematic for two reasons:
... (1) financial incentives have no real-world equivalent and thus little external validity (Kahan, 2016b), and (2) incentives might induce strategic, incentive-seeking behavior among respondents in general (Berinsky, 2018). In particular, a financial incentive might motivate some respondents who would otherwise report their inference truthfully to adjust their responses simply to obtain the incentive. ...
Article
Full-text available
Information processing during heated debates on asylum and immigration may often be influenced by prejudice rather than a desire to learn facts. In this article, we investigate how people process empirical evidence on the consequences of refugee arrivals through a novel survey experiment that disentangles politically motivated learning from other forms of learning and expressive responding. Specifically, we ask respondents to interpret a 2×2 table about the relationship between asylum seekers and crime rates. Crucially, respondents are randomly allocated to evaluate a conclusion that triggers their identity‐protective stakes or not. In addition, we test for motivated responding as an alternative explanation by randomly providing some respondents with a response format that motivates them to report their inference truthfully. We find that information processing changes substantially when new information challenges existing asylum attitudes. Politically motivated learning is strongest among voters with strong negative prior attitudes towards asylum seekers. Our results also indicate that expressive responding can only partially account for this gap in correctly reported inferences. Our research has important implications for research on the consequences of refugee migration, theories of motivated reasoning, and survey methodology.
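The normative benchmark in such a 2×2 covariance task is a comparison of row proportions; the intuitive shortcut of comparing the largest raw cells is what makes the task diagnostic of motivated reasoning. A sketch with made-up counts (the survey's actual cell values and row/column labels are not reproduced here and are purely illustrative):

```python
def correct_inference(inc_a, no_inc_a, inc_b, no_inc_b):
    """Compare outcome rates across the two rows of a 2x2 table.
    Row A: e.g. municipalities that received asylum seekers;
    Row B: municipalities that did not. Columns: crime increased / did not.
    (Labels are hypothetical, not the survey's wording.)"""
    rate_a = inc_a / (inc_a + no_inc_a)
    rate_b = inc_b / (inc_b + no_inc_b)
    if rate_a > rate_b:
        return "rate higher in row A"
    if rate_a < rate_b:
        return "rate higher in row B"
    return "rates equal"

def naive_inference(inc_a, no_inc_a, inc_b, no_inc_b):
    """The tempting heuristic: pick the row containing the largest single cell."""
    return "row A" if max(inc_a, no_inc_a) >= max(inc_b, no_inc_b) else "row B"
```

For a hypothetical table (223, 75, 107, 21), row A contains the largest single cell, yet row B has the higher outcome rate, so the heuristic and the correct reading diverge; tables of this shape are what let the design separate genuine inference from attitude-driven responding.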
... In the US, Republicans select and tend to believe different news sources than Democrats (Gallup, 2019; Pennycook & Rand, 2019c); however, the relation between ideology and media evaluations is less clear in Germany. Scholars also debate the role of partisanship in biased information seeking and processing (Ditto et al., 2018; Jost, 2017; Kahan, 2016b). It could be that one side of the political spectrum is more prone to believing sources that cater to their preexisting beliefs and attitudes. ...
... The concept of "media trust" describes average credibility/trustworthiness perceptions across a set of sources; "mainstream media trust," the average perception across sources considered mainstream. There is a debate over whether the effect of congruence can be explained within a Bayesian framework or whether it should be conceived of as "bias," which is not crucial for our expectations below (Gerber & Green, 1999; Kahan, 2016b; Tappin et al., 2018). ...
Article
The increasing spread of false stories (“fake news”) represents one of the great challenges societies face in the 21st century. A little-understood aspect of this phenomenon and of the processing of online news in general is how sources influence whether people believe and share what they read. In contrast to the predigital era, the Internet makes it easy for anyone to imitate well-known and credible sources in name and appearance. In a preregistered survey experiment, we first investigate the effect of this contrast (real vs. fake source) and find that subjects, as expected, have a higher tendency to believe and a somewhat higher propensity to share news by real sources. We then expose subjects to a number of reports manipulated in content (congruent vs. incongruent with individuals’ attitudes), which reveals our most crucial finding. As predicted, people are more likely to believe a news report by a source that has previously given them congruent information. However, this only holds if the source is fake. We further use machine learning to uncover treatment heterogeneity. Effects vary most strongly for different levels of trust in the mainstream media and having voted for the populist right.
... In particular, conservatives can show motivated reasoning when forming their attitudes about economic inequality in the United States (Bartels, 2019). Other work shows that liberals and conservatives both engage in motivated reasoning (Frimer et al., 2017;Kahan, 2016). Related to and sometimes fueling motivated reasoning, both liberals and conservatives can show motivation for accuracy. ...
Preprint
Political polarization is a barrier to enacting policy solutions to global issues. Social psychology has a rich history of studying polarization, and there is an important opportunity to refine and define its contributions to the present political realities. We do so in the context of one of the most pressing modern issues: climate change. We synthesize the literature on political polarization and its applications to climate change, and we propose lines of further research and intervention design. We focus on polarization in the United States, examining other countries when literature is available. The polarization literature emphasizes two types of mechanisms: individual-level psychological processes related to political ideology and group-level psychological processes related to partisan identification. We highlight the potential intervention strategies of circumventing solution aversion, leveraging superordinate identities, correcting misperceived norms, and having trusted experts and politicians communicate about climate change. Interventions that address group-level processes can be more effective than those that address individual-level processes. These areas of research and intervention development are particularly important given that behavioral interventions grounded in scientific research are one of our most promising tools to achieve the behavioral wedge we need to address climate change and to make progress on other policy issues.
... Another way individuals perceive risk is according to what reinforces their commitment to the group to which their views belong [16]. This perception can be independent of the best available evidence and sometimes in conflict with it [17]. The second way relates to the cultural-cognitive aspect of risk perception and explains why groups with opposing political views tend to disagree on important social topics. ...
Chapter
Full-text available
The COVID-19 pandemic unexpectedly created many health risks when it emerged at the end of 2019 and the beginning of 2020. The aim of this paper is to examine perceptions of general health risks during the second wave of the pandemic (October 2020 to December 2020). For this purpose, a questionnaire exploring different risks during COVID was created. The study surveyed a target group chosen purposefully from members of opposing political parties in N. Macedonia in order to explore potential differences. The sample comprised 100 respondents in total, 50 from each of the country's two main political parties (VMRO-DPMNE and SDSM). The findings show that there were no differences between members of opposing political parties on aspects related to general health risks during COVID. The authors conclude that this result reflects a shared belief among party members in the efficacy of well-known measures. The findings provide an excellent background for creating strategies for managing public perception in situations involving health risks.
... [Motivated system 2 reasoning] also distinguishes politically motivated reasoning from cognitively biased forms of information processing in which the likelihood ratio is endogenous to some non-truth-seeking influence other than identity protection, such as an individual's priors in the case of confirmation bias," although the effects of prior beliefs and partisanship have not been sufficiently empirically investigated in the context of investigating the apparent role of deliberation (39,40). ...
Article
Full-text available
Why is disbelief in anthropogenic climate change common despite broad scientific consensus to the contrary? A widely held explanation involves politically motivated (system 2) reasoning: Rather than helping uncover the truth, people use their reasoning abilities to protect their partisan identities and reject beliefs that threaten those identities. Despite the popularity of this account, the evidence supporting it (i) does not account for the fact that partisanship is confounded with prior beliefs about the world and (ii) is entirely correlational with respect to the effect of reasoning. Here, we address these shortcomings by (i) measuring prior beliefs and (ii) experimentally manipulating participants’ extent of reasoning using cognitive load and time pressure while they evaluate arguments for or against anthropogenic global warming. The results provide no support for the politically motivated system 2 reasoning account over other accounts: Engaging in more reasoning led people to have greater coherence between judgments and their prior beliefs about climate change—a process that can be consistent with rational (unbiased) Bayesian reasoning—and did not exacerbate the impact of partisanship once prior beliefs are accounted for.
... In contrast, Kahan's model of politically motivated reasoning suggests that deliberate, slow System 2 thinking is required to successfully direct reasoning. For example, Kahan (2013, 2016b) argues that when individuals defeat challenging arguments to ensure their position remains loyal to their identity-giving group, it is a deliberate and often sophisticated intellectual act that requires System 2 thinking. ...
Chapter
Full-text available
Empirical research in psychology and political science shows that individuals collect, process, and interpret information in a goal-driven fashion. Several theorists have argued that rather than striving for accuracy in their conclusions, individuals are motivated to arrive at conclusions that align with their previous beliefs, values, or identity commitments. The literature refers to this phenomenon broadly as ‘motivated reasoning’. In the context of risk governance, motivated reasoning can help to explain why people vary in their risk perceptions, evaluations, and preferences about risk management. But our current understanding of the phenomenon is incomplete, including the degree to which motivated reasoning should be considered rational and reasonable. Further, the research on motivated reasoning is largely unknown among risk practitioners. This chapter identifies key theoretical models of motivated reasoning, discusses the conceptual differences between them, and explores the implications of motivated reasoning for risk governance. Motivated reasoning is often labeled as ‘irrational’ and thus seen to prevent effective decision-making about risk, but this chapter challenges this assessment. The chapter concludes by identifying theoretical and empirical implications for researchers studying motivated reasoning and risk, as well as practical implications for policymakers and regulators involved in risk governance.
... On the other hand, the preference for information that matches one's political opinions appears to depend on the motives behind information use. Preferences for opinion-confirming information emerge in entertainment-oriented use, whereas use aimed at finding correct information appears largely independent of prior political attitudes (Evans, 2008; Jost et al., 2013; Kahan, 2016a, 2016b). The broadly stated expectation that people, for psychological reasons, deliberately consume only information that matches their opinions is thus an oversimplification and therefore false. ...
... Another limitation of the present work is the underdeveloped validity of many or most rigidity constructs and measures. For instance, the idiosyncrasies of information processing are not easily accessible to introspective observation (see Kahan, 2016). Indeed, cognitive psychology and neuropsychology typically rely on behavioral tasks to assess cognition (e.g., cognitive ability or memory are rarely measured using self-reports), in part because self-assessments of cognitive performance are frequently inaccurate (Furnham, 2001;Kruger & Dunning, 1999). ...
Article
Full-text available
The rigidity-of-the-right hypothesis (RRH), which posits that cognitive, motivational, and ideological rigidity resonate with political conservatism, is an influential but controversial psychological account of political ideology. Here, we leverage several methodological and theoretical sources of this controversy to conduct an extensive quantitative review—with the dual aims of probing the RRH’s basic assumptions and parsing the RRH literature’s heterogeneity. Using multi-level meta-analyses of relations between varieties of rigidity and ideology measures alongside a bevy of potential moderators (s = 329, k = 708, N = 187,612), we find that associations between conservatism and rigidity are tremendously heterogeneous, suggesting a complex—yet conceptually fertile—network of relations between these constructs. Most notably, whereas social conservatism was robustly associated with rigidity, associations between economic conservatism and rigidity indicators were inconsistent, small, and not statistically significant outside of the United States. Moderator analyses revealed that non-representative sampling, criterion contamination, and disproportionate use of American samples have yielded over-estimates of associations between rigidity-related constructs and conservatism in past research. We resolve that drilling into this complexity, thereby moving beyond the question of if conservatives are essentially rigid to when and why they might or might not be, will help provide a more realistic account of the psychological underpinnings of political ideology.
... While issue reprioritization aims at better understanding how partisans reweight issues based on their party identity, goal reprioritization sheds light on how public officials process performance data when there is a cognitive dissonance between their governance and goal preferences. Goal reprioritization indicates that, when public officials' governance and goal preferences do not align, they will place more weight on consistent performance indicators and will lower the importance they give to inconsistent performance indicators (Kahan 2016). For instance, imagine public officials who weight cost as being more relevant compared to citizen satisfaction. ...
Article
Full-text available
When public officials evaluate service providers’ performance, this evaluation is influenced by their preferences for the public or private provision of services. However, these so-called governance preferences often conflict with public officials’ preferences for certain performance measures during evaluation processes. Building on goal reprioritization theory, this study examines how public officials behave in situations where their governance preferences do not align with their preferences for the performance measures. Using survey experiment data (n = 4,248), we found that public officials use goal reprioritization rather than unbiased decision-making when assessing conflicting performance information, questioning the efficient use of performance information by public administrations.
... Specifically, we explore the relationship between risk perceptions, political orientation, and people's willingness to "adhere to" or "comply with" disease-spread mitigating behaviors like vaccinating, masking, and social distancing. Here, we focus on differences between those who lean Left in their political orientation (i.e., Democrats and liberals) versus those on the Right (i.e., Republicans and conservatives), and we explore how COVID-related decision making can reflect a uniquely political form of "motivated reasoning" (Bolsen and Palm, 2019; Druckman and McGrath, 2019; Epley and Gilovich, 2016; Kahan, 2015, 2016). We define such polarization more precisely later in the paper. ...
Article
Objective: Risk assessment and response is important for understanding human behavior. The divisive context surrounding the coronavirus pandemic inspires our exploration of risk perceptions and the polarization of mitigation practices (i.e., the degree to which the behaviors of people on the political “Left” diverge from those on the “Right”). Specifically, we investigate the extent to which the political polarization of willingness to comply with mitigation behaviors changes with risk perceptions.
Method: Analyses use data from two sources: an original dataset of Twitter posts and a nationally representative survey. In the Twitter data, negative binomial regression models are used to predict mitigation intent measured using tweet counts. In the survey data, logit models predict self-reported mitigation behavior (vaccination, masking, and social distancing).
Results: Findings converged across both datasets, supporting the idea that the links between political orientation and willingness to follow mitigation guidelines depend on perceived risk. People on the Left are more inclined than their Right-oriented colleagues to follow guidelines, but this polarization tends to decrease as the perceived risk of COVID-19 intensifies. Additionally, we find evidence that exposure to COVID-19 infections sends ambiguous signals about the risk of the virus while COVID-19 related deaths have a more consistent impact on mitigation behaviors.
Conclusions: Pandemic-related risks can create opportunities for perceived “common ground” between the political “Right” and “Left.” Risk perceptions and politics interact in their links to intended COVID-19 mitigation behavior (as measured both on Twitter and in a national survey). Our results invite a more complex interpretation of political polarization than those stemming from simplistic analyses of partisanship and ideology.
... Social identity needs, especially, have come to play a prominent role in research on a form of motivated reasoning of particular interest: politically motivated reasoning. Politically motivated reasoning is often considered to be the main explanation for the phenomenon of fact polarization (Kahan, 2016a, 2016b). In Kahan's words, fact polarization is "intense, persistent partisan contestation over facts that admit of scientific evidence" (Kahan, 2016b, p. 1). ...
... If people were driven solely by accuracy goals when dealing with political information, it would be easy to identify those who answer knowledge questions incorrectly as uninformed. However, since people are also likely to have directional goals, such as protecting core values and partisan and political identities (Kahan, 2016a, 2016b; Kunda, 1990; Taber & Lodge, 2006), they may respond incorrectly because their priors led them to access and/or interpret information in ways that support an incorrect answer. For example, people with strong party identities and attitudes that deviate from those disseminated in mainstream news media may selectively turn to alternative and partisan media for news (Benkler et al., 2018; Knobloch-Westerwick, 2014; Zaller, 1992), which may increase the likelihood of encountering biased or misleading information (Vargo et al., 2017). ...
... Yet most adults fall short of being fully aware of when and why they change their beliefs (Sloman & Fernbach, 2017), and researchers have devoted enormous attention to identifying external interventions capable of influencing beliefs regarded as changeworthy (Sloman & Rabb, 2019). The long-standing approach of simple exposure to new, more justifiable understandings, whether in a textbook or a discussion group with differently thinking participants, has shown at best mixed results (Kahan, 2016; Kahne & Bowyer, 2017; Stanley et al., 2020). Change, if it occurs at all, may be superficial, transient, and at risk of the negative outcome of polarization. ...
Article
Full-text available
The construct of metacognition appears in an ever increasing number and range of contexts in educational, developmental, and cognitive psychology. Can it retain its status as a useful construct in the face of such diverse application? Or is it merely an umbrella term for diverse mental phenomena that are loosely if at all connected? Here I argue for metacognition playing many diverse roles yet having key features that connect these in a shared framework. Proposed as central to this framework is the exercise of inhibitory cognitive control as a necessary condition for metacognitive competence. Also argued for is greater recognition of metacognition as a disposition, not just competence. As a disposition its foundations are epistemological, and its value and importance lie in supporting individuals’ effective management of their own minds. This disposition puts them in maximum control of what they think and know and the processes they engage in to revise their beliefs, individually and in interaction with others.
... This lack of a direct effect of education on the acceptance or rejection of scientific facts has been well documented in the case of climate change [52][53][54]. Kahan [55,56] has examined the motivations behind this apparent paradox, suggesting that the personal quality necessary to accept views contradicting currently held beliefs is not education or general intelligence but scientific curiosity [57]. We note, however, that these findings may be particular to the USA, as a recent study [58] suggests. ...
Article
Full-text available
Background: A realistic description of the social processes leading to the increasing reluctance to various forms of vaccination is a very challenging task. This is due to the complexity of the psychological and social mechanisms determining the positioning of individuals and groups against vaccination and associated activities. Understanding the role played by social media and the Internet in the current spread of the anti-vaccination (AV) movement is of crucial importance. Methods: We present novel, long-term Big Data analyses of Internet activity connected with the AV movement for such different societies as the US and Poland. The datasets we analyzed cover multiyear periods preceding the COVID-19 pandemic, documenting the behavior of vaccine related Internet activity with high temporal resolution. To understand the empirical observations, in particular the mechanism driving the peaks of AV activity, we propose an Agent Based Model (ABM) of the AV movement. The model includes the interplay between multiple driving factors: contacts with medical practitioners and public vaccination campaigns, interpersonal communication, and the influence of the infosphere (social networks, WEB pages, user comments, etc.). The model takes into account the difference between the rational approach of the pro-vaccination information providers and the largely emotional appeal of anti-vaccination propaganda. Results: The datasets studied show the presence of short-lived, high intensity activity peaks, much higher than the low activity background. The peaks are seemingly random in size and time separation. Such behavior strongly suggests a nonlinear nature for the social interactions driving the AV movement instead of the slow, gradual growth typical of linear processes. The ABM simulations reproduce the observed temporal behavior of the AV interest very closely. 
For a range of parameters, the simulations result in a relatively small fraction of people refusing vaccination, but a slight change in critical parameters (such as willingness to post anti-vaccination information) may lead to a catastrophic breakdown of vaccination support in the model society, due to nonlinear feedback effects. The model allows the effectiveness of strategies combating the anti-vaccination movement to be studied. An increase in intensity of standard pro-vaccination communications by government agencies and medical personnel is found to have little effect. On the other hand, focused campaigns using the Internet and social media and copying the highly emotional and narrative-focused format used by the anti-vaccination activists can diminish the AV influence. Similar effects result from censoring and taking down anti-vaccination communications by social media platforms. The benefit of such tactics might, however, be offset by their social cost, for example, the increased polarization and potential to exploit it for political goals, or increased ‘persecution’ and ‘martyrdom’ tropes.
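The bursty, nonlinear dynamics the abstract describes can be conveyed with a minimal self-exciting toy model (a sketch of the general feedback mechanism only, not the authors' ABM; all names and parameter values are illustrative): each agent's probability of posting anti-vaccination content rises with recent activity, so random fluctuations are transiently amplified into peaks over a low background.

```python
import random

# Minimal self-exciting toy model (NOT the authors' ABM; parameters are
# illustrative). Posting probability rises with a decaying "momentum" of
# recent anti-vaccination posts, so the subcritical feedback amplifies
# random fluctuations into short bursts over a low background.

def simulate_av_activity(steps=300, n_agents=1000, base=0.001,
                         amp=0.4, decay=0.5, seed=7):
    """Return a list with the number of AV posts at each time step."""
    rng = random.Random(seed)
    activity, momentum = [], 0.0
    for _ in range(steps):
        # Per-agent posting probability: background rate plus activity
        # feedback, capped so a burst saturates instead of diverging.
        p = min(0.05, base + amp * momentum / n_agents)
        posts = sum(1 for _ in range(n_agents) if rng.random() < p)
        momentum = decay * momentum + posts  # bursts fade unless re-fed
        activity.append(posts)
    return activity

activity = simulate_av_activity()
print(max(activity), sum(activity) / len(activity))
```

With these values each post spawns fewer than one expected follow-up post, so the process stays subcritical: activity returns to the background level after each burst, but a small increase in `amp` pushes the feedback toward the runaway regime the abstract calls a catastrophic breakdown of vaccination support.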
... Furthermore, even assuming that worldview influences the processing of misinformation corrections at least some of the time, it is not clear whether such worldview effects are symmetrical along the political dimension. On the one hand, Kahan [35,36] has argued that in general, worldview effects should occur equally on both ends of the political spectrum, as biased information processing functions to protect one's socio-cultural worldview and 'tribal' identity, while endorsing opposing beliefs may lead to social exclusion (see [37]). This view has been supported by studies showing that both liberals and conservatives show motivated resistance to worldview-incongruent science-related information (e.g. ...
Article
Full-text available
Misinformation often has a continuing effect on people's reasoning despite clear correction. One factor assumed to affect post-correction reliance on misinformation is worldview-driven motivated reasoning. For example, a recent study with an Australian undergraduate sample found that when politically situated misinformation was retracted, political partisanship influenced the effectiveness of the retraction. This worldview effect was asymmetrical, that is, particularly pronounced in politically conservative participants. However, the evidence regarding such worldview effects (and their symmetry) has been inconsistent. Thus, the present study aimed to extend previous findings by examining a sample of 429 pre-screened US participants supporting either the Democratic or Republican Party. Participants received misinformation suggesting that politicians of either party were more likely to commit embezzlement; this was or was not subsequently retracted, and participants' inferential reasoning was measured. While political worldview (i.e. partisanship) influenced the extent to which participants relied on the misinformation overall, retractions were equally effective across all conditions. There was no impact of political worldview on retraction effectiveness, let alone evidence of a backfire effect, and thus we did not replicate the asymmetry observed in the Australian-based study. This pattern emerged despite some evidence that Republicans showed a stronger emotional response than Democrats to worldview-incongruent misinformation. This article is part of the theme issue ‘The political brain: neurocognitive and computational mechanisms’.
... Another way in which we frequently distort information is motivated reasoning, a phenomenon well known in the social sciences [163][164][165][166][167][168][169][170]. It relies on emotions to create explanations, justifications, or decisions that fit the person's desires and goals, rather than those that accurately correspond to evidence. ...
Article
Full-text available
The article describes the current status and potential directions of development of agent-based models of social opinion dynamics. Despite extensive effort, the models achieve, at best, only qualitative agreement with social observations. To understand increasingly pressing issues such as all-encompassing political and social polarization, the resurgence of fundamentalist and populist movements, and the persistence of socially dangerous trends such as denial of climate change or anti-vaccination activism, the models must be capable of handling a much more complex set of agent characteristics, the content of communications (between agents and through media), psychologically adequate reaction mechanisms, and realistic influence networks. Moreover, to meet the challenge of understanding the globally growing political polarization and changes brought by increasing reliance on electronic communication, the models should adapt to the post-truth era. It is necessary to include in the models phenomena such as fake news, omnipresent exaggerations and stereotypes, trolling, and algorithmic biases that funnel each user's personal information universe. We also need to consider that most social systems may be described as transient, out-of-equilibrium ones, where a crucial role is played by the models’ initial conditions. In this work, we analyze the challenges facing the modeling community and point out certain promising directions for development.
... Another possibility is that the politician's role changes how people respond to justification requirements. Some studies show that professional roles lead certain groups to make unbiased professional judgments (Kahan, 2016b). For instance, relative to the public, judges and lawyers appear to be less biased when asked to evaluate judicial information, implying that legal training, but possibly also the demands of their job, condition legal professionals to better resist politically biased processing of information (Kahan et al., 2016). ...
Article
Full-text available
A growing body of evidence shows that politicians use motivated reasoning to fit evidence with prior beliefs. In this, they are not unlike other people. We use survey experiments to reaffirm prior work showing that politicians, like the public they represent, engage in motivated reasoning. However, we also show that politicians are more resistant to debiasing interventions than others. When required to justify their evaluations, politicians rely more on prior political attitudes and less on policy information, increasing the probability of erroneous decisions. The results raise the troubling implication that the specialized role of elected officials makes them more immune to the correction of biases, and in this way, less representative of the voters they serve when they process policy information. NB: There is open access to the published article, including supplementary materials, on Behavioural Public Policy's homepage: doi:10.1017/bpp.2020.50
... tion between immigration attitudes and beliefs about policy sequences, which is generally hard to do in real-world politics due to motivated reasoning (e.g., Kahan, 2016). Given that all the policy attributes are randomized simultaneously and their effects are measured on the same scale, the design allows estimating and comparing the elasticity of (counterfactual) immigration preferences to various personal and collective interests. ...
Article
Anti-immigration preferences among educated and racially egalitarian voters are hard to explain using existing frameworks of self-interest or prejudice. I address this puzzle by developing a theory of parochial altruism, which stipulates that voters are motivated to help others at a cost, but they prioritize helping compatriots. I hypothesize that parochial altruists, or voters high in both "nationalism" and "altruism", are more supportive of immigration restrictions perceived to be in the national interest. However, parochial altruists are also expected to be more supportive of increasing immigration when it benefits their compatriots. I test my theory by conducting a population-based UK survey. Using a novel measure of elicited preferences, I first find most altruists who donate to domestic rather than global charities are as anti-immigration as egoists who do not donate at all. Using a conjoint experiment, I then show voters support increasing immigration when these alternative policies benefit their compatriots.
... Bias is notoriously difficult to measure, but fairly easy to understand. One way to define bias is that it occurs when someone changes their response when extraneous information is introduced (Kahan 2016). Below, we briefly review two examples of liberal bias in how information about group differences is interpreted. ...
Article
Full-text available
Many people greet evidence of biologically based race and sex differences with extreme skepticism, even hostility. We argue that some of the vehemence with which many intellectuals in the West resist claims about group differences is rooted in the tacit assumption that accepting evidence for group differences in socially valued traits would undermine our reasons to treat people with respect. We call this the egalitarian fallacy. We first explain the fallacy and then give evidence that self-described liberals in the United States are especially likely to commit it when they reason about topics like race and sex. We then argue that people should not be as worried as they often are about research that finds psychological differences between men and women, or between people of different racial or ethnic groups. We conclude that if moral equality is believed to rest on biological identity, ethnically diverse societies are in trouble.
Chapter
Full-text available
This volume offers a variety of research perspectives on political journalism and its coverage. The contributions show different methodological approaches to the analysis. The patterns of political journalism are mainly outlined in the context of hybrid and digital media. One focus is on journalists in social media. Some contributions shed light on mediation constellations and provide information on changes in the relationship to politics and the audience. The volume is aimed at researchers, teachers and students of journalism and political communication. With contributions by Katarina Bader | Kristina Beckmann, M.A.| Roger Blum | Chung-Hong Chan | Hanne Detel | Maximilian Eder | Rainer Freudenthaler | Anna Gaul, M.A. | Michael Graßl | Jörg Haßler | Jakob Henke | Stefanie Holtrup, M.A. | Carolin Jansen | Andreas Jungherr | Niklas Kastor | Korbinian Klinghardt, M.A. | Maike Körner, M.A. | Katharina Ludwig, M.A. | Renée Lugschitz | Peter Maurer | Philipp Müller | Paula Nitschke | Christian Nuernbergk | Nicole Podschuweit | Katharina Pohl | Marlis Prinzing | Günther Rager | Lars Rinsdorf | Thomas Roessing | Elisabeth Schmidbauer, M.A. | Hannah Schmidt, M.A. | Markus Schug, M.A. | Nina Fabiola Schumacher, M.A. | Jonas Schützeneder | Helena Stehle | Michael Steinbrecher | Bernadette Uth | Hartmut Wessler | Claudia Wilhelm | Dominique Wirz | Anna-Katharina Wurst, M.A. | Florin Zai, M.A.
Chapter
The subject of this paper is how the epistemic limitations of individuals and their biases in reasoning affect collective decisions and, in particular, the functioning of democracies. While the cognitive sciences have largely shown how the imperfections of human rationality shape individual decisions and behaviors, the implications of these imperfections for collective choice and mass behaviors have not yet been studied in such detail. In particular, the link between these imperfections and the emergence of contemporary populisms has not yet been thoroughly explored. This paper does so by considering both fundamental dimensions of the political space: the cultural-identitarian and the socio-economic. As has been noted, reflection on these points induces us to revise the picture of democracy as a regime whose collective decisions emerge from the interaction of independent individuals who are well aware of their values and interests and rationally (in the sense of rational choice theory) pursue them. This leads to a certain skepticism towards the idealization of democracy as human rationality in pursuit of the common good, an idealization that serves to provide cover for those who profit from the distortions and biases in the policy-making processes of actual democracies. A natural conclusion of the paper is that contemporary democracies are quite vulnerable in the face of populist leaders and parties, which systematically try to exploit people’s imperfect rationality to their advantage (using “easy arguments”, emotions, stereotypes…).
Article
Full-text available
This paper considers the possibility that ‘epistemic hypocrisy’ could be relevant to our blaming practices. It argues that agents who culpably violate an epistemic norm can lack the standing to blame other agents who culpably violate similar norms. After disentangling our criticism of epistemic hypocrites from various other fitting responses, and the different ways some norms can bear on the legitimacy of our blame, I argue that a commitment account of standing to blame allows us to understand our objections to epistemic hypocrisy. Agents lack the epistemic standing to blame when they are not sufficiently committed to the epistemic norms they are blaming others for violating. This not only gives us a convincing account of epistemic standing to blame, it leaves us with a unified account of moral and epistemic standing.
Article
Evidence indicates that when people forecast potential social risks, they are guided not only by facts but often also by motivated reasoning. Here I apply a Bayesian decision framework to interpret the role of motivated reasoning during forecasting and assess some of the ensuing predictions. In two online studies, for each of a set of potential risky social events (e.g., economic crisis, rise of income inequality, and increase in violent crime), participants expressed judgments about the probability that the event will occur, how negative occurrence of the event would be, and whether society is able to intervene in the event. Supporting predictions of the Bayesian decision model, the analyses revealed that participants who deemed the events as more probable also assessed occurrence of the events as more negative and believed society to be more capable of intervening in the events. Supporting the notion that a social threat is appraised as more probable when an intervention is deemed to be possible, these findings are compatible with a form of intervention bias. These observations are relevant for campaigns aimed at informing the population about potential social risks such as climate change, economic dislocations, and pandemics.
Article
Full-text available
I introduce and discuss an underappreciated form of motivated cognition: motivational pessimism, which involves the biasing of beliefs for the sake of self-motivation. I illustrate how motivational pessimism avoids explanatory issues that plague other (putative) forms of motivated cognition and discuss distinctions within the category, related to awareness, aetiology, and proximal goals.
Article
This study examines how affective polarization impacts the partisan divide in perceptions of cable news networks’ political biases. Results from a 2016 U.S. presidential election survey show that affective polarization deepens this divide, even after controlling for election involvement, partisanship strength, and cable news usage. Partisanship strength is more closely associated with in-party love than out-party hate, but out-party hate has a stronger association with media bias perception, indicating that out-group hate is more influential in shaping partisans’ media bias perception.
Chapter
Full-text available
Since political polarization significantly impacts contemporary politics and democracy, much of the research in the social sciences is dedicated to this topic. In recent times, philosophers have joined the discussion on political polarization, primarily in the fields of political philosophy and political epistemology. The main aim of this paper is to offer a philosophical analysis of some dominant explanations of political polarization and to propose solutions for a way out of it from the perspective of political philosophy. In a nutshell, to find such solutions, I look toward boosting epistemic rationality and fostering communication in conditions of tolerance and equality.
Article
Political polarization is a barrier to enacting policy solutions to global issues. Social psychology has a rich history of studying polarization, and there is an important opportunity to define and refine its contributions to the present political realities. We do so in the context of one of the most pressing modern issues: climate change. We synthesize the literature on political polarization and its applications to climate change, and we propose lines of further research and intervention design. We focus on polarization in the United States, examining other countries when literature was available. The polarization literature emphasizes two types of mechanisms of political polarization: (1) individual-level psychological processes related to political ideology and (2) group-level psychological processes related to partisan identification. Interventions that address group-level processes can be more effective than those that address individual-level processes. Accordingly, we emphasize the promise of interventions leveraging superordinate identities, correcting misperceived norms, and having trusted leaders communicate about climate change. Behavioral interventions like these that are grounded in scientific research are one of our most promising tools to achieve the behavioral wedge that we need to address climate change and to make progress on other policy issues.
Article
The highly influential theory of "Motivated System 2 Reasoning" argues that analytical, deliberative ("System 2") reasoning is hijacked by identity when considering ideologically charged issues, leading people who are more likely to engage in such reasoning to be more polarized, rather than more accurate. Here, we fail to replicate the key empirical support for this theory across five contentious issues, using a large gold-standard nationally representative probability sample of Americans. While participants were more accurate in evaluating a contingency table when the outcome aligned with their politics (even when controlling for prior beliefs), we find that participants with higher numeracy were more accurate in evaluating the contingency table, regardless of whether or not the table's outcome aligned with their politics. These findings call for a reconsideration of the effect of identity on analytical reasoning.
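The contingency-table task used in this literature pits a count-comparison heuristic against the correct proportion comparison. A worked example with illustrative numbers (in the style of the well-known skin-cream version of the task, where the raw counts point the wrong way):

```python
# Illustrative 2x2 outcome table (numbers are hypothetical). The intuitive
# strategy of comparing raw "improved" counts favors the treatment, but
# the correct strategy compares improvement *rates*.

improved = {"treated": 223, "untreated": 107}
worsened = {"treated": 75, "untreated": 21}

rate_treated = improved["treated"] / (improved["treated"] + worsened["treated"])
rate_untreated = improved["untreated"] / (improved["untreated"] + worsened["untreated"])

# 223 > 107, yet 223/298 = 0.748 < 107/128 = 0.836:
print(rate_treated < rate_untreated)  # True: the treatment actually does worse
```

Getting this right requires the ratio computation rather than the count heuristic, which is why performance on the task is taken as a behavioral index of numeracy.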
Article
Full-text available
Vice epistemology studies how character traits, attitudes, or thinking styles systematically get in the way of knowledge, while doxastic responsibility is concerned with what kinds of responses are appropriate towards agents who believe badly. This paper identifies a new connection between these two fields, arguing that our propensity to take responsibility for our doxastic failures is directly relevant for vice epistemology, and in particular, understanding the social obstacles to knowledge that epistemic vices can create. This is because responses to norm violations are an important mechanism by which norms are upheld, and maintaining epistemic norms is crucial for our collective epistemic successes. This paper then identifies a new kind of vice, one which is bad precisely because of the way it undermines the epistemic norms that our blaming practices help maintain, and thus the benefits that said norms create. I call this vice epistemic evasiveness, and it concerns the attitude that one takes towards their own performance as an epistemic agent. Evasiveness is bad because it creates uncertainty about which agents are reliable, it prevents holders of this attitude from learning from their mistakes, and it signals to third parties that the norm is not being upheld, making them less likely to follow the norm.
Article
Full-text available
Women’s rights advocates in Iowa successfully got state laws adopted in the late 1980s and in 2009 requiring gender balance on state and local boards and commissions, the only such laws in the USA. Through interview and archival methods, this paper uses a critical juncture framework to unveil how this was accomplished in part through a strategy underexplored in academic and practitioner literature—deradicalizing an issue through a series of “piecemeal” efforts. Small, less controversial changes can build up to alter the status quo, making room for changes previously thought unachievable. This study brings normatizing—the process of incrementally institutionalizing new norms—forward as a socialization strategy for social movement actors to intentionally consider employing in situations they encounter where political will on an issue is substantially lacking.
Article
Full-text available
Whereas people’s reasoning is often biased by intuitive stereotypical associations, recent debias studies suggest that performance can be boosted by short training interventions that stress the underlying problem logic. The nature of this training effect remains unclear. Does training help participants correct erroneous stereotypical intuitions through deliberation? Or does it help them develop correct intuitions? We addressed this issue in four studies with base-rate neglect and conjunction fallacy problems. We used a two-response paradigm in which participants first gave an initial intuitive response, under time pressure and cognitive load, and then gave a final response after deliberation. Studies 1A and 2A showed that training boosted performance and did so as early as the intuitive stage. After training, most participants solved the problems correctly from the outset and no longer needed to correct an initial incorrect answer through deliberation. Studies 1B and 2B indicated that this sound intuiting persisted over at least two months. The findings confirm that a short training can debias reasoning at an intuitive “System 1” stage and get reasoners to favour logical over stereotypical intuitions.
Article
Full-text available
A substantial literature shows that public polarization over climate change in the U.S. is most pronounced among the science literate. A dominant explanation for this phenomenon is that science literacy amplifies motivated reasoning, the tendency to interpret evidence such that it confirms prior beliefs. The present study tests the biasing account of science literacy in a study among the U.S. population that investigated both interpretation of climate change evidence and repeated belief-updating. Results replicated the typical correlational pattern of political polarization as a function of science literacy. However, results delivered little support for the core causal claim of the biasing account: that science literacy drives motivated reasoning. Hence, these results speak against a mechanism in which science literacy drives motivated reasoning and thereby explains polarized climate change beliefs among the science literate. This study adds to our growing understanding of the role of science literacy in public beliefs about contested science.
Article
Surveys of Americans’ views on the Zika virus allowed tests of overlapping but distinctive models of how culture affects risk responses: (1) a model inspired by the “solution aversion” (SA) hypothesis, which posits that cultural attitudes lower perceived risk when mediated by negative attitudes toward a hazard management option, and (2) the Affect Heuristic-Cultural Cognition Theory (AH-CCT) model, which posits that cultural biases can alter varied risk responses mediated by affect about the risk source or hazard. Dependent variables were personal and U.S. risk perceptions, judged need for U.S. action against Zika, and support for potential or actual Zika management options. Both models were supported, but the SA-based model (using value threat evoked by a management option as the affect mediator) was stronger overall and applied more to need and support judgments, while AH-CCT (using affect aroused by Zika as mediator) applied more to amplifying perceived risk. These results underline the value of parallel testing of models with multiple candidate measures of a limited number of concepts to enhance scientific understanding of factors affecting risk responses.
Chapter
Full-text available
The natural heritage of mankind is currently exposed to numerous threats caused by civilization, yet awareness of these dangers remains low among many people. Attempts to awaken awareness of these threats are entangled in complex moral, ideological, and scholarly relationships, which are currently being researched.
Article
Mistakes and overconfidence in detecting lies could help lies spread. Participants in our experiments observe videos in which senders either tell the truth or lie, and are incentivized to distinguish between them. We find that participants fail to detect lies, but are overconfident about their ability to do so. We use these findings to study the determinants of sharing and its effect on lie detection, finding that even when incentivized to share truthful videos, participants are more likely to share lies. Moreover, the receivers are more likely to believe shared videos. Combined, the tendency to believe lies increases with sharing. (JEL C91, D83, D91, L82)
Article
We use laboratory experiments to study whether biases in beliefs grow more severe when people socially exchange these beliefs with one another. We elicit subjects’ (naturally biased) beliefs about their relative performance in an intelligence quotient (IQ) test and allow them to update these beliefs in real time. Part of the way through the task we give each subject access to the beliefs of a counterpart who performed similarly on the test and allow them both to observe the evolution of one another’s beliefs. We find that subjects respond to one another’s beliefs in a highly asymmetric way, causing a severe amplification of subjects’ initial bias. We find no such patterns in response to objective public signals or in control treatments without social exchange or scope for motivated beliefs. We also provide evidence that the pattern is difficult to reconcile with Bayesianism and standard versions of confirmation bias. Overall, our results suggest that bias amplification is likely driven by “motivated assignment of accuracy” to others’ beliefs: subjects selectively attribute higher informational value to social signals that reinforce their motivation.
Article
Full-text available
The scientific evidence of climate change has never been clearer and more convergent, and calls for transformations to sustainability have never been greater. Yet, perspectives and social opinions about it remain fractured, and collaborative action is faltering. Climate policy seeks to forge a singular sense of climate change, dominated by an ‘information deficit model’ that focuses on transferring climate science to the lay public. Critics argue that this leaves out certain perspectives, including the plurality of meanings uncovered through participatory approaches. However, questions remain about how these approaches can better account for nuances in the psychological complexity of climate change, without getting stuck in the cul-de-sacs of epistemological relativism and post-truth politics. In this paper, I explore an approach through which we might find shared meaning at the interface of individual and collective views about climate change. I first present a conceptual framework that describes five psychological reasons why climate change challenges individual and collective meaning-making, and also provides a way to understand how meaning is organized within that. I then use this framework to inform the use of photo voice as a transformative (action-research) method, examining its ability to overcome some of the meaning-making challenges specific to climate change. I discuss how participants from a coffee cooperative in Guatemala reflected first on their own climate meanings and then engaged in a meaning-making process with other actors in the coffee value chain. Findings suggest a psychosocial approach to climate engagement—one that engages both subjectively and intersubjectively on the complexities unique to climate change—is helpful in acknowledging an ontological pluralism of ‘climate changes’ amongst individuals, while also supporting a nexus-agreement collectively. This may in turn contribute to a more effective and ethical process of transformation.
Article
Full-text available
“Meta-argument allegations” consist of protestations that an interlocutor’s speech is wrongfully offensive or will trigger undesirable social consequences. Such protestations are meta-argument in the sense that they do not interrogate the soundness of an opponent’s argumentation, but instead focus on external features of that argument. They are allegations because they imply moral wrongdoing. There is a legitimate place for meta-argument allegations, and the moral and epistemic goods that can come from them will be front of mind for those levelling such allegations. But I argue there is a dark side to such allegations, and their epistemic and moral costs must be seriously weighed. Meta-argument allegations have a concerning capacity to derail discussions about important topics, stymieing argumentational interactions and the goods they provide. Such allegations can license efforts to silence, punish and deter—even as they provoke the original speaker to retaliate in kind. Used liberally, such allegations can escalate conflicts, block open-mindedness, and discourage constructive dialogues. In response, I defend “argumentational tolerance”—a principled wariness in employing meta-argument allegations—as a virtue of ethical argument.
Article
Full-text available
In arguing about justice, different sides often accept common moral principles, but reach different conclusions about justice because they disagree about facts. I argue that motivated reasoning, epistemic injustice, and ideologies of injustice support unjust institutions by entrenching distorted representations of the world. Working from a naturalistic conception of justice as a kind of social contract, I suggest some strategies for discovering what justice demands by counteracting these biases. Moral sentiments offer vital resources to this end.
Article
Full-text available
This essay seeks to explain what the “science of science communication” is by doing it. Surveying studies of cultural cognition and related dynamics, it demonstrates how the form of disciplined observation, measurement, and inference distinctive of scientific inquiry can be used to test rival hypotheses on the nature of persistent public conflict over societal risks; indeed, it argues that satisfactory insight into this phenomenon can be achieved only by these means, as opposed to the ad hoc story-telling dominant in popular and even some forms of scholarly discourse. Synthesizing the evidence, the essay proposes that conflict over what is known by science arises from the very conditions of individual freedom and cultural pluralism that make liberal democratic societies distinctively congenial to science. This tension, however, is not an “inherent contradiction”; it is a problem to be solved — by the science of science communication understood as a “new political science” for perfecting enlightened self-government.
Article
Full-text available
Crowdsourcing has had a dramatic impact on the speed and scale at which scientific research can be conducted. Clinical scientists have particularly benefited from readily available research study participants and streamlined recruiting and payment systems afforded by Amazon Mechanical Turk (MTurk), a popular labor market for crowdsourcing workers. MTurk has been used in this capacity for more than five years. The popularity and novelty of the platform have spurred numerous methodological investigations, making it the most studied nonprobability sample available to researchers. This article summarizes what is known about MTurk sample composition and data quality with an emphasis on findings relevant to clinical psychological research. It then addresses methodological issues with using MTurk, many of which are common to other nonprobability samples but unfamiliar to clinical science researchers, and suggests concrete steps to avoid these issues or minimize their impact.
Article
Full-text available
Prior research suggests that liberals are more complex than conservatives. However, it may be that liberals are not more complex in general, but rather only more complex on certain topic domains (while conservatives are more complex in other domains). Four studies (comprised of over 2,500 participants) evaluated this idea. Study 1 involves the domain specificity of a self-report questionnaire related to complexity (dogmatism). By making only small adjustments to a popularly used dogmatism scale, results show that liberals can be significantly more dogmatic if a liberal domain is made salient. Studies 2–4 involve the domain specificity of integrative complexity. A large number of open-ended responses from college students (Studies 2 and 3) and candidates in the 2004 Presidential election (Study 4) across an array of topic domains reveals little or no main effect of political ideology on integrative complexity, but rather topic domain by ideology interactions. Liberals are higher in complexity on some topics, but conservatives are higher on others. Overall, this large dataset calls into question the typical interpretation that conservatives are less complex than liberals in a domain-general way.
Article
Full-text available
The existence of anthropogenic climate change remains a public controversy despite the consensus among climate scientists. The controversy may be fed by the existence of scientists from other disciplines publicly casting doubt on the validity of climate science. The extent to which non-climate scientists are skeptical of climate science has not been studied via direct survey. Here we report on a survey of biophysical scientists across disciplines at universities in the Big 10 Conference. Most respondents (93.6%) believe that mean temperatures have risen and most (91.9%) believe in an anthropogenic contribution to rising temperatures. Respondents strongly believe that climate science is credible (mean credibility score 6.67/7). Those who disagree about climate change disagree over basic facts (e.g., the effects of CO2 on climate) and have different cultural and political values. These results suggest that scientists who are climate change skeptics are outliers and that the majority of scientists surveyed believe in anthropogenic climate change and that climate science is credible and mature.
Article
Full-text available
Decision scientists have identified various plausible sources of ideological polarization over climate change, gun violence, national security, and like issues that turn on empirical evidence. This paper describes a study of three of them: the predominance of heuristic-driven information processing by members of the public; ideologically motivated reasoning; and the cognitive-style correlates of political conservatism. The study generated both observational and experimental data inconsistent with the hypothesis that political conservatism is distinctively associated with either unreflective thinking or motivated reasoning. Conservatives did no better or worse than liberals on the Cognitive Reflection Test (Frederick, 2005), an objective measure of information-processing dispositions associated with cognitive biases. In addition, the study found that ideologically motivated reasoning is not a consequence of over-reliance on heuristic or intuitive forms of reasoning generally. On the contrary, subjects who scored highest in cognitive reflection were the most likely to display ideologically motivated cognition. These findings corroborated an alternative hypothesis, which identifies ideologically motivated cognition as a form of information processing that promotes individuals' interests in forming and maintaining beliefs that signify their loyalty to important affinity groups. The paper discusses the practical significance of these findings, including the need to develop science communication strategies that shield policy-relevant facts from the influences that turn them into divisive symbols of political identity.
Article
Full-text available
The cultural cognition thesis posits that individuals rely extensively on cultural meanings in forming perceptions of risk. The logic of the cultural cognition thesis suggests that a two-channel science communication strategy, combining information content (Channel 1) with cultural meanings (Channel 2), could promote open-minded assessment of information across diverse communities. We test this kind of communication strategy in a two-nation (United States, n = 1,500; England, n = 1,500) study, in which scientific information content on climate change was held constant while the cultural meaning of that information was experimentally manipulated. We found that cultural polarization over the validity of climate change science is offset by making citizens aware of the potential contribution of geoengineering as a supplement to restriction of CO2 emissions. We also tested the hypothesis, derived from a competing model of science communication, that exposure to information on geoengineering would lead citizens to discount climate change risks generally. Contrary to this hypothesis, we found that subjects exposed to information about geoengineering were slightly more concerned about climate change risks than those assigned to a control condition.
Article
Full-text available
We review and reconceptualize "intuition," defining intuitions as affectively charged judgments that arise through rapid, nonconscious, and holistic associations. In doing so, we delineate intuition from other decision-making approaches (e.g., insight, rational). We also develop a model and propositions that incorporate the role of domain knowledge, implicit and explicit learning, and task characteristics on intuition effectiveness. We close by suggesting directions for future research on intuition and its applications to managerial decision making.
Article
Full-text available
Despite evidence that individual differences in numeracy affect judgment and decision making, the precise mechanisms underlying how such differences produce biases and fallacies remain unclear. Numeracy scales have been developed without sufficient theoretical grounding, and their relation to other cognitive tasks that assess numerical reasoning, such as the Cognitive Reflection Test (CRT), has been debated. In studies conducted in Brazil and in the USA, we administered an objective Numeracy Scale (NS), Subjective Numeracy Scale (SNS), and the CRT to assess whether they measured similar constructs. The Rational-Experiential Inventory, inhibition (go/no-go task), and intelligence were also investigated. By examining factor solutions along with frequent errors for questions that loaded on each factor, we characterized different types of processing captured by different items on these scales. We also tested the predictive power of these factors to account for biases and fallacies in probability judgments. In the first study, 259 Brazilian undergraduates were tested on the conjunction and disjunction fallacies. In the second study, 190 American undergraduates responded to a ratio-bias task. Across the different samples, the results were remarkably similar. The results indicated that the CRT is not just another numeracy scale, that objective and subjective numeracy scales do not measure an identical construct, and that different aspects of numeracy predict different biases and fallacies. Dimensions of numeracy included computational skills such as multiplying, proportional reasoning, mindless or verbatim matching, metacognitive monitoring, and understanding the gist of relative magnitude, consistent with dual-process theories such as fuzzy-trace theory.
Article
Full-text available
Adult educators stress the importance of civic education, but few studies have theorized and measured the impact of such educational programs. This study presents a social cognitive model of political participation that posits connections among deliberative education, civic dispositions, and political conversations. The validity of this model was tested using two field studies of National Issues Forums participants, and the results provided partial support for the model. The first investigation indicated that deliberative civic education had a negative relationship with participants’ group efficacy and conversation dominance and positive associations with the ideological and demographic diversity of participants’ conversation networks. A second study demonstrated that civic dispositions and behaviors were positively associated with forum experiences that involved higher levels of reading, listening, observing, and enactment. These findings suggest the potential value of deliberative forums as a means of civic education, but they also demonstrate that forums vary considerably in their educational impact.
Article
Full-text available
The so-called bias blind spot arises when people report that thinking biases are more prevalent in others than in themselves. Bias turns out to be relatively easy to recognize in the behaviors of others, but often difficult to detect in one's own judgments. Most previous research on the bias blind spot has focused on bias in the social domain. In 2 studies, we found replicable bias blind spots with respect to many of the classic cognitive biases studied in the heuristics and biases literature (e.g., Tversky & Kahneman, 1974). Further, we found that none of these bias blind spots were attenuated by measures of cognitive sophistication such as cognitive ability or thinking dispositions related to bias. If anything, a larger bias blind spot was associated with higher cognitive ability. Additional analyses indicated that being free of the bias blind spot does not help a person avoid the actual classic cognitive biases. We discuss these findings in terms of a generic dual-process theory of cognition.
Article
Full-text available
The authors test the hypothesis that low-effort thought promotes political conservatism. In Study 1, alcohol intoxication was measured among bar patrons; as blood alcohol level increased, so did political conservatism (controlling for sex, education, and political identification). In Study 2, participants under cognitive load reported more conservative attitudes than their no-load counterparts. In Study 3, time pressure increased participants' endorsement of conservative terms. In Study 4, participants considering political terms in a cursory manner endorsed conservative terms more than those asked to cogitate; an indicator of effortful thought (recognition memory) partially mediated the relationship between processing effort and conservatism. Together these data suggest that political conservatism may be a process consequence of low-effort thought; when effortful, deliberate thought is disengaged, endorsement of conservative ideology increases.
Article
Full-text available
The cultural cognition thesis holds that individuals form risk perceptions that reflect their commitments to contested views of the good society. We conducted a study that used the dispute over mandatory HPV vaccination to test the cultural cognition thesis. Although public health officials have recommended that all girls aged 11 or 12 be vaccinated for HPV, a sexually transmitted virus that causes cervical cancer, political controversy has blocked adoption of mandatory school-enrollment vaccination programs in all but one state. An experimental study of a large sample of American adults (N = 1,538) found that cultural cognition generates disagreement about the risks and benefits of the vaccine through two mechanisms: biased assimilation, and the credibility heuristic. We discuss theoretical and practical implications.
Article
Full-text available
Analyzing political conservatism as motivated social cognition integrates theories of personality (authoritarianism, dogmatism-intolerance of ambiguity), epistemic and existential needs (for closure, regulatory focus, terror management), and ideological rationalization (social dominance, system justification). A meta-analysis (88 samples, 12 countries, 22,818 cases) confirms that several psychological variables predict political conservatism: death anxiety (weighted mean r = .50); system instability (.47); dogmatism-intolerance of ambiguity (.34); openness to experience (-.32); uncertainty tolerance (-.27); needs for order, structure, and closure (.26); integrative complexity (-.20); fear of threat and loss (.18); and self-esteem (-.09). The core ideology of conservatism stresses resistance to change and justification of inequality and is motivated by needs that vary situationally and dispositionally to manage uncertainty and threat.
Article
Full-text available
Three studies link resistance to probative information and intransigence in negotiation to concerns of identity maintenance. Each shows that affirmations of personal integrity (vs. nonaffirmation or threat) can reduce resistance and intransigence but that this effect occurs only when individuals' partisan identity and/or identity-related convictions are made salient. Affirmation made participants' assessment of a report critical of U.S. foreign policy less dependent on their political views, but only when the identity relevance of the issue rather than the goal of rationality was salient (Study 1). Affirmation increased concession making in a negotiation over abortion policy, but again this effect was moderated by identity salience (Studies 2 and 3). Indeed, although affirmed negotiators proved relatively more open to compromise when either the salience of their true convictions or the importance of remaining faithful to those convictions was heightened, the reverse was true when the salient goal was compromise. The theoretical and applied significance of these findings is discussed.
Article
This commentary uses the dynamic of identity-protective cognition to pose a friendly challenge to Jussim (2012). Like other forms of information processing, this one is too readily characterized as a bias. It is no mistake, however, to view identity-protective cognition as generating inaccurate perceptions. The “bounded rationality” paradigm incorrectly equates rationality with forming accurate beliefs. But so does Jussim's critique.
Article
This Article reports the results of a study on whether political predispositions influence judicial decisionmaking. The study was designed to overcome the two principal limitations on existing empirical studies that purport to find such an influence: the use of nonexperimental methods to assess the decisions of actual judges; and the failure to use actual judges in ideologically-biased-reasoning experiments. The study involved a sample of sitting judges (n = 253), who, like members of a general public sample (n = 800), were culturally polarized on climate change.
Article
Why does public conflict over societal risks persist in the face of compelling and widely accessible scientific evidence? We conducted an experiment to probe two alternative answers: the “Science Comprehension Thesis” (SCT), which identifies defects in the public’s knowledge and reasoning capacities as the source of such controversies; and the “Identity-protective Cognition Thesis” (ICT) which treats cultural conflict as disabling the faculties that members of the public use to make sense of decision-relevant science. In our experiment, we presented subjects with a difficult problem that turned on their ability to draw valid causal inferences from empirical data. As expected, subjects highest in Numeracy — a measure of the ability and disposition to make use of quantitative information — did substantially better than less numerate ones when the data were presented as results from a study of a new skin-rash treatment. Also as expected, subjects’ responses became politically polarized — and even less accurate — when the same data were presented as results from the study of a gun-control ban. But contrary to the prediction of SCT, such polarization did not abate among subjects highest in Numeracy; instead, it increased. This outcome supported ICT, which predicted that more Numerate subjects would use their quantitative-reasoning capacity selectively to conform their interpretation of the data to the result most consistent with their political outlooks. We discuss the theoretical and practical significance of these findings.
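The covariance-detection task at the core of this design reduces to a ratio comparison: the correct inference requires comparing outcome rates across the two rows of the 2×2 table rather than comparing raw cell counts, which is what heuristic processing favors. A minimal sketch, using illustrative cell counts and variable names of my own (assumptions, not necessarily the study's exact stimulus):

```python
# Illustrative sketch of the covariance-detection (contingency table) task.
# Cell counts below are hypothetical, chosen so that the row with MORE raw
# improvements nevertheless has a LOWER improvement rate -- the trap that
# distinguishes heuristic from numerate responses.

def improvement_rate(improved, worsened):
    """Proportion of cases within one row of the table that improved."""
    return improved / (improved + worsened)

# Hypothetical 2x2 table (rows: condition; columns: outcome)
treatment = {"improved": 223, "worsened": 75}
control = {"improved": 107, "worsened": 21}

rate_treatment = improvement_rate(**treatment)  # ~0.748
rate_control = improvement_rate(**control)      # ~0.836

# Heuristic (incorrect) reading: compare raw improvement counts.
heuristic_answer = treatment["improved"] > control["improved"]

# Correct reading: compare improvement rates across rows.
correct_answer = rate_treatment > rate_control
```

With these counts, the heuristic comparison says the treatment worked (223 > 107), while the rate comparison shows the opposite; the experimental manipulation consists of relabeling the same table as a skin-rash treatment or a gun-control ban and observing whether subjects' answers track their politics.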
Article
Partisanship seems to affect factual beliefs about politics. For example, Republicans are more likely than Democrats to say that the deficit rose during the Clinton administration; Democrats are more likely to say that inflation rose under Reagan. What remains unclear is whether such patterns reflect differing beliefs among partisans or instead reflect a desire to praise one party or criticize another. To shed light on this question, we present a model of survey response in the presence of partisan cheerleading and payments for correct and "don't know" responses. We design two experiments based on the model's implications. The experiments show that small payments for correct and "don't know" answers sharply diminish the gap between Democrats and Republicans in responses to "partisan" factual questions. Our conclusion is that the apparent gulf in factual beliefs between members of different parties may be more illusory than real. The experiments also bolster and extend a major finding about political knowledge in America: we show (as others have) that Americans know little about politics, but we also show that they often recognize their own lack of knowledge.
Article
When surveyed about economic conditions, supporters of the president's party often report more positive conditions than its opponents. Scholars have interpreted this finding to mean that partisans cannot even agree on matters of fact. We test an alternative interpretation: Partisans give partisan congenial answers even when they have, or could have inferred, information less flattering to the party they identify with. To test this hypothesis, we administered two surveys to nationally representative samples, experimentally manipulating respondents' motivation to be accurate via monetary incentives and on-screen appeals. Both treatments reduced partisan differences in reports of economic conditions significantly. Many partisans interpret factual questions about economic conditions as opinion questions, unless motivated to see them otherwise. Typical survey conditions thus reveal a mix of what partisans know about the economy, and what they would like to be true.
Article
This article examines the science-of-science-communication measurement problem. In its simplest form, the problem reflects the use of externally invalid measures of the dynamics that generate cultural conflict over risk and other policy-relevant facts. But at a more fundamental level, the science-of-science-communication measurement problem inheres in the phenomena being measured themselves. The “beliefs” individuals form about a societal risk such as climate change are not of a piece; rather they reflect the distinct clusters of inferences that individuals draw as they engage information for two distinct ends: to gain access to the collective knowledge furnished by science and to enjoy the sense of identity enabled by membership in a community defined by particular cultural commitments. The article shows how appropriately designed “science comprehension” tests—one general and one specific to climate change—can be used to measure individuals’ reasoning proficiency as collective-knowledge acquirers independently of their reasoning proficiency as cultural-identity protectors. Doing so reveals that there is in fact little disagreement among culturally diverse citizens on what science knows about climate change. The source of the climate-change controversy and like disputes over societal risks is the contamination of the science-communication environment with forms of cultural status competition that make it impossible for diverse citizens to express their reason as both collective-knowledge acquirers and cultural-identity protectors at the same time.
Article
Numerous factors shape citizens' beliefs about global warming, but there is very little research that compares the views of the public with key actors in the policymaking process. We analyze data from simultaneous and parallel surveys of (1) the U.S. public, (2) scientists who actively publish research on energy technologies in the United States, and (3) congressional policy advisors and find that beliefs about global warming vary markedly among them. Scientists and policy advisors are more likely than the public to express a belief in the existence and anthropogenic nature of global warming. We also find ideological polarization about global warming in all three groups, although scientists are less polarized than the public and policy advisors over whether global warming is actually occurring. Alarmingly, there is evidence that the ideological divide about global warming grows significantly larger with respondents' knowledge about politics, energy, and science.
Article
In this study we attempted to replicate and extend Nam, Jost, and van Bavel's (2013) finding that political conservatives are more likely to avoid dissonance-arousing situations relative to political liberals. Across two studies, Nam et al. (2013) found that conservatives were less willing to write essays in support of Democratic presidents (Obama, Study 1; Clinton, Study 2) than were liberals to write essays in support of Republican presidents (Bush, Study 1; Reagan, Study 2). We received access to Nam et al.'s materials to construct our study and increased the sample size relative to theirs. Further, we included measures of need for closure (NFC), participants' confidence in science, and the perceived ideology of the experimenters in order to test motivated social cognition and confidence in the experiment as possible mechanisms for the effect. Contrary to Nam et al.'s findings, conservatives and liberals were equally likely to avoid dissonance-arousing situations. We found some limited evidence that the dependent variable may index compliance as opposed to a desire to engage in dissonance-arousing behavior.
Article
Seeming public apathy over climate change is often attributed to a deficit in comprehension. The public knows too little science, it is claimed, to understand the evidence or avoid being misled. Widespread limits on technical reasoning aggravate the problem by forcing citizens to use unreliable cognitive heuristics to assess risk. An empirical study found no support for this position. Members of the public with the highest degrees of science literacy and technical reasoning capacity were not the most concerned about climate change. Rather, they were the ones among whom cultural polarization was greatest. This result suggests that public divisions over climate change stem not from the public’s incomprehension of science but from a distinctive conflict of interest: between the personal interest individuals have in forming beliefs in line with those held by others with whom they share close ties and the collective one they all share in making use of the best available science to promote common welfare.
Book
Human beings are consummate rationalizers, but rarely are we rational. Controlled deliberation is a bobbing cork on the currents of unconscious information processing, but we have always the illusion of standing at the helm. This book presents a theory of the architecture and mechanisms that determine when, how, and why unconscious thoughts, the coloration of feelings, the plausibility of goals, and the force of behavioral dispositions change moment-by-moment in response to “priming” events that spontaneously link changes in the environment to changes in beliefs, attitudes, and behavior. Far from the consciously directed decision-making assumed by conventional models, political behavior is the result of innumerable unnoticed forces, with conscious deliberation little more than a rationalization of the outputs of automatic feelings and inclinations.
Article
Why do people resist evidence that challenges the validity of long-held beliefs? And why do they persist in maladaptive behavior even when persuasive information or personal experience recommends change? We argue that such defensive tendencies are driven, in large part, by a fundamental motivation to protect the perceived worth and integrity of the self. Studies of social-political debate, health-risk assessment, and responses to team victory or defeat have shown that people respond to information in a less defensive and more open-minded manner when their self-worth is buttressed by an affirmation of an alternative source of identity. Self-affirmed individuals are more likely to accept information that they would otherwise view as threatening, and subsequently to change their beliefs and even their behavior in a desirable fashion. Defensive biases have an adaptive function for maintaining self-worth, but maladaptive consequences for promoting change and reducing social conflict.
Article
This paper introduces the ideologically objectionable premise model (IOPM), which predicts that biased political judgments will emerge on both the political left and right, but only when the premise of a judgment is not ideologically objectionable to the perceiver. The IOPM generates three hypothesized patterns of bias: biases among both those on the left and right, bias only among those on the right, and bias only among those on the left. These hypotheses were tested within the context of the dual process motivational model of ideological attitudes (DPM; Duckitt, 2001), which posits that right-wing authoritarianism (RWA) and social dominance orientation (SDO) are related but distinct ideological attitudes. Across two studies, all three IOPM hypotheses were tested and supported on the RWA ideological attitude dimension, and two of the three IOPM hypotheses were tested and supported on the SDO dimension. These findings indicate that the context of the judgment is an important determinant of whether biases emerge in political judgment.
Article
Very little work has been done in developing a systematic and exhaustive set of criteria which can be used for assessing model validity from a managerial standpoint. Most of the other published work in this area focuses on an individual criterion, such as sensitivity analysis. This manuscript offers a set of criteria for making that assessment. The following three basic kinds of validity are proposed: (1) technical validity; (2) operational validity; and (3) dynamic validity. Each has a number of subcriteria which are identified and discussed. Some of these criteria are quantifiable, and some are not. Each of the criteria identified is discussed in relatively broad terms, with brief examples. One of the purposes of this paper is to draw attention to an area worthy of further investigation. Perhaps further scrutiny will generate a truly systematic and exhaustive set of criteria. It is also hoped that further attention will lead to clarification of the criteria identified.
Article
This study is a replication and extension in Canada of a previous study in the United States in which toxicologists and members of the public were surveyed to determine their attitudes, beliefs, and perceptions regarding risks from chemicals. This study of “intuitive vs. scientific toxicology” was motivated by the premise that different assumptions, conceptions, and values underlie much of the discrepancy between expert and lay views of chemical risks. The results showed that Canadian toxicologists had far lower perceptions of risk and more favorable attitudes toward chemicals than did the Canadian public. The public's attitudes were quite negative and showed the same lack of dose-response sensitivity found in the earlier U.S. study. Both the public and the toxicologists lacked confidence in the value of animal studies for predicting human health risks. However, the public had great confidence in the validity of animal studies that found evidence of carcinogenicity, whereas such evidence was not considered highly predictive of human health risk by many toxicologists. Technical judgments of toxicologists were found to be associated with factors such as affiliation, gender, and worldviews. Implications of these data for risk communication are briefly discussed.
Article
How do individuals form opinions about new technologies? What role does factual information play? We address these questions by incorporating 2 dynamics, typically ignored in extant work: information competition and over-time processes. We present results from experiments on 2 technologies: carbon nanotubes and genetically modified foods. We find that factual information is of limited utility: it does not have a greater impact than other background factors (e.g., values), it adds little power to newly provided arguments/frames (e.g., compared to arguments lacking facts), and it is perceived in biased ways once individuals form clear initial opinions (e.g., motivated reasoning). Our results provide insight into how individuals form opinions over time, and bring together literatures on information, framing, and motivated reasoning.
Article
The 2008 presidential election brought the partisan divide between U.S. Republicans and Democrats to the forefront. In such contested situations, people who identify with the parties and their candidates experience pressure to adhere to their group's core beliefs and behaviors. This research hypothesized that providing individuals a chance to affirm their self-integrity would relieve some of this pressure and facilitate greater openness to the opposition. In the 2 days prior to the 2008 election, Democrats (N = 50) and Republicans (N = 60) who affirmed their self-integrity by writing about important personal values (versus those who did not self-affirm) were less driven by partisan preferences in their evaluations of Barack Obama's debate performance, more favorable to opposition candidates, and more generally open to alternative viewpoints. Additionally, 10 days after the election, affirmed Republicans thought Obama would make a better president than did nonaffirmed Republicans. Discussion centers on how motivational factors can exacerbate—and attenuate—the divide between "red" and "blue" America.
Article
Students in three sections of a high school biology course were taught a unit on evolution and natural selection. Prior to instruction, students were pretested to determine their (a) reflective reasoning skill, (b) strength of religious commitment, (c) prior declarative knowledge of evolution and natural selection, and (d) beliefs in evolution or special creation and related religiously oriented beliefs. Following instruction the measures of declarative knowledge and beliefs were readministered. The study was designed to test (a) the hypothesis that the acquisition of domain-specific concepts and the modification of nonscientific beliefs largely depends upon reflective reasoning skill, not prior declarative knowledge; and (b) the hypothesis that strength of religious commitment and a belief in special creation hinder the acquisition of scientific beliefs. Although instruction produced no overall shift toward a belief in evolution, as predicted, reflective reasoning skill was significantly related to initial scientific beliefs, and reflective reasoning skill, but not prior declarative knowledge, was significantly related to gains in declarative knowledge. Reflective reasoning skill, however, was not significantly related to changes in beliefs. Also as predicted, strength of religious commitment was negatively correlated with initial belief in evolution and with a change in belief toward evolution. Interrelationships among the study's major variables, as well as educational implications, are discussed.
Article
Two of the most important sources of catastrophic risk are terrorism and climate change. The United States has responded aggressively to the risk of terrorism while doing very little about the risk of climate change. For the United States alone, the cost of the Iraq war is now in excess of the anticipated cost of the Kyoto Protocol. The divergence presents a puzzle; it also raises more general questions about both risk perception and the public demand for legislation. The best explanation for the divergence emphasizes bounded rationality. Americans believe that aggressive steps to reduce the risk of terrorism promise to deliver significant benefits in the near future at acceptable cost. By contrast, they believe that aggressive steps to reduce the risk of climate change will not greatly benefit American citizens in the near future, and they are not willing to pay a great deal to reduce that risk. This intuitive form of cost-benefit analysis is much influenced by behavioral factors, including the availability heuristic, probability neglect, outrage, and myopia. All of these contribute, after 9/11, to a willingness to support significant steps to respond to terrorism and to relative indifference to climate change. It follows that Americans are likely to support such steps in response to climate change only if one of two conditions is met: the costs of those steps can be shown to be acceptably low or new information, perhaps including a salient incident, indicates that Americans have much to gain from risk reduction in the relatively near future.
Article
For decades, policymakers and analysts have been frustrated by the stubborn and often dramatic disagreement between experts and the public on acceptable levels of environmental risk. Most experts, for instance, see no severe problem in dealing with nuclear waste, given the precautions and safety levels now in place. Yet public opinion vehemently rejects this view, repudiating both the experts' analysis and the evidence. In Dealing with Risk, Howard Margolis moves beyond the usual "rival rationalities" explanation proffered by risk analysts for the rift between expert and lay opinion. He reveals the conflicts of intuition that undergird those concerns, and proposes a new approach to the psychology of persuasion and belief. Examining the role of intuition, mental habits, and cognitive frameworks in the construction of public opinion, this compelling account bridges the public policy impasse that has plagued controversial environmental issues.
Article
This paper is concerned with the influence of scientists' prior beliefs on their judgments of evidence quality. A laboratory experiment using advanced graduate students in the sciences (study 1) and an experimental survey of practicing scientists on opposite sides of a controversial issue (study 2) revealed agreement effects. Research reports that agreed with scientists' prior beliefs were judged to be of higher quality than those that disagreed. In study 1, a prior belief strength × agreement interaction was found, indicating that the agreement effect was larger for general, evaluative judgments than for more specific, analytical judgments. A Bayesian analysis indicates that the pattern of agreement effects found in these studies may be normatively defensible, although arguments against implementing a Bayesian approach to scientific judgment are also advanced.
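The claim that agreement effects can be normatively defensible can be illustrated with a small Bayesian sketch: an observer with a strong prior in hypothesis H, uncertain whether a study is of high or low quality, rationally infers higher quality from a study that agrees with the prior. All probabilities below are hypothetical, chosen only to make the mechanism concrete:

```python
# Minimal sketch (hypothetical numbers, not from the paper) of why a Bayesian
# observer can rationally judge agreeing evidence to be of higher quality.

def posterior_high_quality(report_agrees, p_h=0.9, q_hi=0.9, q_lo=0.6, p_hi=0.5):
    """Posterior probability that a study is high quality, given whether its
    result agrees with the observer's prior belief in hypothesis H.

    p_h  : prior belief that H is true
    q_hi : probability a high-quality study reports the truth
    q_lo : probability a low-quality study reports the truth
    p_hi : prior probability that the study is high quality
    """
    def p_report_h(q):
        # Chance the study reports "H is true": truth-tracking when H holds,
        # error when it does not.
        return p_h * q + (1 - p_h) * (1 - q)

    like_hi = p_report_h(q_hi) if report_agrees else 1 - p_report_h(q_hi)
    like_lo = p_report_h(q_lo) if report_agrees else 1 - p_report_h(q_lo)
    return like_hi * p_hi / (like_hi * p_hi + like_lo * (1 - p_hi))

agree = posterior_high_quality(True)
disagree = posterior_high_quality(False)
print(f"P(high quality | agrees with prior)    = {agree:.3f}")
print(f"P(high quality | disagrees with prior) = {disagree:.3f}")
```

Because a high-quality study is more likely than a low-quality one to report the result the observer's strong prior already favors, an agreeing report raises the posterior on high quality, so judging agreeing evidence as better quality need not be irrational.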
Supplement to Deppe et al.
  • J. Baron
What dilemma? Moral evaluation shapes factual belief
  • B. S. Liu
  • P. H. Ditto
The Cambridge handbook of thinking and reasoning
  • D. Kahneman
  • S. Frederick
Motivated learning or motivated responding? Using incentives to distinguish between two processes
  • K. Khanna
  • G. Sood
Scientists and motivated reasoning
  • J. Curry