Article

They Might Be a Liar But They’re My Liar: Source Evaluation and the Prevalence of Misinformation


Abstract

Even if people acknowledge that misinformation is incorrect after a correction has been presented, their feelings towards the source of the misinformation can remain unchanged. The current study investigated whether participants reduce their support for Republican and Democratic politicians when the prevalence of misinformation disseminated by the politicians appears to be high in comparison to the prevalence of their factual statements. We presented U.S. participants either with (1) equal numbers of false and factual statements from political candidates or (2) disproportionately more false than factual statements. Participants received fact-checks as to whether items were true or false, then rerated their belief in the statements as well as their feelings towards the candidate. Results indicated that when corrected misinformation was presented alongside equal presentations of affirmed factual statements, participants reduced their belief in the misinformation but did not reduce their feelings towards the politician. However, if there was considerably more misinformation retracted than factual statements affirmed, feelings towards both Republican and Democratic figures were reduced—although the observed effect size was extremely small.


... Previous research has examined the effects of political misinformation and fact-checking, including differences in how beliefs and political attitudes change depending on whether the misinformation is presented by a politician who supports or opposes a person's viewpoint [6][7][8]. However, this research has focused primarily on political divisions along party lines (e.g. ...
... When it comes to political misinformation, an additional concern arises from the fact that people often have strong political attachments, and thus motivated reasoning may make them resistant to belief updating when misinformation from politicians they support is corrected [13][14][15]. Fortunately, however, research into political misinformation has generally found that corrections and fact-checks are effective at decreasing belief in misinformation and increasing belief in accurate statements, at least when these beliefs are measured directly [6][7][8][16][17][18]. Swire et al. [7] and Nyhan et al. [17] both demonstrated that supporters and non-supporters of Donald Trump adjusted their level of belief in Trump statements in response to fact-checks. ...
... Swire et al. [7] and Nyhan et al. [17] both demonstrated that supporters and non-supporters of Donald Trump adjusted their level of belief in Trump statements in response to fact-checks. In a follow-up study, Swire-Thompson et al. [8] found that this replicated across the political aisle, with supporters and non-supporters of Donald Trump and Bernie Sanders both adjusting their belief in statements from these politicians in response to fact-checks. Aird et al. [6] found the same pattern in an Australian context. ...
Article
Full-text available
In recent years, the UK has become divided along two key dimensions: party affiliation and Brexit position. We explored how division along these two dimensions interacts with the correction of political misinformation. Participants saw accurate and inaccurate statements (either balanced or mostly inaccurate) from two politicians from opposing parties but the same Brexit position (Experiment 1), or the same party but opposing Brexit positions (Experiment 2). Replicating previous work, fact-checking statements led participants to update their beliefs, increasing belief after fact affirmations and decreasing belief for corrected misinformation, even for politically aligned material. After receiving fact-checks participants had reduced voting intentions and more negative feelings towards party-aligned politicians (likely due to low baseline support for opposing party politicians). For Brexit alignment, the opposite was found: participants reduced their voting intentions and feelings for opposing (but not aligned) politicians following the fact-checks. These changes occurred regardless of the proportion of inaccurate statements, potentially indicating participants expect politicians to be accurate more than half the time. Finally, although we found division based on both party and Brexit alignment, effects were much stronger for party alignment, highlighting that even though new divisions have emerged in UK politics, the old divides remain dominant.
... Much attention has focused on the role of social media as a vector of misinformation [3]. The role of political leaders has attracted less research attention, even though leaders demonstrably influence media coverage [4] and public opinion [5], and even though politicians who "speak their mind" are perceived by segments of the public as authentic and honest even if their statements are unsupported by evidence or facts [6][7][8]. Here we show that in the last decade, politicians' concept of truth has undergone a distinct shift, with authentic but evidence-free belief-speaking becoming more prominent and more differentiated from evidence-based truth seeking. ...
... A troubling aspect of misinformation is that it lingers in memory even if people acknowledge, believe, and try to adhere to a correction [12]. That is, even though people may adjust their factual beliefs in response to corrections [e.g., 13], their political behaviors and attitudes may be largely unaffected [e.g., 6,7]. Perhaps most concerningly, in some circumstances people may even come to value overt dishonesty as a signal of authenticity [8]. ...
... The disconnect between accuracy and politicians' attractiveness to voters has also been established in behavioral experiments involving the American public [6,7]. ...
Preprint
The spread of online misinformation is increasingly perceived as a major problem for societal cohesion and democracy. Much attention has focused on the role of social media as a vector of misinformation. The role of political leaders has attracted less research attention, even though leaders demonstrably influence media coverage and public opinion, and even though politicians who "speak their mind" are perceived by segments of the public as authentic and honest even if their statements are unsupported by evidence or facts. Here we show that in the last decade, politicians' concept of truth has undergone a distinct shift, with authentic but evidence-free belief-speaking becoming more prominent and more differentiated from evidence-based truth seeking. We analyze communications by members of the U.S. Congress on Twitter between 2011 and 2022 and show that political speech has fractured into two distinct components related to belief-speaking and evidence-based truth-seeking, respectively, and that belief-speaking is related to the spreading of untrustworthy information. We show that in tweets by conservative members of Congress, an increase in belief-speaking of 10% is associated with a decrease of 6.8 points of quality (using the NewsGuard scoring system) in the sources shared in a tweet. In addition, we find that an increase of belief-speaking language by 10% in the shared articles themselves is associated with a drop in NewsGuard score of 4.3 points for members of both parties. By contrast, an increase in truth-seeking language is associated with a slight increase in the quality of sources. The results support the hypothesis that the current flood of misinformation in political discourse is in part driven by a new understanding of truth and honesty that has replaced reliance on evidence with the invocation of subjective belief.
... Research has shown that people are still misinformed about political issues, even after receiving corrective information related to these misperceptions (Thorson, 2016). Corrections against real-world political lies also fail to reduce voting intentions and support for lying politicians (Aird et al., 2018; Swire-Thompson et al., 2020). ...
... Interestingly, the correction made those supporting Palin and scoring high in political knowledge believe the misperception even more strongly, an effect the authors called backfire. While ample research revealed similar results of a reduced effect of corrections contradicting people's prior beliefs (Ecker & Ang, 2019; Lewandowsky et al., 2005; Weeks, 2015), more recent attempts to replicate the backfire effect have failed (Aird et al., 2018; Clayton et al., 2020; Swire-Thompson et al., 2020; Wood & Porter, 2019). Although there seems to be a consensus that backfire effects have been exaggerated in the literature and the press, there does appear to be a bias for citizens to accept misinformation corrections if these fit their own ideology (Nyhan, 2021). ...
Article
In the last few years, especially after the Brexit referendum and the 2016 U.S. elections, there has been a surge in academic interest in misinformation and disinformation. Social, cognitive, and political scientists' work on these phenomena has focused on two main aspects:
• Individuals' (and by extension societies') vulnerability to misinformation;
• Factors and interventions that can increase individuals' (and societies') resistance to misinformation.
In this article, we offer a critical review of the psychological research pertaining to these two aspects. Drawing on this review, we highlight an emerging tension in the relevant literature. Indeed, the current state of the art of the political misinformation literature reflects the combined operation of two opposing psychological constructs: excess gullibility on the one hand and excess vigilance on the other. We argue that this conceptualization is important both in advancing theories of individuals' and societies' vulnerability to misinformation and in designing prospective research programs. We conclude by proposing what, in our view, are the most promising avenues for future research in the field.
... While the fact-checking literature also focuses primarily on the alteration of beliefs and knowledge, beliefs are not necessarily indicative of political preferences or social media behavior. Nyhan et al. (2019) and Swire-Thompson et al. (2020), for example, both find that while fact-checks of politicians' false claims successfully reduce beliefs in the claims, they do not impact support for the politician. With respect to intended sharing behavior, Bor et al. (2020) and Pennycook et al. (2020) found that identification of fake news may not prevent sharing on Twitter. ...
... Yet, cultural differences appear to matter. Swire-Thompson et al. (2020) replicated a study on American and Australian voters and found significantly different effect sizes, indicating that cultural context may impact the efficacy of social media interventions. Additionally, the majority of studies recruited participants from universities (67 studies) or used Amazon's Mechanical Turk (72 studies), a crowd-sourcing platform that can also be used to administer research surveys. ...
Article
Full-text available
Despite ongoing discussion of the need for increased regulation and oversight of social media, as well as debate over the extent to which the platforms themselves should be responsible for containing misinformation, there is little consensus on which interventions work to address the problem of influence operations and disinformation campaigns. To provide policymakers and scholars a baseline on academic evidence about the efficacy of countermeasures, the Empirical Studies of Conflict Project conducted a systematic review of research articles that aimed to estimate the impact of interventions that could reduce the impact of misinformation.
... Indirect support for this argument comes from the growing evidence on counterproductive effects of getting a fact straight. Research finds that even after successful belief updating after reading a fact-check, partisans do not adjust their evaluations of the politician under scrutiny according to the conclusions of the fact-check (Nyhan et al., 2019; Swire-Thompson et al., 2019; Thorson, 2016); those with less interest in the issue under scrutiny also decrease their epistemic political efficacy (Pingree et al., 2014). In sum, it is possible that individuals continue to engage in biased reasoning processes and form identity-congruent attitudes, even when they report desirable belief updates. ...
... Further, we extend the implications of what Thorson (2016) calls "belief echoes"—the phenomenon of negative political information continuing to shape related attitudes after the information has been discredited. While previous studies such as Nyhan et al. (2019) and Swire-Thompson et al. (2019) found instances of "belief echoes" by showing that citizens retain their evaluations of the politician who made a claim of fact even when they accept that the claim is false, our work indicates that "belief echoes" can extend from ineffective to counterproductive in the context of media perceptions: people see the news media that did a formal fact-check as more biased, even when they side with the conclusions of the fact-check. ...
Article
Concerns over misinformation have inspired research on how people are influenced by, and form perceptions of, media messages that aim to correct false claims. We juxtapose two seemingly incongruent expectations from the theories of motivated reasoning and hostile media perceptions, uncovering the unique effects of presenting a political news story with corrective information as a “fact-check.” We test our theoretical expectations through two online survey experiments. We find that compared to a conventional style of news reporting, a news story presented in a fact-checking genre significantly increases how accurately people are able to evaluate factual information, but it also comes with an important counterproductive effect: people will be more likely to perceive the journalist and the story as biased. We discuss the implications of our findings in theorizing the persuasion effects of corrective information in the contemporary media environment.
... In the second validation step, we applied the dictionaries to our tweet corpus and calculated the semantic similarity D_b and D_f between the article and the belief-speaking and fact-speaking dictionaries. ... Even though people may adjust their factual beliefs in response to corrections 9, their political behaviours and attitudes may be largely unaffected 10,11. Second, perhaps most concerningly, in some circumstances people may even come to value overt dishonesty as a signal of 'authenticity' 12. ...
Article
Full-text available
The spread of online misinformation on social media is increasingly perceived as a problem for societal cohesion and democracy. The role of political leaders in this process has attracted less research attention, even though politicians who ‘speak their mind’ are perceived by segments of the public as authentic and honest even if their statements are unsupported by evidence. By analysing communications by members of the US Congress on Twitter between 2011 and 2022, we show that politicians’ conception of honesty has undergone a distinct shift, with authentic belief speaking that may be decoupled from evidence becoming more prominent and more differentiated from explicitly evidence-based fact speaking. We show that for Republicans—but not Democrats—an increase in belief speaking of 10% is associated with a decrease of 12.8 points of quality (NewsGuard scoring system) in the sources shared in a tweet. In contrast, an increase in fact-speaking language is associated with an increase in quality of sources for both parties. Our study is observational and cannot support causal inferences. However, our results are consistent with the hypothesis that the current dissemination of misinformation in political discourse is linked to an alternative understanding of truth and honesty that emphasizes invocation of subjective belief at the expense of reliance on evidence.
... As a result, reducing false-news sharing on social media is considered by many experts to be a priority. One approach relies on specialized "fact-checkers," who check the veracity of specific pieces of content that are posted online, find false content, and label it with suitable warnings. Despite initial concerns that corrections may backfire (Nyhan and Reifler 2010), there is now increasing evidence that tagging inaccurate news with warnings tends to increase people's ability to discern truth from falsehood (Pennycook, Cannon, and Rand 2018; Wood and Porter 2019; Pennycook, McPhetres, et al. 2020; Swire-Thompson et al. 2020; Yaqub et al. 2020). Additionally, the effect of fact-checkers can be reinforced by social media platforms, which can use ranking algorithms to reduce the likelihood of exposing users to content labeled as false by fact-checkers. ...
Article
Full-text available
Professional fact-checking of individual news headlines is an effective way to fight misinformation, but it is not easily scalable, because it cannot keep pace with the massive speed at which news content gets posted on social media. Here we provide evidence for the effectiveness of ratings of news sources, instead of individual news articles. In a large pre-registered experiment with quota-sampled Americans, we find that participants are less likely to share false headlines (and more discerning of true versus false headlines) when 1-to-5 star trustworthiness ratings are applied to news headlines. This is true both when the ratings are generated by fact-checkers and when they are generated by laypeople (although the effect is stronger using fact-checker ratings). We also observe a positive spillover effect: sharing discernment also increases for headlines whose source was not rated, likely because the presence of ratings on some headlines prompts users to reflect on source quality more generally. This study suggests that displaying information regarding the trustworthiness of news sources provides a scalable approach for reducing the spread of low-quality information.
... It is well-known that changes to beliefs and attitudes tend not to translate into equivalent changes in behavioural intentions and behaviours [66,67]. In fact, other research has found that misinformation corrections tend to have a stronger impact on the targeted misconceptions than on related behaviours or behavioural intentions, including vaccination intentions [e.g., 2,45,68,69]. In the present study, it is possible that the observed effects are true small effects that would have been statistically significant with greater power and potentially meaningful at scale. ...
Article
Full-text available
Individuals often continue to rely on misinformation in their reasoning and decision making even after it has been corrected. This is known as the continued influence effect, and one of its presumed drivers is misinformation familiarity. As continued influence can promote misguided or unsafe behaviours, it is important to find ways to minimize the effect by designing more effective corrections. It has been argued that correction effectiveness is reduced if the correction repeats the to-be-debunked misinformation, thereby boosting its familiarity. Some have even suggested that this familiarity boost may cause a correction to inadvertently increase subsequent misinformation reliance; a phenomenon termed the familiarity backfire effect. A study by Pluviano et al. (2017) found evidence for this phenomenon using vaccine-related stimuli. The authors found that repeating vaccine "myths" and contrasting them with corresponding facts backfired relative to a control condition, ironically increasing false vaccine beliefs. The present study sought to replicate and extend this study. We included four conditions from the original Pluviano et al. study: a myths-vs.-facts condition, a visual infographic, a fear appeal, and a control condition. The present study also added a "myths-only" condition, which simply repeated false claims and labelled them as false; theoretically, this condition should be most likely to produce familiarity backfire. Participants received vaccine-myth corrections and were tested immediately post-correction, and again after a seven-day delay. We found that the myths-vs.-facts condition reduced vaccine misconceptions. None of the conditions increased vaccine misconceptions relative to control at either timepoint, or relative to a pre-intervention baseline; thus, no backfire effects were observed. This failure to replicate adds to the mounting evidence against familiarity backfire effects and has implications for vaccination communications and the design of debunking interventions.
... Hacktivists in our sample echo the sentiment regarding social media users' susceptibility to false information found in the scientific literature: laziness to check facts [P2] [89], resistance to authoritative suggestions [P7] [57], allegiance [P13] [120], and simple ignorance [P16] [17]. As people who resort to action, hacktivists do feel an obligation to propose ways of addressing this susceptibility. ...
Preprint
Full-text available
In this study, we interviewed 22 prominent hacktivists to learn their take on the increased proliferation of misinformation on social media. We found that none of them welcomes the nefarious appropriation of trolling and memes for the purpose of political (counter)argumentation and dissemination of propaganda. True to the original hacker ethos, misinformation is seen as a threat to the democratic vision of the Internet, and as such, it must be confronted head-on with tried hacktivist methods like deplatforming the "misinformers" and doxing or leaking data about their funding and recruitment. The majority of the hacktivists also recommended interventions for raising misinformation literacy in addition to targeted hacking campaigns. We discuss the implications of these findings relative to the emergent recasting of hacktivism in defense of a constructive and factual social media discourse.
... The efficacy of fact-checking as an intervention strategy could also be mitigated by people's desire to keep their beliefs and continue trusting the sources of inaccurate information even after learning of its incorrectness (Swire-Thompson et al., 2019). Given the consistent political leanings among the population in Hong Kong, regardless of frequent exposure to misinformation and the increasing availability of fact-checking stories, the influence of fact-checking in the middle of political upheaval might be limited. ...
Chapter
While the field of professional fact-checking has grown steadily in Hong Kong as a countermeasure to tackle misinformation, concerns over politicisation and misappropriation of this practice have also become salient. Investigation of widely shared information in public, particularly political statements, could be a delicate affair in the Special Administrative Region of China. Despite the challenges, many stakeholders—from media organisations to community groups to governmental authorities—engage in fact-checking constantly while educators have also incorporated it into media literacy programmes. In this chapter, the author explores the historical development of fact-checking amid political upheavals and discusses its impact and implications on the news industry and educational sectors in Hong Kong.
... Warning labels, for example, were shown to reduce users' intentions to share false news stories on Facebook [55]. However, research has also shown that even if people see and understand a correction of misinformation, their feelings towards the source may remain unchanged [93]. Furthermore, Dias et al. found that showing the source of a news article does not affect whether users perceive a headline as accurate or whether they would consider sharing it [23]. ...
Preprint
Full-text available
During the COVID-19 pandemic, the World Health Organization provided a checklist to help people distinguish between accurate and misinformation. In controlled experiments in the United States and Germany, we investigated the utility of this ordered checklist and designed an interactive version to lower the cost of acting on checklist items. Across interventions, we observe non-trivial differences in participants' performance in distinguishing accurate and misinformation between the two countries and discuss some possible reasons that may predict the future helpfulness of the checklist in different environments. The checklist item that provides source labels was most frequently followed and was considered most helpful. Based on our empirical findings, we recommend practitioners focus on providing source labels rather than interventions that support readers performing their own fact-checks, even though this recommendation may be influenced by the WHO's chosen order. We discuss the complexity of providing such source labels and provide design recommendations.
... As populist discourses focus on conflict, dramatization and, more often than not, the circumvention of expert knowledge, populist elites are eager to embrace the post-truth repertoire. What may still be surprising, however, is that they seem to be getting away with it, even when caught in the act (Hameleers, 2020; Nyhan et al., 2020; Swire-Thompson et al., 2020). ...
Chapter
Criticized by some, due to the naïve suggestion that there previously was an era of “truth”, the term “post-truth” is a periodizing concept that sees the foundations of liberal democracies under siege not only by the increased prevalence of mis- and disinformation but also by the increased acceptance of it. First, this entry provides a diagnosis of the so-called post-truth era. Second, it highlights three synergetic agents and linked developments that are argued to be at the root of post-truth: 1) social media, 2) the crisis of journalism, 3) the supply side of politics (especially, the professionalization of political communication and the rise of populism). Finally, the entry critically assesses currently discussed antidotes to post-truth.
... The beneficial effects of debunking can last several weeks 92,100,179, although the effects can wear off more quickly 145. There is also evidence that corrections that reduce misinformation belief can have downstream effects on behaviours or intentions 94,95,180,181, such as a person's inclination to share a social media post or their voting intentions, but not always 91,96,182. ...
Poster
Full-text available
Critical thinking for sustainable development therefore focuses on the soft skills of positive values and attitudes while at the same time embracing social, economic, political, and environmental transformation for the good of everyone irrespective of age, gender, ethnicity, or status in society. Green marketing is developing and selling environmentally friendly goods or services. It helps improve credibility, enter a new audience segment, and stand out among competitors as more and more people become environmentally conscious. Examples include using eco-friendly paper and inks for print marketing materials, skipping printed materials altogether and opting for electronic marketing, having a recycling program and responsible waste-disposal practices, and using eco-friendly product packaging. Critical thinking helps people better understand themselves, their motivations, and their goals. When you can deduce information to find the most important parts and apply those to your life, you can change your situation and promote personal growth and overall happiness. The reason why innovation benefits from critical thinking is simple: critical thinking is used when judgment is needed to produce a desired set of valued outcomes. That is why the majority of innovation outcomes reflect incremental improvements built on a foundation of critically thought-out solutions. The results indicate that there are four factors that effectively influence fulfillment of green marketing: green labeling, compatibility, product value, and green advertising. A green mission statement becomes the foundation of a company's sustainability efforts. It provides the organization and its stakeholders with an understanding of what's most important and what your company can do to protect the natural world and be more socially responsible.
... This might indicate that politicians can use these accusations without fearing a backlash in how the electorate perceives them. Similarly, previous research shows that participants do not change their perceptions of a politician who disseminated misinformation even when they acknowledge that said information is indeed incorrect (e.g., Swire-Thompson et al., 2020). While these studies investigated the effects of politicians' use of disinformation, we studied the effects of politicians' use of accusations of disinformation. ...
Article
Full-text available
Populist politicians increasingly accuse opposing media of spreading disinformation or “fake news.” However, empirical research on the effects of these accusations is scarce. This survey experiment (N = 1,330) shows that disinformation accusations reduce audience members’ trust in the accused news outlet and perceived accuracy of the news message, while trust in the accusing politician is largely unaffected. However, only individuals with strong populist attitudes generalize disinformation accusations to the media as an institution and reduce their general media trust. The phrase “fake news” does not amplify any of these effects. These findings suggest that politicians can undermine the credibility of journalism without much repercussion—a mechanism that might also threaten other authoritative information sources in democracies such as scientists and health authorities.
... Furthermore, correcting factual knowledge may not lead to an improvement in the more fundamental underlying beliefs and belief systems, which are likely of more importance for determining behavior. For example, even when correcting a politician's false claims successfully changes the factual views of their supporters, this may not diminish the supporters' level of support for the politician [49,50]. Additionally, fact-checking may be limited in its ability to slow the spread of misinformation on social media: prior research suggests that although reshares of rumors which have been commented on with a link to fact-checking site Snopes.com ...
... In addition to citizens, officials and representatives could make decisions on the grounds of pseudo-information. Research has explored the role and effects of politicians on pseudo-information spread (e.g., Farhall, 2019;Swire-Thompson et al., 2020). Although elected representatives are assisted by teams of advisors and legislation is a long process, they are not immune. ...
Article
Today’s public sphere is largely shaped by a dynamic digital public space in which lay people form a commodified marketplace of ideas. Individuals trade, create, and generate information, as well as consume others’ content, whereby information as a public-space commodity splits between this type of content and that provided by the media and governmental institutions. This paper first explains how and why our current digital media context opens the door to pseudo-information (i.e., misinformation, disinformation, etc.). Furthermore, the paper introduces several concrete empirical efforts in the literature within a unique volume that attempt to provide specific and pragmatic steps to tackle pseudo-information, reducing the potential harm that today’s digital environment may elicit for established democracies by fueling an ill-informed society.
... The negative relationship between informativeness and believability is also consistent with recent research on misinformation (e.g., fake news and conspiracy theories). Even if low in believability, misinformation can be perceived to be 'informative if true', and therefore has the potential to strongly sway opinion [48,49] and be widely shared online [50, see also 51]. ...
Article
Full-text available
To investigate impression formation, researchers tend to rely on statements that describe a person’s behavior (e.g., “Alex ridicules people behind their backs”). These statements are presented to participants who then rate their impressions of the person. However, a corpus of behavior statements is costly to generate, and pre-existing corpora may be outdated and might not measure the dimension(s) of interest. The present study makes available a normed corpus of 160 contemporary behavior statements that were rated on 4 dimensions relevant to impression formation: morality, competence, informativeness, and believability. In addition, we show that the different dimensions are non-independent, exhibiting a range of linear and non-linear relationships, which may present a problem for past research. However, researchers interested in impression formation can control for these relationships (e.g., statistically) using the present corpus of behavior statements.
... Also, if Trump and his allies honestly believed that he won the 2020 election, despite the evidence that Biden won and that there was no fraud involved, it would nonetheless constitute an example of knowledge resistance among political elites. At the same time, it is important to note that initial studies suggest that the use of mis- and disinformation by political actors such as Trump does not have negative backlash effects on how voters perceive these actors (Nyhan et al., 2019; Swire-Thompson et al., 2020). A second supply chain is more clandestine. ...
... Warning labels, for example, were shown to reduce users' intentions to share false news stories on Facebook [55]. However, research has also shown that even if people see and understand a correction of misinformation, their feelings towards a source may remain unchanged [93]. Furthermore, Dias et al. found that showing the source of a news article does not affect whether users perceive a headline as accurate or whether they would consider sharing it [23]. ...
Conference Paper
Full-text available
During the COVID-19 pandemic, the World Health Organization provided a checklist to help people distinguish between accurate and misinformation. In controlled experiments in the United States and Germany, we investigated the utility of this ordered checklist and designed an interactive version to lower the cost of acting on checklist items. Across interventions, we observe non-trivial differences in participants' performance in distinguishing accurate and misinformation between the two countries and discuss some possible reasons that may predict the future helpfulness of the checklist in different environments. The checklist item that provides source labels was most frequently followed and was considered most helpful. Based on our empirical findings, we recommend practitioners focus on providing source labels rather than interventions that support readers performing their own fact-checks, even though this recommendation may be influenced by the WHO's chosen order. We discuss the complexity of providing such source labels and provide design recommendations.
... person's attitudes and the worldview) are also being taken into account in explaining the CIE. It is proposed, for example, that motivated reasoning [43] is responsible for reliance on misinformation when the misinformation relates to political views or prejudices [9,11,13,20,44]; however, see [45-48]. Furthermore, more skeptical people tend to reject misinformation and accept retractions more readily [49]. ...
Article
Full-text available
The continued influence effect of misinformation (CIE) is a phenomenon in which certain information, although retracted and corrected, still has an impact on event reporting, reasoning, inference, and decisions. The main goal of this paper is to investigate to what extent this effect can be reduced using an inoculation procedure and how it can be moderated by the reliability of the corrections' sources. The results show that the reliability of the corrections' sources did not affect their processing when participants were not inoculated. However, inoculated participants relied on misinformation less when the correction came from a highly credible source. For this source condition, inoculation also significantly increased belief in the retraction and decreased belief in the misinformation. Contrary to previous reports, belief in misinformation, rather than belief in the retraction, predicted reliance on misinformation. These findings are of great practical importance, as they identify boundary conditions under which inoculation can reduce the continued influence of misinformation, and of theoretical importance, as they provide insight into the mechanisms behind the CIE. The results were interpreted in terms of existing CIE theories as well as within the remembering framework, which describes the conversion of memory traces into behavioral manifestations of memory.
... A correction to misinformation is most likely to exert the greatest effect when it is delivered immediately, aligned with one's political agenda, and attributed to the same source that delivered the misinformation (Walter & Tukachinsky, 2020). Even if individuals update their beliefs after a correction or a warning about fake news, they may refuse to subsequently update their attitudes about the issue or the source spreading the misinformation (Nyhan et al., 2020; Porter et al., 2019; Swire-Thompson et al., 2019). Rather than reactive methods that attempt to dispel misinformation after people have already been exposed to it, proactive methods that seek to prevent exposure to misinformation are more likely to be effective in the long term (see Greenspan & Loftus, 2021, for a review). ...
Preprint
Delayed allegations of sexual misconduct have garnered much media attention, especially when allegations involve public figures such as politicians. In the current chapter, we discuss two main tenets related to the politics of sexual misconduct allegations. First, we argue that, although individuals may wait years or decades before reporting valid experiences of sexual misconduct, delayed reporting is not without mnemonic consequences. Memory undergoes deterioration and distortion over time, so even in valid cases, the fading and reconstruction of event details are highly likely to take place. Further, as time passes, one’s susceptibility to misinformation and false memory production increase alongside natural processes of memory deterioration. We offer a framework to evaluate delayed allegations of sexual misconduct where we outline several event characteristics (e.g., repetition, exposure to post-event information) that contribute to memory reliability. We use two high-profile allegations of sexual misconduct involving United States Supreme Court nominees to illustrate these processes. In the second half of the paper, we discuss the influence of various socio-political factors (e.g., political orientation, social media, social movements) on adults’ perceptions of sexual misconduct allegations. We conclude by highlighting the need to balance media exposure and scientific scrutiny to ensure that investigations of sexual misconduct in political domains are fair and just. Keywords: memory; sexual misconduct; politics; misinformation
... These results fit with the body of literature suggesting that, in many contexts, self-interest has limited effects on political decision-making (Sears and Funk 1991), and can be overwhelmed by group-interest, party cues, or system-serving beliefs. It also accords with prior findings showing that Trump supporters can be resistant to changing their opinion about Trump in response to negative information about him, such as information about his false claims (Swire-Thompson et al. 2019). ...
Article
Full-text available
People presumably strive to maximize their own benefit whenever possible, so it is puzzling when they vote for leaders who may not have their best interest at heart. We tested whether support for a political leader is diminished when supporters learn they are financially disadvantaged by the leader’s policies. In a two-stage experiment (Time 1 n = 601, Time 2 n = 343) with pre-registered hypotheses, Trump voters predicted their expected tax refund (or payment), and then reported their tax outcome immediately after the filing deadline. Afterwards, we confronted half of the participants with the discrepancy between their actual and predicted tax outcome. Having lower-than-expected tax outcomes was not associated with reduced support for Trump, either on its own or in combination with being reminded of this outcome. However, it led participants who were dissatisfied with their tax outcome to downgrade the importance of lowering taxes, possibly in an effort to reduce dissonance and justify continued support for Trump. Subjective tax outcome satisfaction did predict Trump support, but was dwarfed in magnitude by other variables such as system justification and political orientation. Thus, people may find ways to rationalize information that goes against their self-interest into their partisan worldview.
... In the pretest, participants were presented with the 42 claims: 21 facts and 21 misinformation items. Equal numbers of misinformation items and facts were given so that participants would not be biased toward true or false responses in later veracity judgements (Swire-Thompson, Ecker, et al., 2020), though for the purpose of this study we were only interested in the misinformation items. At the commencement of the study, only participants in the correction group received instructions indicating that they would be told whether each statement was true or false. ...
Article
The backfire effect is when a correction increases belief in the very misconception it is attempting to correct, and it is often used as a reason not to correct misinformation. The current study aimed to test whether correcting misinformation increases belief more than a no-correction control. Furthermore, we aimed to examine whether item-level differences in backfire rates were associated with test-retest reliability or theoretically meaningful factors. These factors included worldview-related attributes, including perceived importance and strength of precorrection belief, and familiarity-related attributes, including perceived novelty and the illusory truth effect. In 2 nearly identical experiments, we conducted a longitudinal pre/post design with N = 388 and 532 participants. Participants rated 21 misinformation items and were assigned to a correction condition or test-retest control. We found that no items backfired more in the correction condition compared to test-retest control or initial belief ratings. Item backfire rates were strongly negatively correlated with item reliability (ρ = -.61/-.73) and did not correlate with worldview-related attributes. Familiarity-related attributes were significantly correlated with backfire rate, though they did not consistently account for unique variance beyond reliability. While there have been previous papers highlighting the nonreplicable nature of backfire effects, the current findings provide a potential mechanism for this poor replicability. It is crucial for future research into backfire effects to use reliable measures, report the reliability of their measures, and take reliability into account in analyses. Furthermore, fact-checkers and communicators should not avoid giving corrective information due to backfire concerns.
... The beneficial effects of debunking can last several weeks 92,100,179, although the effects can wear off more quickly 145. There is also evidence that corrections that reduce misinformation belief can have downstream effects on behaviours or intentions 94,95,180,181, such as a person's inclination to share a social media post or their voting intentions, but not always 91,96,182. ...
Article
Misinformation has been identified as a major contributor to various contentious contemporary events ranging from elections and referenda to the response to the COVID-19 pandemic. Not only can belief in misinformation lead to poor judgements and decision-making, it also exerts a lingering influence on people’s reasoning after it has been corrected — an effect known as the continued influence effect. In this Review, we describe the cognitive, social and affective factors that lead people to form or endorse misinformed views, and the psychological barriers to knowledge revision after misinformation has been corrected, including theories of continued influence. We discuss the effectiveness of both pre-emptive (‘prebunking’) and reactive (‘debunking’) interventions to reduce the effects of misinformation, as well as implications for information consumers and practitioners in various areas including journalism, public health, policymaking and education.
... Brydges & Ecker, 2018; Ecker et al., 2020a; 2020b; 2020c; Rich et al., 2017; Swire et al., 2017a) or political topics (e.g., Aird et al., 2018; Swire et al., 2017b; Swire-Thompson et al., 2020). This paradigm examines direct belief in misinformation to a greater extent than inferences drawn from it. ...
Thesis
Full-text available
The continued influence effect of misinformation (CIE) is a phenomenon in which certain information, although retracted and corrected, still influences accounts of an event, reasoning, inference, and decisions. This thesis presents an experiment that aimed to investigate to what extent this effect can be reduced using an inoculation procedure, which "vaccinates" against influence, including misinformation, and how the effect may be moderated by the credibility of corrections. Most of the hypotheses were confirmed. The results showed that the credibility of the corrections' sources did not affect their processing when no inoculation took place; however, among inoculated participants, reliance on misinformation was significantly reduced when the correction came from a highly credible source. For this source condition, inoculation also significantly increased belief in the retraction and decreased belief in the misinformation. Contrary to previous reports, it also turned out that belief in misinformation, rather than belief in the retraction, predicts reliance on misinformation. These findings are of great practical importance, as they reveal boundary conditions of a widely applicable technique for reducing the influence of misinformation, and of theoretical importance, as they provide insight into the mechanisms responsible for the CIE. The results were interpreted both in relation to existing CIE theories and within the remembering framework.
... Thus, much research effort has focused on the conditions that promote effective updating of the situation model (for reviews see Chan et al., 2017; Lewandowsky et al., 2012; Seifert, 2002; Walter & Tukachinsky, 2020). A multitude of factors have been explored, including the timing of the correction (e.g., Cook et al., 2017; Ithisuphalap et al., 2020), prior encounters with the misinformation (e.g., Ecker et al., 2017), prior beliefs (e.g., Ecker & Ang, 2019; Swire et al., 2017a, b; Swire-Thompson et al., 2020), and individual differences (e.g., Chang et al., 2019). Here, we focus on the content of the correction and the importance of the misinformation to the unfolding narrative. ...
Article
Full-text available
Background: The term “continued influence effect” (CIE) refers to the phenomenon that discredited and obsolete information continues to affect behavior and beliefs. The practical relevance of this work is particularly apparent as we confront fake news everyday. Thus, an important question becomes, how can we mitigate the continued influence of misinformation? Decades of research have identified several factors that contribute to the CIE reduction, but few have reported successful elimination. Across three studies, we evaluated the relative contribution of three factors (i.e., targeting the misinformation, providing an alternative explanation, and relative importance of the misinformation content) to the reduction of the CIE. Results: Across three studies and two different CIE measures, we found that alternative provision consistently resulted in CIE reduction. Furthermore, under certain conditions, the combination of alternative inclusion and direct targeting of misinformation in the correction statement resulted in successful elimination of the CIE, such that individuals who encountered that type of correction behaved similarly to baseline participants who never encountered the (mis)information. In contrast, under one CIE measure, participants who received correction statements that failed to include those elements referenced the (mis)information as frequently as baseline participants who never encountered a correction. Finally, we delineated several component processes involved in misinformation outdating and found that the extent of outdating success varied as a function of the causality of misinformation. Conclusions: The damaging effects of fake news are undeniable, and the negative consequences are exacerbated in the digital age. Our results contribute to our understanding of how fake news persists and how we may begin to mitigate their effects.
... These data suggest that when people are asked to engage in cognitive processing to interpret data in the laboratory, as opposed to expressing their pre-existing attitudes in a survey, conservatives and liberals engage seemingly identical cognitive processes. Similar nearly symmetrical processing of corrections to misinformation has been reported by Swire-Thompson, Ecker, Lewandowsky, & Berinsky (2020) with Trump voters and Sanders voters (Tables 6 and S6). There are, however, exceptions to this symmetry. ...
Article
Some scientific propositions are so well established that they are no longer debated by the relevant scientific community, such as the fact that greenhouse gas emissions are altering the Earth's climate. In many cases, such scientifically settled issues are nonetheless rejected by segments of the public. U.S. surveys have repeatedly shown that the rejection of scientific evidence across a broad range of domains is preferentially associated with rightwing or libertarian worldviews, with little evidence for rejection of scientific evidence by people on the political left. We report two preregistered representative surveys (each N > 1000) that (1) sought to explain this apparent political asymmetry and (2) continued the search for the rejection of scientific evidence on the political left. To address the first question, we focused on Merton's classic analysis of the norms of science, such as communism and universalism, which continue to be internalized by the scientific community but which are not readily reconciled with conservative values. Both studies show that people's political worldviews are associated with their attitudes towards those scientific norms, and that those attitudes predict people's acceptance of vaccinations and climate science. The norms of science may thus be in latent conflict with the worldviews of a substantial segment of the public. To address the second question, we examined people's views on the role of inheritance in determining people's intelligence, given that the belief in the power of learning and environmental factors to shape human development is a guiding principle of leftwing thought. We find no association between core measures of political worldviews and people's view of heritability of intelligence, although two subordinate constructs, nationalism and social dominance orientation, were associated with belief in heritability.
... Debunking may therefore only exacerbate societal polarisation and widen the public gap with scientists and the broader elites they are part of. And even when people know that certain information is untrue because of corrective debunking measures, they may often continue to endorse that information simply to express their identity and subcultural affiliations (Nyhan et al., 2019;Schaffner & Luks, 2018;Swire-Thompson et al., 2020). The ironic truth of debunking efforts may ultimately be that it is not so much the truthfulness of information that counts, but people's social distance to the producers and adjudicators of knowledge. ...
Article
Full-text available
Various societal and academic actors argue that conspiracy theories should be debunked by insisting on the truthfulness of real “facts” provided by established epistemic institutions. But are academic scholars the appropriate actors to correct people’s beliefs, and is that the right and most productive thing to do? Drawing on years of ethnographic research experiences in the Dutch conspiracy milieu, I explain in this paper why debunking conspiracy theories is not possible (can scholars actually know the real truth?), not professional (is taking sides in truth wars what we should do?), and not productive (providing more “correct” information won’t work, as knowledge acceptance is not just a cognitive/epistemic issue). Instead of reinstalling the modernist legitimation narrative of science, I argue in this paper for an alternative that is both epistemologically stronger and sociologically more effective. Building on research and experiments with epistemic democracy in the field of science and technology studies, I propose to have “deliberative citizen knowledge platforms”, instead of elite expert groups alone, assess the quality of public information. Such societally representative bodies should enjoy more legitimacy and epistemic diversity to better deal with conspiracy theories and the broader societal conflicts over truth and knowledge they represent.
Article
Transparency and observability have been shown to foster ethical decision-making as people tend to comply with an underlying norm for honesty. However, in situations implying a social norm for dishonesty, this might be different. In a die-rolling experiment, we investigate whether observability can also have detrimental effects. We thus introduce a norm nudge toward honesty or dishonesty and make participants’ decisions observable and open to the judgement of other participants in order to manipulate the observability of people’s decisions as well as the underlying social norm. We find that a nudge toward honesty indeed increases the level of honesty, suggesting that such a norm nudge can successfully induce behavioral change. Our introduction of social image concerns via observability, however, does not affect honesty and does not interact with our norm nudge.
Article
Research on the continued influence effect (CIE) of misinformation has demonstrated that misinformation continues to influence people's beliefs and judgments even after it has been corrected. Although most theorizing about the CIE attempts to explain why corrections do not eliminate belief in and influences of the misinformation, the present research takes a different approach and focuses instead on why corrections do reduce belief in misinformation (even if not entirely). We examined how a correction can change perceptions of the original source of the misinformation and how these changes in perceptions can mediate continued influence effects. We also examined causal evidence linking manipulations of misinformation source perceptions to continued belief and misinformation-relevant inferential reasoning. Study 1 demonstrated that an external correction (i.e., a new source labeling misinformation as false) influences perceptions of the misinformation source, and these perceptions of the misinformation source then correlated with belief in the misinformation. Study 2 replicated the results of Study 1 and used source derogation to manipulate misinformation source perceptions and further lessen continued belief. Study 3 was a preregistered replication of previous results using new methodology. These studies suggest that perceptions of the misinformation source are one mechanism that can cause changes in belief in misinformation, and changes in the perception of a source can be achieved simply by correcting the source or through other means. This approach can be used to find other mechanisms responsible for reducing belief in misinformation.
Article
Throughout his political career Donald Trump has utilized name-calling when referring to his opponents. These pejoratives are a ubiquitous part of political discourse in contemporary society. Scholarly research has yet to examine the effect that this type of incivility has on individuals’ evaluations of both the attacker (i.e. the person using name-calling) and the victim. Our research aims to fill this gap by testing the effect of name-calling through the implementation of a national survey experiment. We test the effect of name-calling on candidate evaluations by randomly inserting a pejorative in front of a fictitious candidate’s name in a news story. Our findings indicate that name-calling often backfires. Respondents who saw the pejorative tend to rate the attacker lower. Our findings also show an odd partisan symmetry in how respondents rate this behavior by their co-partisans, i.e. both Republicans and Democrats punish Democratic candidates that use name-calling but ignore Republicans’ use of it.
Article
How do perceptions of local immigrant populations influence immigration policy views? Building on findings that Americans may not accurately perceive population dynamics, we argue that objective measures do not fully capture the effects of local context on public opinion. Our research uses novel subjective experimental reminders about current levels of, and recent changes in, local immigrant populations to explore how these perceptions impact immigration policy views. In a survey experiment, we asked 2,400 Americans to consider current levels of or recent changes in their local immigrant population. Asking subjects to consider current levels of local immigrant populations modestly increases support for pro-immigrant policies, with particularly strong effects among non-White respondents and Republicans. These effects may be driven by positive perceptions of immigrants and have implications for understanding the role of local community frames in shaping public opinion about immigration, particularly for groups who do not typically support permissive immigration policies.
Article
Full-text available
Upon a surge of misinformation surrounding COVID-19, fact-checking has received much attention as a tool to fight the rampant misinformation. However, such correction efforts have faced challenges from partisans’ biased information processing. For example, partisans trust or distrust a fact-checking message based on whether the message benefits or harms their supporting party. To minimize such politically biased processing of corrective health information, this experimental study examined how different source labels of fact-checkers (human experts vs. AI vs. user consensus) affect partisans’ perceived credibility of fact-checking messages about COVID-19. Our findings showed that AI and user consensus (vs. human experts) source labels on fact-checking messages significantly reduced partisan-based motivated reasoning in evaluating fact-checking message credibility.
Article
Misinformed beliefs are difficult to change. Refutations that target false claims typically reduce false beliefs, but tend to be only partially effective. In this study, a social norming approach was explored to test whether providing peer norms could offer an alternative or complementary approach to refutation. Three experiments investigated whether a descriptive norm, by itself or in combination with a refutation, could reduce the endorsement of worldview-congruent claims. Experiment 1 found that using a single point estimate to communicate a norm affected belief but had less impact than a refutation. Experiment 2 used a verbally presented distribution of four values to communicate a norm, which was largely ineffective. Experiment 3 used a graphically presented social norm with 25 values, which was found to be as effective at reducing claim belief as a refutation, with the combination of both interventions being most impactful. These results provide a proof of concept that normative information can aid in the debunking of false or equivocal claims, and suggest that theories of misinformation processing should take social factors into account.
Article
Commentators say we have entered a “post-truth” era. As political lies and “fake news” flourish, citizens appear not only to believe misinformation, but also to condone misinformation they do not believe. The present article reviews recent research on three psychological factors that encourage people to condone misinformation: partisanship, imagination, and repetition. Each factor relates to a hallmark of “post-truth” society: political polarization, leaders who push “alternative facts,” and technology that amplifies disinformation. By lowering moral standards, convincing people that a lie’s “gist” is true, or dulling affective reactions, these factors not only reduce moral condemnation of misinformation, but can also amplify partisan disagreement. We discuss implications for reducing the spread of misinformation.
Article
This paper presents a between-subjects design experiment with 478 people in India to investigate how rural and urban social media users perceive credible and fake posts, and how different types of sources impact their perceptions of information credibility and sharing behaviors. Our findings reveal that: (1) rural social media users were less adept at differentiating between credible and fake posts than their urban counterparts, and (2) source effects on trust and sharing intent manifested differently for urban and rural users. For example, fake posts from family members garnered greater trust among urban users but were trusted the least by rural users. In the case of sharing Facebook posts, urban users were more willing to share fake posts from family, whereas rural users were more inclined to share fake posts from journalists. Drawing on these findings, we propose design interventions to counteract fake news in low-resource environments of the Global South.
Chapter
Ivor Gaber and Caroline Fisher explore attempts by the Conservatives’ public relations operation, their ‘spin’ machine, to influence the campaign agenda. The piece focuses on the work of Boris Johnson’s influential adviser Dominic Cummings and others to shape the electoral narrative through use of what the authors call ‘strategic lying’. Such interventions are placed in context, along with reflections on the extent to which deception has come to define contemporary politics and how, in particular, this practice informed the 2019 campaign.
Article
Full-text available
The continued influence effect (CIE) refers to the continued influence of misinformation on an individual's inferences even after the misinformation has been retracted. The aim of this article is to present this phenomenon, compare it to the misinformation effect (PME) known in the Polish literature, and indicate the practical implications of research on it. A literature review was carried out, presenting the 30-year history of research on the continued influence effect and discussing its mechanisms and determinants, as well as ways to reduce it. An attempt was also made to organize the nomenclature related to misinformation, focusing on the native meaning of this word. As eyewitness testimonies are prone to distortion and often retracted or corrected during an investigation, various judicial authorities may be exposed to the effects of the CIE, such as improper decisions and unfair sentences. It is therefore proposed to pay attention to research on the CIE and its forensic context in order to find ways to minimize the effects of this phenomenon.
Article
The viral spread of misinformation poses a threat to societies around the world. Recently, researchers have started to study how motivated reasoning about news content influences misinformation susceptibility. However, because the importance of source credibility in the persuasion process is well-documented, and given that source similarity contributes to credibility evaluations, this raises the question of whether individuals are more susceptible to misinformation from ideologically congruent news sources because they find them to be more credible. In a large between-subject pilot (N = 656) and a pre-registered online mixed-subject experiment with a US sample (N = 150) using simulated social media posts, we find clear evidence that both liberals and conservatives judge misinformation to be more accurate when the source is politically congruent, and that this effect is mediated by perceived source credibility. We show that source effects play a greater role in veracity judgements for liberals than conservatives, but that individuals from both sides of the spectrum judge politically congruent sources as less slanted and more credible. These findings add to our current understanding of source effects in online news environments and provide evidence for the influential effect of perceived source similarity and perceived credibility in misinformation susceptibility.
Article
Full-text available
Objective During COVID-19, access to trustworthy news and information is vital to help people understand the crisis. The consumption of COVID-19-related information is likely an important factor associated with the increased anxiety and psychological distress that has been observed. We aimed to understand how people living with a kidney condition access information about COVID-19 and how this impacts their anxiety, stress, and depression. Methods Participants living with chronic kidney disease (CKD) were recruited from 12 sites across England, UK. Respondents were asked to review how often they accessed and trusted 11 sources of potential COVID-19 information. The Depression, Anxiety and Stress Scale-21 Items was used to measure depression, anxiety, and stress. The 14-item Short Health Anxiety Inventory measured health anxiety. Results 236 participants were included (age 62.8 (11.3) years, male (56%), transplant recipients (51%), non-dialysis (49%)). The most frequently accessed source of health information was television/radio news, followed by official government press releases, and medical institution press releases. The most trusted source was consultation with healthcare staff. Higher anxiety, stress, and depression were associated with less access to and trust in official government press releases. Education status had a large influence on information trust and access. Conclusions Traditional forms of media remain a popular source of health information for those living with kidney conditions. Interactions with healthcare professionals were the most trusted source of health information. Our results provide evidence for problematic associations of COVID-19-related information exposure with psychological strain and could serve as an orientation for recommendations.
Article
Research has consistently shown that misinformation can continue to affect inferential reasoning after a correction. This phenomenon is known as the continued influence effect (CIE). Recent studies have demonstrated that CIE susceptibility can be predicted by individual differences in stable cognitive abilities. Based on this, it was reasoned that CIE susceptibility ought to have some degree of stability itself; however, this has never been tested. The current study aimed to investigate the temporal stability of retraction sensitivity, arguably a major determinant of CIE susceptibility. Participants were given parallel forms of a standard CIE task four weeks apart, and the association between testing points was assessed with an intra-class correlation coefficient and confirmatory factor analysis. Results suggested that retraction sensitivity is relatively stable and can be predicted as an individual-differences variable. These results encourage continued individual-differences research on the CIE and have implications for real-world CIE intervention.
Article
Providing corrective information can reduce factual misperceptions among the public but it tends to have little effect on people’s underlying attitudes. Our study examines how the impact of misinformation corrections is moderated by media choice. In our experiment, participants are asked to read a news article published by Fox News or MSNBC, each highlighting the positive economic impact of legal immigration in the United States. While the news content is held constant, our treatment manipulates whether participants are allowed to freely choose a media outlet or are randomly assigned. Our results demonstrate the importance of people’s ability to choose: While factual misperceptions are easily corrected regardless of how people gained access to information, subsequent opinion change is conditional on people’s prior willingness to seek out alternative sources. As such, encouraging people to broaden their media diet may be more effective to combat misinformation than disseminating fact-checks alone.
Article
Student achievement is shaped by ideological and political sentiment within the school. Major risk factors include a lack of team cohesion in ideological and political schools, the expansion of the environmental network, and the promotion of concurrent quality assurance. In this paper, the Heuristic Political Effective Evaluation Reinforcement Learning Algorithm (HPEERLA) is proposed to enhance group coordination, environmental network extension, and dual-process improvement in ideological and political education. Systematic optimization analysis is integrated with HPEERLA to increase university productivity and improve the description of practice, evaluation, and academic reporting in ideological and political education. The newly proposed model combines learning, association, identity, self-adaptation, and data processing, thus addressing their respective limitations. Simulation analysis based on sensitivity, performance, and efficiency demonstrates the reliability of the proposed framework. In the experimental analysis of HPEERLA, the ratio of student participation in democracy is 85.36%, students' social work activities 82.74%, students' contribution to political media 88.25%, political engagement 89.45%, and student efficiency in political evaluation 94.25%.
Article
False and misleading information is readily accessible in people's environments, oftentimes reaching people repeatedly. This repeated exposure can significantly affect people's beliefs about the world, as has been noted by scholars in political science, communication, and cognitive, developmental, and social psychology. In particular, repetition increases belief in false information, even when the misinformation contradicts prior knowledge. We review work across these disciplines, identifying factors that may heighten, diminish, or have no impact on these adverse effects of repetition on belief. Specifically, we organize our discussion around variations in what information is repeated, to whom the information is repeated, how people interact with this repetition, and how people's beliefs are measured. A key cross‐disciplinary theme is that the most influential factor is how carefully or critically people process the false information. However, several open questions remain when comparing findings across different fields and approaches. We conclude by noting a need for more interdisciplinary work to help resolve these questions, as well as a need for more work in naturalistic settings so that we can better understand and combat the effects of repeated circulation of false and misleading information in society. This article is categorized under: Psychology > Memory; Psychology > Reasoning and Decision Making
Article
Full-text available
Advancing theorizations of communication in post-truth politics, where computational/big data or cognitive bias approaches often dominate the description of and proposed solutions to the problem, this article aims to theorize the cultural production of social trust, which underpins public truth-making. It argues that performing mediated trust is preconditional to public truth-making (oft-overlooked in post-truth accounts). Advocating that a more detailed theory of post-truth political performances requires amalgamating intra- and interdisciplinary resources and broadening perspectives, it unites insights from social trust theory, reality television (RTV) studies, gender studies, and political communication. It identifies and critiques an aggressive emotional and a palpably toxic (especially white) masculinist logic in a popular strand of post-truth political performance. This conjuncturally specific, traditionally aggressive masculinist post-truth political communication is best understood as a transposable style, set of practices, and disposition toward them – a cultural logic called “aggro-truth.” Aggro-truth thus moves beyond the general concept and label of post-truth by a. showing that it has a particular, widely circulating, sub-form with its own particular cultural logic for operationalizing mediated trust in post-truth tellers (such as Donald Trump); and b. demonstrating how that logic works by focusing on Trump, while noting broad evidence of transnational variations for further research.
Chapter
Leadership introduces distinctive risks of ethical failure. These risks are often associated with the heightened responsibilities of leadership and the necessary inequality that leading a group often involves. But people who are prone to ethical failure are also prone to self-selection into leadership positions. In order to understand and prevent ethical failure in leadership, it is not enough to take steps to address and prevent ethical failure among existing leaders. Rather, avoiding ethical failure may also require rethinking the ways that leaders are selected.
Article
Full-text available
Are citizens willing to accept journalistic fact-checks of misleading claims from candidates they support and to update their attitudes about those candidates? Previous studies have reached conflicting conclusions about the effects of exposure to counter-attitudinal information. As fact-checking has become more prominent, it is therefore worth examining how people respond to fact-checks of politicians—a question with important implications for understanding the effects of this journalistic format on elections. We present results from two experiments conducted during the 2016 campaign that test the effects of exposure to realistic journalistic fact-checks of claims made by Donald Trump during his convention speech and a general election debate. These messages improved the accuracy of respondents’ factual beliefs, even among his supporters, but had no measurable effect on attitudes toward Trump. These results suggest that journalistic fact-checks can reduce misperceptions but often have minimal effects on candidate evaluations or vote choice.
Article
Full-text available
In the 'post-truth era', political fact-checking has become an issue of considerable significance. A recent study in the context of the 2016 US election found that fact-checks of statements by Donald Trump changed participants' beliefs about those statements, regardless of whether participants supported Trump, but not their feelings towards Trump or voting intentions. However, the study balanced corrections of inaccurate statements with an equal number of affirmations of accurate statements. Therefore, the null effect of fact-checks on participants' voting intentions and feelings may have arisen because of this artificially created balance. Moreover, Trump's statements were not contrasted with statements from an opposing politician, and Trump's perceived veracity was not measured. The present study (N = 370) examined the issue further, manipulating the ratio of corrections to affirmations, and using Australian politicians (and Australian participants) from both sides of the political spectrum. We hypothesized that fact-checks would correct beliefs and that fact-checks would affect voters' support (i.e. voting intentions, feelings and perceptions of veracity), but only when corrections outnumbered affirmations. Both hypotheses were supported, suggesting that a politician's veracity does sometimes matter to voters. The effects of fact-checking were similar on both sides of the political spectrum, suggesting little motivated reasoning in the processing of fact-checks.
Article
Full-text available
This article develops a model to explain contradictory findings on the effects of repeated exposure to persuasive communication. The model’s starting point is the repetition frequency of a given stimulus. This determines, in interaction with variables relating to the stimulus, the recipients, and the context, whether the repeated stimulus is perceived consciously or unconsciously. If the recipient perceives the stimulus consciously, this can lead to habituation to the stimulus and to peripheral processing; alternatively, the recipient can become sensitized to the stimulus, which can result in central processing and perceiving the stimulus as a persuasive attempt. Moreover, mere-exposure effects can also have an influence on the perception of the stimulus, independent of the other paths described.
Article
Full-text available
This study investigated the cognitive processing of true and false political information. Specifically, it examined the impact of source credibility on the assessment of veracity when information comes from a polarizing source (Experiment 1), and effectiveness of explanations when they come from one’s own political party or an opposition party (Experiment 2). Participants rated their belief in factual and incorrect statements that Donald Trump made on the campaign trail; facts were subsequently affirmed and misinformation retracted. Participants then re-rated their belief immediately or after a delay. Experiment 1 found that (1) if information was attributed to Trump, Republican supporters of Trump believed it more than if it was presented without attribution, whereas the opposite was true for Democrats; and (2) although Trump supporters reduced their belief in misinformation items following a correction, they did not change their voting preferences. Experiment 2 revealed that the explanation’s source had relatively little impact, and belief updating was more influenced by perceived credibility of the individual initially purporting the information. These findings suggest that people use political figures as a heuristic to guide evaluation of what is true or false, yet do not necessarily insist on veracity as a prerequisite for supporting political candidates.
Article
Full-text available
Decision scientists have identified various plausible sources of ideological polarization over climate change, gun violence, national security, and like issues that turn on empirical evidence. This paper describes a study of three of them: the predominance of heuristic-driven information processing by members of the public; ideologically motivated reasoning; and the cognitive-style correlates of political conservatism. The study generated both observational and experimental data inconsistent with the hypothesis that political conservatism is distinctively associated with either un-reflective thinking or motivated reasoning. Conservatives did no better or worse than liberals on the Cognitive Reflection Test (Frederick, 2005), an objective measure of information-processing dispositions associated with cognitive biases. In addition, the study found that ideologically motivated reasoning is not a consequence of over-reliance on heuristic or intuitive forms of reasoning generally. On the contrary, subjects who scored highest in cognitive reflection were the most likely to display ideologically motivated cognition. These findings corroborated an alternative hypothesis, which identifies ideologically motivated cognition as a form of information processing that promotes individuals' interests in forming and maintaining beliefs that signify their loyalty to important affinity groups. The paper discusses the practical significance of these findings, including the need to develop science communication strategies that shield policy-relevant facts from the influences that turn them into divisive symbols of political identity.
Article
Full-text available
This study used a revised Conversational Violations Test to examine Gricean maxim violations in 4-to 6-year-old Japanese children and adults. Participants' understanding of the following maxims was assessed: be informative (first maxim of quantity), avoid redundancy (second maxim of quantity), be truthful (maxim of quality), be relevant (maxim of relation), avoid ambiguity (second maxim of manner), and be polite (maxim of politeness). Sensitivity to violations of Gricean maxims increased with age: 4-year-olds' understanding of maxims was near chance, 5-year-olds understood some maxims (first maxim of quantity and maxims of quality, relation, and manner), and 6-year-olds and adults understood all maxims. Preschoolers acquired the maxim of relation first and had the greatest difficulty understanding the second maxim of quantity. Children and adults differed in their comprehension of the maxim of politeness. The development of the pragmatic understanding of Gricean maxims and implications for the construction of developmental tasks from early childhood to adulthood are discussed.
Article
Full-text available
The widespread prevalence and persistence of misinformation in contemporary societies, such as the false belief that there is a link between childhood vaccinations and autism, is a matter of public concern. For example, the myths surrounding vaccinations, which prompted some parents to withhold immunization from their children, have led to a marked increase in vaccine-preventable disease, as well as unnecessary public expenditure on research and public-information campaigns aimed at rectifying the situation. We first examine the mechanisms by which such misinformation is disseminated in society, both inadvertently and purposely. Misinformation can originate from rumors but also from works of fiction, governments and politicians, and vested interests. Moreover, changes in the media landscape, including the arrival of the Internet, have fundamentally influenced the ways in which information is communicated and misinformation is spread. We next move to misinformation at the level of the individual, and review the cognitive factors that often render misinformation resistant to correction. We consider how people assess the truth of statements and what makes people believe certain things but not others. We look at people’s memory for misinformation and answer the questions of why retractions of misinformation are so ineffective in memory updating and why efforts to retract misinformation can even backfire and, ironically, increase misbelief. Though ideology and personal worldviews can be major obstacles for debiasing, there nonetheless are a number of effective techniques for reducing the impact of misinformation, and we pay special attention to these factors that aid in debiasing. We conclude by providing specific recommendations for the debunking of misinformation. These recommendations pertain to the ways in which corrections should be designed, structured, and applied in order to maximize their impact. Grounded in cognitive psychological theory, these recommendations may help practitioners—including journalists, health professionals, educators, and science communicators—design effective misinformation retractions, educational tools, and public-information campaigns.
Article
Full-text available
Several lines of research have found that information previously encoded into memory can influence inferences and judgments, even when more recent information discredits it. Previous theories have attributed this to difficulties in editing memory: failing to successfully trace out and alter inferences or explanations generated before a correction. However, in Experiments 1A and 1B, subjects who had received an immediate correction made as many inferences based on misinformation as subjects who had received the correction later in the account (despite presumably having made more inferences requiring editing). In a second experiment, the availability of the misinformation within the comprehension context was tested. Results showed that subjects continued to make inferences involving discredited information when it afforded causal structure, but not when it was only incidentally mentioned or primed during an intervening task. Experiments 3A and 3B found that providing a plausible causal alternative, rather than simply negating misinformation, mitigated the effect. The findings suggest that misinformation can still influence inferences one generates after a correction has occurred; however, providing an alternative that replaces the causal structure it affords can reduce the effects of misinformation.
Article
Full-text available
A recent innovation in televised election debates is a continuous response measure (commonly referred to as the "worm") that allows viewers to track the response of a sample of undecided voters in real-time. A potential danger of presenting such data is that it may prevent people from making independent evaluations. We report an experiment with 150 participants in which we manipulated the worm and superimposed it on a live broadcast of a UK election debate. The majority of viewers were unaware that the worm had been manipulated, and yet we were able to influence their perception of who won the debate, their choice of preferred prime minister, and their voting intentions. We argue that there is an urgent need to reconsider the simultaneous broadcast of average response data with televised election debates.
Article
Full-text available
Voter turnout increased sharply in 2004. At the same time, 2004 marked a change in campaign strategy, as both presidential campaigns and allied organizations placed unprecedented emphasis on voter mobilization. This article attempts to assess the degree to which grassroots mobilization efforts contributed to the surge in voter turnout. We conclude that although grassroots efforts generated millions of additional votes, they probably account for less than one-third of the observed increase in turnout. Increased turnout in 2004 primarily reflects the importance that voters accorded the presidential contest.
Article
Full-text available
Balancing the pros and cons of two options is undoubtedly a very appealing decision procedure, but one that has received scarce scientific attention so far, either formally or empirically. We describe a formal framework for pros and cons decisions, where the arguments under consideration can be of varying importance, but whose importance cannot be precisely quantified. We then define eight heuristics for balancing these pros and cons, and compare the predictions of these to the choices made by 62 human participants on a selection of 33 situations. The Levelwise Tallying heuristic clearly emerges as a winner in this competition. Further refinements of this heuristic are considered in the discussion, as well as its relation to Take the Best and Cumulative Prospect Theory.
Article
Misinformation often continues to influence people’s memory and inferential reasoning after it has been retracted; this is known as the continued influence effect (CIE). Previous research investigating the role of attitude‐based motivated reasoning in this context has found conflicting results: Some studies have found that worldview can have a strong impact on the magnitude of the CIE, such that retractions are less effective if the misinformation is congruent with a person’s relevant attitudes, in which case the retractions can even backfire. Other studies have failed to find evidence for an effect of attitudes on the processing of misinformation corrections. The present study used political misinformation—specifically fictional scenarios involving misconduct by politicians from left‐wing and right‐wing parties—and tested participants identifying with those political parties. Results showed that in this type of scenario, partisan attitudes have an impact on the processing of retractions, in particular (1) if the misinformation relates to a general assertion rather than just a specific singular event and (2) if the misinformation is congruent with a conservative partisanship.
Article
We develop and test a theory to address a puzzling pattern that has been discussed widely since the 2016 U.S. presidential election and reproduced here in a post-election survey: how can a constituency of voters find a candidate “authentically appealing” (i.e., view him positively as authentic) even though he is a “lying demagogue” (someone who deliberately tells lies and appeals to non-normative private prejudices)? Key to the theory are two points: (1) “common-knowledge” lies may be understood as flagrant violations of the norm of truth-telling; and (2) when a political system is suffering from a “crisis of legitimacy” (Lipset 1959) with respect to at least one political constituency, members of that constituency will be motivated to see a flagrant violator of established norms as an authentic champion of its interests. Two online vignette experiments on a simulated college election support our theory. These results demonstrate that mere partisanship is insufficient to explain sharp differences in how lying demagoguery is perceived, and that several oft-discussed factors—information access, culture, language, and gender—are not necessary for explaining such differences. Rather, for the lying demagogue to have authentic appeal, it is sufficient that one side of a social divide regards the political system as flawed or illegitimate.
Article
What makes people deny wrongdoing that their group has inflicted on others? Prior research argues that refusing to acknowledge past misbehavior contributes to intergroup conflict, making historical misinformation important to understand and address. In particular, feeling a lack of control may make people more vulnerable to these misperceptions—a claim we test in a preregistered survey experiment examining beliefs about the Palestinian exodus during the creation of the state of Israel. Consistent with expectations, Jewish Israelis who were asked to recall an event in which they lacked control were more vulnerable to arguments (incorrectly) denying any Jewish responsibility for the exodus. By contrast, corrective information successfully reduced misperceptions regardless of feelings of control. However, corrections had no effect on attitudes toward the outgroup or support for the peace process, which suggests that historical misperceptions may be more of a symptom of intergroup conflict than a cause of its persistence.
Article
We tested whether conservatives and liberals are similarly or differentially likely to deny scientific claims that conflict with their preferred conclusions. Participants were randomly assigned to read about a study with correct results that were either consistent or inconsistent with their attitude about one of several issues (e.g., carbon emissions). Participants were asked to interpret numerical results and decide what the study concluded. After being informed of the correct interpretation, participants rated how much they agreed with, found knowledgeable, and trusted the researchers’ correct interpretation. Both liberals and conservatives engaged in motivated interpretation of study results and denied the correct interpretation of those results when that interpretation conflicted with their attitudes. Our study suggests that the same motivational processes underlie differences in the political priorities of those on the left and the right.
Article
Why does public conflict over societal risks persist in the face of compelling and widely accessible scientific evidence? We conducted an experiment to probe two alternative answers: the ‘science comprehension thesis’ (SCT), which identifies defects in the public's knowledge and reasoning capacities as the source of such controversies; and the ‘identity-protective cognition thesis’ (ICT), which treats cultural conflict as disabling the faculties that members of the public use to make sense of decision-relevant science. In our experiment, we presented subjects with a difficult problem that turned on their ability to draw valid causal inferences from empirical data. As expected, subjects highest in numeracy – a measure of the ability and disposition to make use of quantitative information – did substantially better than less numerate ones when the data were presented as results from a study of a new skin rash treatment. Also as expected, subjects’ responses became politically polarized – and even less accurate – when the same data were presented as results from the study of a gun control ban. But contrary to the prediction of SCT, such polarization did not abate among subjects highest in numeracy; instead, it increased. This outcome supported ICT, which predicted that more numerate subjects would use their quantitative-reasoning capacity selectively to conform their interpretation of the data to the result most consistent with their political outlooks. We discuss the theoretical and practical significance of these findings.
Article
We fielded an experiment in the 2012 Cooperative Congressional Election Study testing the theory that motivated reasoning governs reactions to news about misdeeds on the campaign trail. Treated subjects either encountered a fabricated news story involving phone calls with deceptive information about polling times or one involving disappearing yard signs (the offending party was varied at random). Control subjects received no treatment. We then inquired about how the treated subjects felt about dirty tricks in political campaigns and about all subjects’ trust in government. We find that partisans process information about dirty campaign tricks in a motivated way, expressing exceptional concern when the perpetrators are political opponents. However, there is almost no evidence that partisans’ evaluations of dirty political tricks in turn color other political attitudes, such as political trust.
Article
Individuals are not merely passive vessels of whatever beliefs and opinions they have been exposed to; rather, they are attracted to belief systems that resonate with their own psychological needs and interests, including epistemic, existential, and relational needs to attain certainty, security, and social belongingness. Jost, Glaser, Kruglanski, and Sulloway (2003) demonstrated that needs to manage uncertainty and threat were associated with core values of political conservatism, namely respect for tradition and acceptance of inequality. Since 2003 there have been far more studies on the psychology of left-right ideology than in the preceding half century, and their empirical yield helps to address lingering questions and criticisms. We have identified 181 studies of epistemic motivation (involving 130,000 individual participants) and nearly 100 studies of existential motivation (involving 360,000 participants). These databases, which are much larger and more heterogeneous than those used in previous meta-analyses, confirm that significant ideological asymmetries exist with respect to dogmatism, cognitive/perceptual rigidity, personal needs for order/structure/closure, integrative complexity, tolerance of ambiguity/uncertainty, need for cognition, cognitive reflection, self-deception, and subjective perceptions of threat. Exposure to objectively threatening circumstances—such as terrorist attacks, governmental warnings, and shifts in racial demography—contribute to modest “conservative shifts” in public opinion. There are also ideological asymmetries in relational motivation, including the desire to share reality, perceptions of within-group consensus, collective self-efficacy, homogeneity of social networks, and the tendency to trust the government more when one's own political party is in power. 
Although some object to the very notion that there are meaningful psychological differences between leftists and rightists, the identification of “elective affinities” between cognitive-motivational processes and contents of specific belief systems is essential to the study of political psychology. Political psychologists may contribute to the development of a good society not by downplaying ideological differences or advocating “Swiss-style neutrality” when it comes to human values, but by investigating such phenomena critically, even—or perhaps especially—when there is pressure in society to view them uncritically.
Article
Some scientifically well-established results—such as the fact that emission of greenhouse gases produces global warming—are rejected by sizable proportions of the population in the United States and other countries. Rejection of scientific findings is mostly driven by motivated cognition: People tend to reject findings that threaten their core beliefs or worldview. At present, rejection of scientific findings by the U.S. public is more prevalent on the political right than the left. Yet the cognitive mechanisms driving rejection of science, such as the superficial processing of evidence toward the desired interpretation, are found regardless of political orientation. General education and scientific literacy do not mitigate rejection of science but, rather, increase the polarization of opinions along partisan lines. In contrast, specific knowledge about the mechanisms underlying a scientific result—such as human-made climate change—can increase the acceptance of that result.
Article
The omnipresence of political misinformation in today's media environment raises serious concerns about citizens' ability to make fully informed decisions. In response to these concerns, the last few years have seen a renewed commitment to journalistic and institutional fact-checking. The assumption of these efforts is that successfully correcting misinformation will prevent it from affecting citizens' attitudes. However, through a series of experiments, I find that exposure to a piece of negative political information persists in shaping attitudes even after the information has been successfully discredited. A correction--even when it is fully believed--does not eliminate the effects of misinformation on attitudes. These lingering attitudinal effects, which I call "belief echoes," are created even when the misinformation is corrected immediately, arguably the gold standard of journalistic fact-checking. Belief echoes can be affective or cognitive. Affective belief echoes are created through a largely unconscious process in which a piece of negative information has a stronger impact on evaluations than does its correction. Cognitive belief echoes, on the other hand, are created through a conscious cognitive process during which a person recognizes that a particular negative claim about a candidate is false, but reasons that its presence increases the likelihood of other negative information being true. Experimental results suggest that while affective belief echoes are created across party lines, cognitive belief echoes are more likely when a piece of misinformation reinforces a person's pre-existing political views. The existence of belief echoes provides an enormous incentive for politicians to strategically spread false information with the goal of shaping public opinion on key issues.
However, results from two more experiments show that politicians also suffer consequences for making false claims, an encouraging finding that has the potential to constrain the behavior of politicians presented with the opportunity to strategically create belief echoes. While the existence of belief echoes may provide a disincentive for the media to engage in serious fact-checking, evidence suggests that such efforts can also have positive consequences by increasing citizens' trust in media.
Article
Across three separate experiments, I find that exposure to negative political information continues to shape attitudes even after the information has been effectively discredited. I call these effects “belief echoes.” Results suggest that belief echoes can be created through an automatic or deliberative process. Belief echoes occur even when the misinformation is corrected immediately, the “gold standard” of journalistic fact-checking. The existence of belief echoes raises ethical concerns about journalists’ and fact-checking organizations’ efforts to publicly correct false claims.
Article
Does external monitoring improve democratic performance? Fact-checking has come to play an increasingly important role in political coverage in the United States, but some research suggests it may be ineffective at reducing public misperceptions about controversial issues. However, fact-checking might instead help improve political discourse by increasing the reputational costs or risks of spreading misinformation for political elites. To evaluate this deterrent hypothesis, we conducted a field experiment on a diverse group of state legislators from nine U.S. states in the months before the November 2012 election. In the experiment, a randomly assigned subset of state legislators was sent a series of letters about the risks to their reputation and electoral security if they were caught making questionable statements. The legislators who were sent these letters were substantially less likely to receive a negative fact-checking rating or to have their accuracy questioned publicly, suggesting that fact-checking can reduce inaccuracy when it poses a salient threat.
Article
Since Watergate, more than two hundred fifty members of the House of Representatives have been involved in various scandals. The author finds that roughly 40 percent of incumbents did not “survive” their scandal. Incumbents who stood for reelection lost 5 percent of the general election vote share, on average, but the electoral repercussions vary across types of scandals and could be magnified in the presence of a quality challenger. A scandal-tainted incumbent defending his or her seat does not necessarily fare better than an untainted open-seat candidate, a finding that provides a justification for stronger ethics rules.
Article
Correlational studies have found candidate traits to be an important determinant of vote preferences but cannot rule out reverse causality processes in explaining these findings. The present study demonstrates the independent impact of trait inferences on candidate evaluations using experimentally controlled candidate profiles of hypothetical U.S. congressmen. Using the scandal situation as a testing ground, this experiment examines whether task-relevant, competence traits actually have greater impact on political judgments than the more general, warmth-related trait qualities. Two types of scandals are considered (marital infidelity and tax evasion), both implying negative trustworthiness characteristics of the officeholder. Results demonstrate that trait inferences do have a causal impact on global evaluations. Consistent with past survey studies, competence qualities appear to be more important than warmth qualities but only for those with greater political information levels.
Article
Recent research on the self-validation hypothesis suggests that source credibility identified after message processing can influence the confidence people have in their own thoughts generated in response to persuasive messages (Briñol, Petty, & Tormala, 2004). The present research explored the implications of this effect for the possibility that high credibility sources can be associated with more or less persuasion than low credibility sources. In two experiments, it is demonstrated that when people generate primarily positive thoughts in response to a message (e.g., because the message contains strong arguments) and then learn of the source, high source credibility leads to more favorable attitudes than does low source credibility. When people have primarily negative thoughts in response to a message (e.g., because it contains weak arguments), however, this effect is reversed—that is, high source credibility leads to less favorable attitudes than does low source credibility.
Many Americans believe fake news is sowing confusion
  • Barthel M.
Barthel, M., Mitchell, A., & Holcomb, J. (2016). Many Americans believe fake news is sowing confusion. Pew Research Center, 15, 12.
Honesty/ethics in professions
  • Gallup
Gallup. (2017). Honesty/ethics in professions. Retrieved from https://news.gallup.com/poll/1654/honesty-ethics-professions.aspx
Democratic design and democratic reform: The case of Australia
  • Reilly B.
Reilly, B. (2016). Democratic design and democratic reform: The case of Australia. Taiwan Journal of Democracy, 12(2), 1-16.
How to check if your Facebook data was used by Cambridge Analytica
  • B Chappell
Chappell, B. (2018). How to check if your Facebook data was used by Cambridge Analytica. NPR. Retrieved from https://www.npr.org/sections/thetwo-way/2018/04/10/601163176/
Bernie Sanders' file
  • Politifact
Politifact. (2018a). Bernie Sanders' file. Retrieved from http://www.politifact.com/personalities/bernie-s/
Examining trolls and polarization with a retweet network. International Conference on Web Search and Data Mining: Workshop on Misinformation and Misbehavior Mining on the Web
  • L. G. Stewart
  • A. Arif
  • K. Starbird
Syntax and semantics: Speech acts
  • H. P. Grice
Fast and frugal heuristics: The tools of bounded rationality. Blackwell Handbook of Judgment and Decision Making
  • G Gigerenzer
Gigerenzer, G. (2004). Fast and frugal heuristics: The tools of bounded rationality. In Blackwell Handbook of Judgment and Decision Making (pp. 62-88).
New studies on political fact-checking: Growing, influential; but less popular among GOP readers
  • L Graves
  • B Nyhan
  • J Reifler
Graves, L., Nyhan, B., & Reifler, J. (2015). New studies on political fact-checking: Growing, influential; but less popular among GOP readers. American Press Institute. Retrieved from https://www.americanpressinstitute.org/fact-checking-project/new-research-on-political-fact-checking-growing-and-influential-but-partisanship-is-a-factor/
Government at a glance 2017
  • OECD
OECD. (2017). Government at a glance 2017. OECD. Retrieved from http://www.oecd.org/gov/government-at-a-glance-22214399.htm