What can be done to combat political misinformation? One prominent intervention involves attaching warnings to headlines of news stories that have been disputed by third-party fact-checkers. Here we demonstrate a hitherto unappreciated potential consequence of such a warning: an implied truth effect, whereby false headlines that fail to get tagged are considered validated and thus are seen as more accurate. With a formal model, we demonstrate that Bayesian belief updating can lead to such an implied truth effect. In Study 1 (n = 5,271 MTurkers), we find that although warnings do lead to a modest reduction in the perceived accuracy of false headlines relative to a control condition (particularly for politically concordant headlines), we also observe the hypothesized implied truth effect: the presence of warnings caused untagged headlines to be seen as more accurate than in the control. In Study 2 (n = 1,568 MTurkers), we find the same effects in the context of decisions about which headlines to consider sharing on social media. We also find that attaching verifications to some true headlines—which removes the ambiguity about whether untagged headlines have not been checked or have been verified—eliminates, and in fact slightly reverses, the implied truth effect. Together, these results contest theories of motivated reasoning while identifying a potential challenge for the policy of using warning tags to fight misinformation—a challenge that is particularly concerning given that it is much easier to produce misinformation than it is to debunk it.
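The Bayesian logic behind the implied truth effect can be illustrated with a minimal sketch (the parameter values and function below are illustrative assumptions, not the paper's actual model): if a reader believes fact-checkers tag some fraction of false headlines but never tag true ones, then the absence of a tag becomes evidence of truth, so untagged headlines look more accurate under a warning policy than under no policy at all.

```python
def p_true_given_untagged(prior_true, tag_rate_false, tag_rate_true=0.0):
    """Posterior probability that a headline is true given it has no warning tag.

    prior_true:     reader's prior that a random headline is true
    tag_rate_false: perceived probability that a false headline gets tagged
    tag_rate_true:  perceived probability that a true headline gets tagged
    """
    p_untagged_and_true = (1.0 - tag_rate_true) * prior_true
    p_untagged_and_false = (1.0 - tag_rate_false) * (1.0 - prior_true)
    return p_untagged_and_true / (p_untagged_and_true + p_untagged_and_false)

# Control: no tagging policy exists, so an untagged headline carries no signal.
control = p_true_given_untagged(prior_true=0.5, tag_rate_false=0.0)

# Warning condition: the reader assumes 60% of false headlines get tagged.
# Untagged headlines now look more accurate -- the implied truth effect.
warning = p_true_given_untagged(prior_true=0.5, tag_rate_false=0.6)

# Verification condition (cf. Study 2): if true headlines are also tagged
# (here, at the same assumed rate), "untagged" is no longer informative
# and the implied truth effect disappears.
verified = p_true_given_untagged(prior_true=0.5, tag_rate_false=0.6,
                                 tag_rate_true=0.6)

print(control, warning, verified)
```

With a flat prior of 0.5 and a 60% perceived tag rate, the posterior for an untagged headline rises from 0.5 to about 0.71 under the warning policy and falls back to 0.5 once verifications remove the ambiguity, mirroring the pattern reported in the studies.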
This paper was accepted by Elke Weber, judgment and decision making.
All content in this area was uploaded by Gordon Pennycook on Feb 10, 2020
... However, by analyzing participants' ability to identify fake news headlines and their propensity to engage in analytical reasoning, Pennycook and Rand (2017) found consistent evidence that analytic thinking plays a major role in how people judge the accuracy of fake news: those who are more inclined to think analytically are less likely to perceive fake news as true. This relationship held regardless of political ideology and was robust to controls for age, gender, and education. ...
... In a recent study, Pennycook (2017) found that even a single exposure to fake news headlines increases subsequent perceptions of their accuracy; that is, the illusory truth effect also applies in the misinformation context. For one specific headline, a single exposure increased the perceived accuracy of the fake news article from 18.5% to 35.5%. ...
... A first possibility, based on social and cognitive psychology, suggests an overall effect of aging on memory. On this account, memory deteriorates with age, rendering older individuals more susceptible to the "illusory truth" effect (Pennycook, 2017) and to related effects of the availability heuristic and belief persistence; for instance, making it harder for an individual to correctly recall news sources. ...
This study investigates the spread of fake news during the 2018 presidential elections in Brazil and how distinct social media platforms and websites are used as distribution channels and sources of disinformation. To that end, a pre-existing data set of 346 fake news stories collected during the elections served as a starting point. Initially, through a reverse search process, the main websites responsible for disseminating disinformation were mapped. These sources were then analysed in terms of traffic and partisanship.
Beyond a prevalence of right-wing fake news sources, a high concentration of web traffic was found: five websites were responsible for almost 80% of all pageviews (or impressions) across the 58 identified fake news sources. Furthermore, in order to investigate the circulation of disinformation on Facebook, Twitter, and WhatsApp, the data set was filtered down to the 58 most relevant unique fake news stories, which were then classified by political bias and engagement (number of shares) and segregated into four narratives.
Firstly, all the analysed social media platforms served as relevant distribution channels for fake news, since 32 of the 58 fake news stories circulated on all of them. Facebook, however, was found to be more relevant than Twitter for that purpose.
Secondly, the four major narratives that shaped the fake news stories were mostly related to intense polarization and declining trust in public institutions and media outlets. Among these, anti-left/anti-worker fake news predominated.
As in the first analysis, partisanship was noticeable in the spread of disinformation: there were ten times more pro-Bolsonaro (or anti-Haddad) fake news stories than the opposite.
Finally, the findings indicate that, while Facebook and Twitter were relevant distribution platforms, WhatsApp had a major impact within closed groups, owing to reinforced cognitive effects and externalities that contribute to the susceptibility to, and spread of, fake news on social media.
... This participant suggested that the absence of nudges creates an illusion of validity for un-nudged content. Indeed, recent research points to the same phenomenon: when false reports are not tagged, they acquire a false sense of having been validated [87]. One way to address this concern is, again, to be transparent about content not being nudged, with an additional tooltip message for news items that are not nudged. ...
... Lastly, audiences' misperception that non-nudged content is credible indicates an additional design challenge. This effect has also been demonstrated in recent work [87]. One way to solve this problem would be to add nudges to all content. ...
... However, we see two reasons to believe that our results demonstrating the utility of nudges in credibility assessment could extend to a naturalistic Twitter setting as well. First, research shows that self-reported measures of social media usage correlate with observed behavior in the wild [39,87]. Second, large-scale survey-based research on nudges pertaining to news on social media shows that nudges affect related attitudes, such as the intention to share misinformation [81,89]. ...
Struggling to curb misinformation, social media platforms are experimenting with design interventions to enhance consumption of credible news on their platforms. Some of these interventions, such as the use of warning messages, are examples of nudges -- a choice-preserving technique to steer behavior. Despite their application, we do not know whether nudges can steer people into making conscious news credibility judgments online and, if they do, under what constraints. To answer this, we combine nudge techniques with heuristic-based information processing to design NudgeCred -- a browser extension for Twitter. NudgeCred directs users' attention to two design cues, the authority of a source and other users' collective opinion on a report, by activating three design nudges -- Reliable, Questionable, and Unreliable -- each denoting a particular level of credibility for news tweets. In a controlled experiment, we found that NudgeCred significantly helped users (n=430) distinguish the credibility of news tweets, unrestricted by three behavioral confounds -- political ideology, political cynicism, and media skepticism. A five-day field deployment with twelve participants revealed that NudgeCred improved their recognition of news items and attention towards all of our nudges, particularly Questionable. Among other considerations, participants proposed that designers should incorporate heuristics that users would trust. Our work informs nudge-based system design approaches for online media.
... While concerns about the "backfire" effect continue, several studies have failed to find or replicate this effect (e.g., Nyhan). Nevertheless, much of the recent literature suggests audiences will accept authoritative cues over ideological ones and update their beliefs in the direction of the fact check (Bullock et al. 2013; Duncan 2020; Wood and Porter 2019). In the context of social media labels, some experimental research has demonstrated that exposure to a "disputed flag" limits the likelihood of a user sharing information carrying a warning label (Mena 2020; Pennycook et al. 2020). Studies that focus on the effect of disputed flags have found a modest reduction in belief in fake news stories (Gao et al. 2018; Pennycook et al. 2020). ...
... In particular, when disputed flags reduced the effect of mis- and disinformation, users were exposed to the flag before the false or misleading information (Pennycook, Cannon, and Rand 2018). ...
Recently, social media platforms have introduced several measures to counter misleading information. Among these measures are state media labels, which help users identify and evaluate the credibility of state-backed news. YouTube was the first platform to introduce labels that provide information about state-backed news channels. While previous work has examined the efficiency of information labels in controlled lab settings, few studies have examined how state media labels affect user perceptions of content from state-backed outlets. This paper proposes new methodological and theoretical approaches to investigate the effect of state media labels on user engagement with content. Drawing on a content analysis of 8,071 YouTube comments posted before and after the labelling of five state-funded channels (Al Jazeera English, CGTN, RT, TRT World, and Voice of America), this paper analyses the effect state media labels had on user engagement with state-backed media content. We found the labels had no impact on the number of likes videos received before and after the policy introduction, except for RT, which received fewer likes after it was labelled. For RT, however, the policy implementation was associated with a 30 percent decrease in the likelihood of observing a critical comment, and a 70 percent decrease in the likelihood of observing a critical comment about RT as a media source. While other state-funded broadcasters, like Al Jazeera English and VOA News, received fewer critical comments after YouTube introduced its policy, this relationship was associated with how political the video was, rather than with the policy change. Our study contributes to the ongoing discussion on the efficacy of platform governance in relation to state-backed media, showing that audience preferences affect the effectiveness of labels.
... First, it may be that the types of tweets Twitter labeled (e.g., falsehoods about the electoral process) were those that would have spread most widely regardless of being labeled; nor can we rule out that users might have engaged with them even more had the platform not applied the label. Second, research suggests that warning labels reduce people's willingness to believe false information; despite the tweets' broad exposure, the label could have lowered users' trust in the false content (Pennycook et al., 2020). However, our data do show the wide reach of election-related messages that the platform had marked as disputed. ...
We analyze the spread of Donald Trump’s tweets that were flagged by Twitter using two intervention strategies—attaching a warning label and blocking engagement with the tweet entirely. We find that while blocking engagement on certain tweets limited their diffusion, messages we examined with warning labels spread further on Twitter than those without labels. Additionally, the messages that had been blocked on Twitter remained popular on Facebook, Instagram, and Reddit, being posted more often and garnering more visibility than messages that had either been labeled by Twitter or received no intervention at all. Taken together, our results emphasize the importance of considering content moderation at the ecosystem level.
... Using actual true and false content from social media (content that is representative of the broader categories) is particularly important if one wants to argue that one's favored intervention is likely to have an impact if implemented in the real world. This research has looked at factors such as fact-checking (Brashier et al., 2021; Clayton et al., 2019; Pennycook, Bear, et al., 2020), emphasizing news sources/publishers (Dias et al., 2020), digital media literacy tips (Guess, Lerner, et al., 2020), inoculation and other educational approaches (van der Linden et al., 2020), and subtly prompting people to think about accuracy to improve sharing decisions (Pennycook, McPhetres, et al., 2020). ...
Coincident with the global rise in concern about the spread of misinformation on social media, there has been an influx of behavioral research on so-called “fake news” (fabricated or false news headlines that are presented as if legitimate) and other forms of misinformation. These studies often present participants with news content that varies on relevant dimensions (e.g., true v. false, politically consistent v. inconsistent, etc.) and ask participants to make judgments (e.g., accuracy) or choices (e.g., whether they would share it on social media). This guide is intended to help researchers navigate the unique challenges that come with this type of research. Principal among these issues is that the nature of news content that is being spread on social media (whether it is false, misleading, or true) is a moving target that reflects current affairs in the context of interest. Steps are required if one wishes to present stimuli that allow generalization from the study to the real-world phenomenon of online misinformation. Furthermore, the selection of content to include can be highly consequential for the study’s outcome, and researcher biases can easily result in biases in a stimulus set. As such, we advocate for pretesting materials and, to this end, report our own pretest of 224 recent true and false news headlines, both relating to U.S. political issues and the COVID-19 pandemic. These headlines may be of use in the short term, but, more importantly, the pretest is intended to serve as an example of best practices in a quickly evolving area of research.
Do emotions we experience after reading headlines help us discern true from false information or cloud our judgement? Understanding whether emotions are associated with distinguishing truth from fiction and sharing information has implications for interventions designed to curb the spread of misinformation. Among 1,341 Facebook users in Nigeria, we find that emotions—specifically happiness and surprise—are associated with greater belief in and sharing of false, relative to true, COVID-19 headlines. Respondents who are older, are more reflective, and do not support the ruling party are better at discerning true from false COVID-19 information.
This article explores some of the critical challenges facing self-regulation and the regulatory environment for digital platforms. We examine several historical examples of firms and industries that attempted self-regulation before the Internet. All dealt with similar challenges involving multiple market actors and potentially harmful content or bias in search results: movies and video games, radio and television advertising, and computerized airline reservation systems. We follow this historical discussion with examples of digital platforms in the Internet era that have proven problematic in similar ways, with growing calls for government intervention through sectoral regulation and content controls. We end with some general guidelines for when and how specific types of platform businesses might self-regulate more effectively. Although our sample is small and exploratory, the research suggests that a combination of self-regulation and credible threats of government regulation may yield the best results. We also note that effective self-regulation need not happen exclusively at the level of the firm. When it is in their collective self-interest, as occurred before the Internet era, coalitions of firms within the same market and with similar business models may agree to abide by a jointly accepted set of rules or codes of conduct.
Misinformation, fake news, and rumors are a concern for societies, nations, and even organizations, all of which experience their negative impact. These are forms of information or news that are unverified and may be false, and they therefore have immense potential to harm social systems and beliefs. The internet and social media are commonly used to disseminate misinformation. Fake accounts, bot-operated accounts, and semi-automated accounts are predominantly used to spread misinformation, fake news, and rumors, and some websites are engaged in creating and aggregating unverified information. This chapter aims to define the different categories of false and unverified information. The impact of misinformation, fake news, and rumors and the causes of their dissemination are explored. The chapter also analyzes the methods, tools, and techniques that could be used to detect false information and discourage its dissemination. It is observed that there is a need to create awareness and educate internet users to spot and detect misinformation. Social media platforms and government authorities are also using algorithms and other technological frameworks to detect and eliminate misinformation, fake news, and rumors from the web. The findings of the study might be useful for internet users, academicians, policymakers, entrepreneurs, and managers in the news and social media industries. The outcomes of the study can be helpful in predicting, explaining, and controlling the creation and dissemination of misinformation, fake news, and rumors.
Objective
Fake news represents a particularly egregious and direct avenue by which inaccurate beliefs have been propagated via social media. We investigate the psychological profile of individuals who fall prey to fake news.
Method
We recruited 1,606 participants from Amazon's Mechanical Turk for three online surveys.
Results
The tendency to ascribe profundity to randomly generated sentences – pseudo‐profound bullshit receptivity – correlates positively with perceptions of fake news accuracy, and negatively with the ability to differentiate between fake and real news (media truth discernment). Relatedly, individuals who overclaim their level of knowledge also judge fake news to be more accurate. We also extend previous research indicating that analytic thinking correlates negatively with perceived accuracy by showing that this relationship is not moderated by the presence/absence of the headline's source (which has no effect on accuracy), or by familiarity with the headlines (which correlates positively with perceived accuracy of fake and real news).
Conclusion
Our results suggest that belief in fake news may be driven, to some extent, by a general tendency to be overly accepting of weak claims. This tendency, which we refer to as reflexive open‐mindedness, may be partly responsible for the prevalence of epistemically suspect beliefs writ large.
There is an increasing imperative for psychologists and other behavioral scientists to understand how people behave on social media. However, it is often very difficult to execute experimental research on actual social media platforms, or to link survey responses to online behavior in order to perform correlational analyses. Thus, there is a natural desire to use self-reported behavioral intentions in standard survey studies to gain insight into online behavior. But are such hypothetical responses hopelessly disconnected from actual sharing decisions? Or are online survey samples via sources such as Amazon Mechanical Turk (MTurk) so different from the average social media user that the survey responses of one group give little insight into the on-platform behavior of the other? Here we investigate these issues by examining 67 pieces of political news content. We evaluate whether there is a meaningful relationship between (i) the level of sharing (tweets and retweets) of a given piece of content on Twitter, and (ii) the extent to which individuals (total N = 993) in online surveys on MTurk reported being willing to share that same piece of content. We found that the same news headlines that were more likely to be hypothetically shared on MTurk were actually shared more frequently by Twitter users, r = .44. For example, across the observed range of MTurk sharing fractions, a 20 percentage point increase in the fraction of MTurk participants who reported being willing to share a news headline on social media was associated with 10x as many actual shares on Twitter. This finding suggests that self-reported sharing intentions collected in online surveys are likely to provide some meaningful insight into what participants would actually share on social media.
Reducing the spread of misinformation, especially on social media, is a major challenge. We investigate one potential approach: having social media platform algorithms preferentially display content from news sources that users rate as trustworthy. To do so, we ask whether crowdsourced trust ratings can effectively differentiate more versus less reliable sources. We ran two preregistered experiments (n = 1,010 from Mechanical Turk and n = 970 from Lucid) where individuals rated familiarity with, and trust in, 60 news sources from three categories: (i) mainstream media outlets, (ii) hyperpartisan websites, and (iii) websites that produce blatantly false content (“fake news”). Despite substantial partisan differences, we find that laypeople across the political spectrum rated mainstream sources as far more trustworthy than either hyperpartisan or fake news sources. Although this difference was larger for Democrats than Republicans—mostly due to distrust of mainstream sources by Republicans—every mainstream source (with one exception) was rated as more trustworthy than every hyperpartisan or fake news source across both studies when equally weighting ratings of Democrats and Republicans. Furthermore, politically balanced layperson ratings were strongly correlated (r = 0.90) with ratings provided by professional fact-checkers. We also found that, particularly among liberals, individuals higher in cognitive reflection were better able to discern between low- and high-quality sources. Finally, we found that excluding ratings from participants who were not familiar with a given news source dramatically reduced the effectiveness of the crowd. Our findings indicate that having algorithms up-rank content from trusted media outlets may be a promising approach for fighting the spread of misinformation on social media.
Are citizens willing to accept journalistic fact-checks of misleading claims from candidates they support and to update their attitudes about those candidates? Previous studies have reached conflicting conclusions about the effects of exposure to counter-attitudinal information. As fact-checking has become more prominent, it is therefore worth examining how people respond to fact-checks of politicians—a question with important implications for understanding the effects of this journalistic format on elections. We present results from two experiments conducted during the 2016 campaign that test the effects of exposure to realistic journalistic fact-checks of claims made by Donald Trump during his convention speech and a general election debate. These messages improved the accuracy of respondents’ factual beliefs, even among his supporters, but had no measurable effect on attitudes toward Trump. These results suggest that journalistic fact-checks can reduce misperceptions but often have minimal effects on candidate evaluations or vote choice.
Delusion-prone individuals may be more likely to accept even delusion-irrelevant implausible ideas because of their tendency to engage in less analytic and less actively open-minded thinking. Consistent with this suggestion, two online studies with over 900 participants demonstrated that although delusion-prone individuals were no more likely to believe true news headlines, they displayed an increased belief in “fake news” headlines, which often feature implausible content. Mediation analyses suggest that analytic cognitive style may partially explain these individuals’ increased willingness to believe fake news. Exploratory analyses showed that dogmatic individuals and religious fundamentalists were also more likely to believe false (but not true) news, and that these relationships may be fully explained by analytic cognitive style. Our findings suggest that existing interventions that increase analytic and actively open-minded thinking might be leveraged to help reduce belief in fake news.
Why do people believe blatantly inaccurate news headlines ("fake news")? Do we use our reasoning abilities to convince ourselves that statements that align with our ideology are true, or does reasoning allow us to effectively differentiate fake from real regardless of political ideology? Here we test these competing accounts in two studies (total N = 3446 Mechanical Turk workers) by using the Cognitive Reflection Test (CRT) as a measure of the propensity to engage in analytical reasoning. We find that CRT performance is negatively correlated with the perceived accuracy of fake news, and positively correlated with the ability to discern fake news from real news - even for headlines that align with individuals' political ideology. Moreover, overall discernment was actually better for ideologically aligned headlines than for misaligned headlines. Finally, a headline-level analysis finds that CRT is negatively correlated with perceived accuracy of relatively implausible (primarily fake) headlines, and positively correlated with perceived accuracy of relatively plausible (primarily real) headlines. In contrast, the correlation between CRT and perceived accuracy is unrelated to how closely the headline aligns with the participant's ideology. Thus, we conclude that analytic thinking is used to assess the plausibility of headlines, regardless of whether the stories are consistent or inconsistent with one's political ideology. Our findings therefore suggest that susceptibility to fake news is driven more by lazy thinking than it is by partisan bias per se - a finding that opens potential avenues for fighting fake news.
Can citizens heed factual information, even when such information challenges their partisan and ideological attachments? The “backfire effect,” described by Nyhan and Reifler (Polit Behav 32(2):303–330. https://doi.org/10.1007/s11109-010-9112-2, 2010), says no: rather than simply ignoring factual information, presenting respondents with facts can compound their ignorance. In their study, conservatives presented with factual information about the absence of Weapons of Mass Destruction in Iraq became more convinced that such weapons had been found. The present paper presents results from five experiments in which we enrolled more than 10,100 subjects and tested 52 issues of potential backfire. Across all experiments, we found no corrections capable of triggering backfire, despite testing precisely the kinds of polarized issues where backfire should be expected. Evidence of factual backfire is far more tenuous than prior research suggests. By and large, citizens heed factual information, even when such information challenges their ideological commitments.
To what extent do survey experimental treatment effect estimates generalize to other populations and contexts? Survey experiments conducted on convenience samples have often been criticized on the grounds that subjects are sufficiently different from the public at large to render the results of such experiments uninformative more broadly. In the presence of moderate treatment effect heterogeneity, however, such concerns may be allayed. I provide evidence from a series of 15 replication experiments that results derived from convenience samples like Amazon’s Mechanical Turk are similar to those obtained from national samples. Either the treatments deployed in these experiments cause similar responses for many subject types or convenience and national samples do not differ much with respect to treatment effect moderators. Using evidence of limited within-experiment heterogeneity, I show that the former is likely to be the case. Despite a wide diversity of background characteristics across samples, the effects uncovered in these experiments appear to be relatively homogeneous.