Article · PDF Available

Prior Exposure Increases Perceived Accuracy of Fake News


Abstract

The 2016 US Presidential Election brought considerable attention to the phenomenon of “fake news”: entirely fabricated and often partisan content that is presented as factual. Here we demonstrate one mechanism that contributes to the believability of fake news: fluency via prior exposure. Using actual fake news headlines presented as they were seen on Facebook, we show that even a single exposure increases subsequent perceptions of accuracy, both within the same session and after a week. Moreover, this “illusory truth effect” for fake news headlines occurs despite a low level of overall believability, and even when the stories are labeled as contested by fact checkers or are inconsistent with the reader’s political ideology. These results suggest that social media platforms help to incubate belief in blatantly false news stories, and that tagging such stories as disputed is not an effective solution to this problem. Interestingly, however, we also find that prior exposure does not impact entirely implausible statements (e.g., “The Earth is a perfect square”). These observations indicate that although extreme implausibility is a boundary condition of the illusory truth effect, only a small degree of potential plausibility is sufficient for repetition to increase perceived accuracy. As a consequence, the scope and impact of repetition on beliefs is greater than previously assumed.
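The headline claim reduces to a within-subject contrast: accuracy ratings for headlines seen once before versus novel headlines. A minimal sketch of that contrast on simulated data (the sample size, rating scale, and effect size below are illustrative assumptions, not the paper's values or code):

```python
# Hedged sketch: paired contrast of accuracy ratings for previously seen vs.
# novel fake-news headlines. Data are simulated; nothing here is the authors' code.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 200                                      # hypothetical number of participants
novel = rng.normal(1.8, 0.5, n)              # mean rating of novel headlines (1-4 scale assumed)
repeated = novel + rng.normal(0.12, 0.3, n)  # small boost from a single prior exposure

t, p = stats.ttest_rel(repeated, novel)      # paired t-test: repeated vs. novel
print(f"mean illusory-truth boost = {np.mean(repeated - novel):.3f}, "
      f"t({n - 1}) = {t:.2f}, p = {p:.3g}")
```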
... For instance, repetition increases the perceived accuracy of trivia statements [25], opinion statements [26], and rumors [27], even when these statements are highly implausible [28] or contradict participants' prior knowledge [29]. The illusory truth effect has also been observed in the domain of news content, with repeated fake news and real news both being perceived as more accurate than novel news of either type [30]. This effect has also been reported in children as young as five years old [31]. ...
... If analytical thinking is associated with media truth discernment [19,20], and if analytical thinking develops linearly between the ages of 8 and 15 [32], then media truth discernment would also increase linearly during early adolescence, sustained in part by the development of reasoning abilities. Specifically, we expected the difference between true and fake news ratings to increase with age. 2. If both children and adults experience the illusory truth effect [30,31], then adolescents should also be sensitive to this effect and rate repeated news as more accurate than novel news (no expectation concerning the developmental trend). 3. ...
... Participants were exposed to half of the news items before rating their accuracy. We computed participants' levels of media truth discernment (i.e., the difference between their ratings of real and fake news) and the illusory truth effect (i.e., the difference between their ratings of familiarized and non-familiarized news) [30]. Participants also completed the CRT. ...
Article
Full-text available
The spread of online fake news is emerging as a major threat to human society and democracy. Previous studies have investigated media truth discernment among adults but not among adolescents. Adolescents might face a greater risk of believing fake news, particularly fake news that is shared via social media, because of their vulnerabilities in terms of reasoning. In the present study, we investigated (1) the development of media truth discernment and the illusory truth effect from adolescence to adulthood and (2) whether the development of media truth discernment and the illusory truth effect are related to the development of reasoning ability. To accomplish this task, we recruited 432 adolescents aged 11 to 14 years as well as 132 adults. Participants were asked to rate the perceived accuracy of both real and fake news headlines. Participants were exposed to half of the news items before entering the rating phase. Finally, participants completed the Cognitive Reflection Test (CRT). Media truth discernment (i.e., the difference between participants' ratings of fake and real news) developed linearly with increasing age, and participants rated familiarized headlines as more accurate than novel headlines at all ages (i.e., the illusory truth effect). The development of media truth discernment (but not of the illusory truth effect) was related to the development of reasoning abilities with increasing age. Our findings highlight the urgent need to improve logical thinking among adolescents to help them detect fake news online.
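For concreteness, both measures described above are simple rating differences. A minimal pandas sketch under assumed column names (participant, veracity, familiarized, rating); this is an illustration, not the study's analysis code:

```python
# Hedged sketch: per-participant media truth discernment and illusory truth
# effect as mean rating differences. All column names are assumptions.
import pandas as pd

def score_participants(df: pd.DataFrame) -> pd.DataFrame:
    """Expects columns: participant, veracity ('real'/'fake'),
    familiarized (bool), rating (numeric perceived accuracy)."""
    by_veracity = df.pivot_table(index="participant", columns="veracity",
                                 values="rating", aggfunc="mean")
    by_exposure = df.pivot_table(index="participant", columns="familiarized",
                                 values="rating", aggfunc="mean")
    return pd.DataFrame({
        # discernment: mean rating of real news minus mean rating of fake news
        "discernment": by_veracity["real"] - by_veracity["fake"],
        # illusory truth effect: familiarized minus non-familiarized mean rating
        "illusory_truth": by_exposure[True] - by_exposure[False],
    })

# e.g., score_participants(ratings).mean() gives sample-level effect estimates
```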
... This phenomenon remains robust across procedural variables such as subject, statement type, presentation mode, and repetition interval (Dechêne et al., 2010). The illusory truth effect is particularly significant for propaganda, as it offers a mechanism for improving the believability of the propaganda message (Pennycook et al., 2018). The illusory truth effect is so central to propaganda that it underpins the famous law of propaganda often attributed to Goebbels: "Repeat a lie often enough, and it becomes the truth." ...
Preprint
Full-text available
At least since Francis Bacon, the slogan 'knowledge is power' has been used to capture the relationship between decision-making at a group level and information. We know that being able to shape the informational environment for a group is a way to shape their decisions; it is essentially a way to make decisions for them. This paper focuses on strategies that are intentionally, by design, impactful on the decision-making capacities of groups, effectively shaping their ability to take advantage of information in their environment. Among these, the best known are political rhetoric, propaganda, and misinformation. The phenomenon this paper brings out from these is a relatively new strategy, which we call slopaganda. According to The Guardian, News Corp Australia is currently churning out 3000 'local' generative AI (GAI) stories each week. In the coming years, such 'generative AI slop' will present multiple knowledge-related (epistemic) challenges. We draw on contemporary research in cognitive science and artificial intelligence to diagnose the problem of slopaganda, describe some recent troubling cases, then suggest several interventions that may help to counter slopaganda.
... While these differences might be taken to imply that the reduced support nullifies the effects of misinformation exposure after attachment, we caution against this interpretation. Viewing false information, even if the viewer initially doubts its validity, can increase their likelihood of agreeing with it later [52]. Thus, each view prevented by a community note is meaningful. ...
Preprint
Social networks scaffold the diffusion of information on social media. Much attention has been given to the spread of true vs. false content on online social platforms, including the structural differences between their diffusion patterns. However, much less is known about how platform interventions on false content alter the engagement with and diffusion of such content. In this work, we estimate the causal effects of Community Notes, a novel fact-checking feature adopted by X (formerly Twitter) to solicit and vet crowd-sourced fact-checking notes for false content. We gather detailed time series data for 40,074 posts for which notes have been proposed and use synthetic control methods to estimate a range of counterfactual outcomes. We find that attaching fact-checking notes significantly reduces the engagement with and diffusion of false content. We estimate that, on average, the notes resulted in reductions of 45.7% in reposts, 43.5% in likes, 22.9% in replies, and 14.0% in views after being attached. Over the posts' entire lifespans, these reductions amount to 11.4% fewer reposts, 13.0% fewer likes, 7.3% fewer replies, and 5.7% fewer views on average. In reducing reposts, we observe that diffusion cascades for fact-checked content are less deep, but not less broad, than synthetic control estimates for non-fact-checked content with similar reach. This structural difference contrasts notably with differences between false vs. true content diffusion itself, where false information diffuses farther, but with structural patterns that are otherwise indistinguishable from those of true information, conditional on reach.
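The synthetic control logic here is to reweight a donor pool of comparable non-noted posts so that the weighted combination tracks the treated post's pre-note engagement, then read the post-note gap as the estimated effect. A minimal sketch on toy data (the setup and numbers are assumptions, not the study's pipeline):

```python
# Hedged sketch of a synthetic control estimate for one fact-checked post.
# `treated` and `donors` are simulated hourly engagement counts.
import numpy as np
from scipy.optimize import minimize

def synthetic_control_weights(treated_pre, donors_pre):
    """Non-negative donor weights summing to 1 that minimize the squared
    pre-treatment gap between the treated unit and the synthetic control."""
    n = donors_pre.shape[0]
    loss = lambda w: np.sum((treated_pre - w @ donors_pre) ** 2)
    res = minimize(loss, np.full(n, 1.0 / n),
                   bounds=[(0.0, 1.0)] * n,
                   constraints={"type": "eq", "fun": lambda w: w.sum() - 1.0})
    return res.x

rng = np.random.default_rng(0)
T, t0 = 24, 12                                   # 24 hours; note attached at hour 12
donors = rng.poisson(100.0, size=(5, T)).astype(float)
treated = donors.mean(axis=0) + rng.normal(0.0, 2.0, T)
treated[t0:] *= 0.6                              # simulate a 40% post-note engagement drop

w = synthetic_control_weights(treated[:t0], donors[:, :t0])
counterfactual = w @ donors                      # synthetic "no note" trajectory
effect = treated[t0:] - counterfactual[t0:]      # estimated hourly causal effect
print(f"estimated mean post-note effect: {effect.mean():.1f} engagements/hour")
```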
... As a result, individuals may be more inclined to believe poorly supported claims that align with their strongly held beliefs [40]. Moreover, the availability heuristic is also relevant, as it concerns the likelihood of believing information based on previous exposure to it [41]. In fact, a single exposure to a headline containing misinformation can increase people's later belief in the headline [42]. ...
Article
Full-text available
This study analyzed dental caries-related Facebook posts in Brazilian Portuguese to identify misinformation and predict user interaction factors. A sample of 500 posts (published between August 2016 and August 2021) was obtained via CrowdTangle. Two independent and calibrated investigators (intraclass correlation coefficients ranging from 0.80 to 0.98) characterized the posts based on their time of publication, author's profile, sentiment, aim of content, motivation, and facticity. Most posts (90.2%) originated from Brazil, and they were predominantly shared by business profiles (94.2%). Approximately 67.2% of these posts focused on preventive dental issues, driven by noncommercial interests in 88.8% of cases. Misinformation was present in 39.6% of the posts, particularly those with a positive sentiment and commercial motivation. Business profiles and positive sentiment were identified as predictive factors for higher post engagement. These findings highlight that a significant proportion of dental caries-related posts contain misinformation, especially when associated with positive emotions and commercial motivation.
... The topic has been addressed by different research areas, such as Information Systems (Li & Sakamoto, 2014; Zhang & Ghorbani, 2020), Consumer Behavior (Talwar et al., 2019; Visentin, Pizzi & Pichierri, 2019; Di Domenico et al., 2021), and Psychology (Allcott & Gentzkow, 2017; Pennycook & Rand, 2021; Van Bavel et al., 2021), to name just a few. False news, or fake news, is defined as "entirely fabricated and often partisan content that is presented as factual" (Pennycook et al., 2018, p. 1865). Allcott and Gentzkow (2017, p. 213), on the other hand, define fake news as "news articles that are intentionally and verifiably false, and could mislead readers". ...
Conference Paper
This article proposes a process-based model, grounded in the Diffusion of Innovations Theory, to investigate how false news, or fake news, is shared through social networks. To this end, the data were categorized, according to a process taxonomy, as: input, output, players, and activities. It was found that the individual traits associated with decision-making, the perceived characteristics of the fake news, and the attributes of the communication channel are determinants of an individual's attitude toward a false news item they receive. Moreover, according to the proposed model, the process of spreading fake news takes time and occurs in five main stages: knowledge, persuasion, decision, implementation, and confirmation. Individuals' attitudes form throughout these stages, but only during the confirmation stage is the false news shared with their respective networks. Finally, it is concluded that social media, mediated by the power of algorithms and echo chambers, has the potential to greatly amplify the dissemination of false information. Keywords: Fake News, Social Media, Disinformation, Diffusion of Innovations, DOI, TAM.
... Most relevant studies involved questioning participants before and after exposing them to misinformation to gauge the influence of misinformation on their choices and reasoning. Some found that even minimal exposure increases familiarity and subsequent perceptions of accuracy, both immediately and after a week [9], whereas others found that such exposure does not materialise into a change in behaviour [4]. ...
Preprint
Full-text available
Despite extensive research and development of tools and technologies for misinformation tracking and detection, we often find ourselves largely on the losing side of the battle against misinformation. In an era where misinformation poses a substantial threat to public discourse, trust in information sources, and societal and political stability, it is imperative that we regularly revisit and reorient our work strategies. While we have made significant strides in understanding how and why misinformation spreads, we must now broaden our focus and explore how technology can help realise new approaches to address this complex challenge more efficiently.
... check. Subsequently, following prior research (Pennycook, Cannon, and Rand 2018) and our pre-registration, a subset of subjects were excluded for three reasons: (i) they self-reported having answered randomly, (ii) they failed the second attention check, or (iii) they responded excessively fast (i.e., took less than roughly 6 seconds to respond to the user-perception items). These subjects still received remuneration after exclusion. ...
Preprint
Large language models (LLMs) are increasingly prevalent in recommender systems, where LLMs can be used to generate personalized recommendations. Here, we examine how different LLM-generated explanations for movie recommendations affect users' perceptions of cognitive, affective, and utilitarian needs and consumption intentions. In a pre-registered, between-subject online experiment (N=759) and follow-up interviews (N=30), we compare (a) LLM-generated generic explanations and (b) LLM-generated contextualized explanations. Our findings show that contextualized explanations (i.e., explanations that incorporate users' past behaviors) effectively meet users' cognitive needs while increasing users' intentions to watch recommended movies. However, adding explanations offers limited benefits in meeting users' utilitarian and affective needs, raising concerns about the proper design and implications of LLM-generated explanations. Qualitative insights from interviews reveal that referencing users' past preferences enhances trust and understanding but can feel excessive if overused. Furthermore, users who engage more actively and positively with the recommender system and with movie-watching gain substantially more from contextualized explanations. Overall, our research clarifies how LLM-generated recommendations influence users' motivations and behaviors, providing valuable insights for the future development of user-centric recommender systems, a key element in social media platforms and online ecosystems.
Article
Full-text available
How good are people at judging the veracity of news? We conducted a systematic literature review and pre-registered meta-analysis of 303 effect sizes from 67 experimental articles evaluating accuracy ratings of true and fact-checked false news (NParticipants = 194,438 from 40 countries across 6 continents). We found that people rated true news as more accurate than false news (Cohen’s d = 1.12 [1.01, 1.22]) and were better at rating false news as false than at rating true news as true (Cohen’s d = 0.32 [0.24, 0.39]). In other words, participants were able to discern true from false news and erred on the side of skepticism rather than credulity. We found no evidence that the political concordance of the news had an effect on discernment, but participants were more skeptical of politically discordant news (Cohen’s d = 0.78 [0.62, 0.94]). These findings lend support to crowdsourced fact-checking initiatives and suggest that, to improve discernment, there is more room to increase the acceptance of true news than to reduce the acceptance of fact-checked false news.
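As a reference point for the effect sizes quoted above, Cohen's d is the standardized mean difference between the two sets of ratings. A minimal worked sketch on simulated ratings (the means, SDs, and sample sizes are illustrative assumptions, not the meta-analytic pipeline):

```python
# Hedged sketch: Cohen's d for true- vs. false-news accuracy ratings.
# Simulated data only; not the article's meta-analysis code.
import numpy as np

def cohens_d(x, y):
    """Standardized mean difference using the pooled standard deviation."""
    nx, ny = len(x), len(y)
    pooled_var = ((nx - 1) * np.var(x, ddof=1) +
                  (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2)
    return (np.mean(x) - np.mean(y)) / np.sqrt(pooled_var)

rng = np.random.default_rng(1)
true_ratings = rng.normal(3.0, 1.0, 500)    # hypothetical ratings for true news
false_ratings = rng.normal(1.9, 1.0, 500)   # hypothetical ratings for false news
print(f"d = {cohens_d(true_ratings, false_ratings):.2f}")
```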
Article
Full-text available
Numerous psychological findings have shown that incidental exposure to ideas makes those ideas seem more true, a finding commonly referred to as the ‘illusory truth’ effect. Under many accounts of the illusory truth effect, initial exposure to a statement provides a metacognitive feeling of ‘fluency’ or familiarity that, upon subsequent exposure, leads people to infer that the statement is more likely to be true. However, genuine beliefs do not only affect truth judgements about individual statements, they also imply other beliefs and drive decision-making. Here, we consider whether exposure to ‘premise’ statements affects people’s truth ratings for novel ‘implied’ statements, a pattern of findings we call the ‘illusory implication’ effect. We argue these effects would constitute evidence for genuine belief change from incidental exposure and identify a handful of existing findings that offer preliminary support for this claim. Building upon these, we conduct three new preregistered experiments to further test this hypothesis, finding additional evidence that exposure to ‘premise’ statements affected participants’ truth ratings for novel ‘implied’ statements, including for considerably more distant implications than those previously explored. Our findings suggest that the effects of incidental exposure reach further than previously thought, with potentially consequential implications for concerns around mis- and dis-information.
Article
Full-text available
People frequently continue to use inaccurate information in their reasoning even after a credible retraction has been presented. This phenomenon is often referred to as the continued influence effect of misinformation. The repetition of the original misconception within a retraction could contribute to this phenomenon, as it could inadvertently make the "myth" more familiar, and familiar information is more likely to be accepted as true. From a dual-process perspective, familiarity-based acceptance of myths is most likely to occur in the absence of strategic memory processes. We thus examined factors known to affect whether strategic memory processes can be utilized: age, detail, and time. Participants rated their belief in various statements of unclear veracity, and facts were subsequently affirmed and myths were retracted. Participants then re-rated their belief either immediately or after a delay. We compared groups of young and older participants, and we manipulated the amount of detail presented in the affirmative/corrective explanations, as well as the retention interval between encoding and a retrieval attempt. We found that (1) older adults over the age of 65 were worse at sustaining their post-correction belief that myths were inaccurate, (2) a greater level of explanatory detail promoted more sustained belief change, and (3) fact affirmations promoted more sustained belief change in comparison to myth retractions over the course of one week (but not over three weeks). This supports the notion that familiarity is indeed a driver of continued influence effects.
Article
Full-text available
This study investigated the cognitive processing of true and false political information. Specifically, it examined the impact of source credibility on the assessment of veracity when information comes from a polarizing source (Experiment 1), and the effectiveness of explanations when they come from one's own political party or an opposition party (Experiment 2). Participants rated their belief in factual and incorrect statements that Donald Trump made on the campaign trail; facts were subsequently affirmed and misinformation retracted. Participants then re-rated their belief immediately or after a delay. Experiment 1 found that (1) if information was attributed to Trump, Republican supporters of Trump believed it more than if it was presented without attribution, whereas the opposite was true for Democrats; and (2) although Trump supporters reduced their belief in misinformation items following a correction, they did not change their voting preferences. Experiment 2 revealed that the explanation's source had relatively little impact, and belief updating was more influenced by the perceived credibility of the individual who initially put forward the information. These findings suggest that people use political figures as a heuristic to guide evaluation of what is true or false, yet do not necessarily insist on veracity as a prerequisite for supporting political candidates.
Article
Full-text available
People frequently rely on information even after it has been retracted, a phenomenon known as the continued-influence effect of misinformation. One factor proposed to explain the ineffectiveness of retractions is that repeating misinformation during a correction may inadvertently strengthen the misinformation by making it more familiar. Practitioners are therefore often encouraged to design corrections that avoid misinformation repetition. The current study tested this recommendation, investigating whether retractions become more or less effective when they include reminders or repetitions of the initial misinformation. Participants read fictional reports, some of which contained retractions of previous information, and inferential reasoning was measured via questionnaire. Retractions varied in the extent to which they served as misinformation reminders. Retractions that explicitly repeated the misinformation were more effective in reducing misinformation effects than retractions that avoided repetition, presumably because of enhanced salience. Recommendations for effective myth debunking may thus need to be revised.
Article
To what extent do survey experimental treatment effect estimates generalize to other populations and contexts? Survey experiments conducted on convenience samples have often been criticized on the grounds that subjects are sufficiently different from the public at large to render the results of such experiments uninformative more broadly. In the presence of moderate treatment effect heterogeneity, however, such concerns may be allayed. I provide evidence from a series of 15 replication experiments that results derived from convenience samples like Amazon’s Mechanical Turk are similar to those obtained from national samples. Either the treatments deployed in these experiments cause similar responses for many subject types or convenience and national samples do not differ much with respect to treatment effect moderators. Using evidence of limited within-experiment heterogeneity, I show that the former is likely to be the case. Despite a wide diversity of background characteristics across samples, the effects uncovered in these experiments appear to be relatively homogeneous.
Article
Following the 2016 US presidential election, many have expressed concern about the effects of false stories ("fake news"), circulated largely through social media. We discuss the economics of fake news and present new data on its consumption prior to the election. Drawing on web browsing data, archives of fact-checking websites, and results from a new online survey, we find: 1) social media was an important but not dominant source of election news, with 14 percent of Americans calling social media their "most important" source; 2) of the known false news stories that appeared in the three months before the election, those favoring Trump were shared a total of 30 million times on Facebook, while those favoring Clinton were shared 8 million times; 3) the average American adult saw on the order of one or perhaps several fake news stories in the months around the election, with just over half of those who recalled seeing them believing them; and 4) people are much more likely to believe stories that favor their preferred candidate, especially if they have ideologically segregated social media networks.
Article
Political misperceptions can distort public debate and undermine people's ability to form meaningful opinions. Why do people often hold these false or unsupported beliefs, and why is it sometimes so difficult to convince them otherwise? We argue that political misperceptions are typically rooted in directionally motivated reasoning, which limits the effectiveness of corrective information about controversial issues and political figures. We discuss factors known to affect the prevalence of directionally motivated reasoning and assess strategies for accurately measuring misperceptions in surveys. Finally, we address the normative implications of misperceptions for democracy and suggest important topics for future research.
Article
Online news domains increasingly rely on social media to drive traffic to their websites. Yet we know surprisingly little about how a social media conversation mentioning an online article actually generates clicks. Sharing behaviors, in contrast, have been fully or partially available and scrutinized over the years. While this has led to multiple assumptions about the diffusion of information, each assumption was designed or validated while ignoring actual clicks. We present a large-scale, unbiased study of social clicks (also the first data of its kind), gathering a month of web visits to online resources that are located in 5 leading news domains and that are mentioned in the third-largest social media platform by web referral (Twitter). Our dataset amounts to 2.8 million shares, together responsible for 75 billion potential views on this social media platform, and 9.6 million actual clicks to 59,088 unique resources. We design a reproducible methodology and carefully correct its biases. As we prove, properties of clicks impact multiple aspects of information diffusion, all previously unknown: (i) secondary resources, which are not promoted through headlines and are responsible for the long tail of content popularity, generate more clicks in both absolute and relative terms; (ii) social media attention is actually long-lived, in contrast with temporal evolution estimated from shares or receptions; (iii) the actual influence of an intermediary or a resource is poorly predicted by its share count, but we show how that prediction can be made more precise.
Article
Survey experiments have become a central methodology across the social sciences. Researchers can combine experiments’ causal power with the generalizability of population-based samples. Yet, due to the expense of population-based samples, much research relies on convenience samples (e.g. students, online opt-in samples). The emergence of affordable, but non-representative online samples has reinvigorated debates about the external validity of experiments. We conduct two studies of how experimental treatment effects obtained from convenience samples compare to effects produced by population samples. In Study 1, we compare effect estimates from four different types of convenience samples and a population-based sample. In Study 2, we analyze treatment effects obtained from 20 experiments implemented on a population-based sample and Amazon's Mechanical Turk (MTurk). The results reveal considerable similarity between many treatment effects obtained from convenience and nationally representative population-based samples. While the results thus bolster confidence in the utility of convenience samples, we conclude with guidance for the use of a multitude of samples for advancing scientific knowledge.