Prior Exposure Increases Perceived Accuracy of Fake News
Gordon Pennycook, Tyrone D. Cannon, and David G. Rand
Yale University
The 2016 U.S. presidential election brought considerable attention to the phenomenon of “fake news”:
entirely fabricated and often partisan content that is presented as factual. Here we demonstrate one
mechanism that contributes to the believability of fake news: fluency via prior exposure. Using actual
fake-news headlines presented as they were seen on Facebook, we show that even a single exposure
increases subsequent perceptions of accuracy, both within the same session and after a week. Moreover,
this “illusory truth effect” for fake-news headlines occurs despite a low level of overall believability and
even when the stories are labeled as contested by fact checkers or are inconsistent with the reader’s
political ideology. These results suggest that social media platforms help to incubate belief in blatantly
false news stories and that tagging such stories as disputed is not an effective solution to this problem.
It is interesting, however, that we also found that prior exposure does not impact entirely implausible
statements (e.g., “The earth is a perfect square”). These observations indicate that although extreme
implausibility is a boundary condition of the illusory truth effect, only a small degree of potential
plausibility is sufficient for repetition to increase perceived accuracy. As a consequence, the scope and
impact of repetition on beliefs is greater than has been previously assumed.
Keywords: fake news, news media, social media, fluency, illusory truth effect
Supplemental materials: http://dx.doi.org/10.1037/xge0000465.supp
The ability to form accurate beliefs, particularly about issues of
great importance, is key to people’s success as individuals as well
as the functioning of their societal institutions (and, in particular,
democracy). Across a wide range of domains, it is critically
important to correctly assess what is true and what is false:
Accordingly, differentiating real from unreal is at the heart of
society’s constructs of rationality and sanity (Corlett, Krystal,
Taylor, & Fletcher, 2009; Sanford, Veckenstedt, Moritz, Balzan, &
Woodward, 2014). Yet the ability to form and update beliefs about
the world sometimes goes awry—and not just in the context of
inconsequential, small-stakes decisions.
The potential for systematic inaccuracy in important beliefs has
been particularly highlighted by the widespread consumption of
disinformation during the 2016 U.S. presidential election. This is
most notably exemplified by so-called fake news—that is, news
stories that were fabricated (but presented as if from legitimate
sources) and promoted on social media to deceive the public for
ideological and/or financial gain (Lazer et al., 2018). An analysis
of the top performing news articles on Facebook in the months
leading up to the election revealed that the top fake-news articles
actually outperformed the top real-news articles in terms of shares,
likes, and comments (Silverman, Strapagiel, Shaban, & Hall,
Gordon Pennycook and Tyrone D. Cannon, Department of Psychology,
Yale University; David G. Rand, Department of Psychology, Department
of Economics, and School of Management, Yale University.
David G. Rand is now at the Sloan School of Management and Depart-
ment of Brain & Cognitive Sciences, Massachusetts Institute of Technol-
ogy.
All data are available online (https://osf.io/txf46/). A working paper
version of the article was posted online via the Social Sciences Research
Network (https://ssrn.com/abstract=2958246) and on ResearchGate (https://
www.researchgate.net/publication/317069544_Prior_Exposure_Increases_
Perceived_Accuracy_of_Fake_News). Portions of this research were pre-
sented in 2017 and 2018 at the following venues: (a) Conference for
Combating Fake News: An Agenda for Research and Action, Harvard Law
School and Northeastern University; (b) Canadian Society for Brain, Be-
haviour and Cognitive Science Annual Conference, in Regina, Saskatche-
wan; (c) Brown University, Department of Psychology; (d) Harvard Uni-
versity, Department of Psychology; (e) Yale University, Institute for
Network Science and Technology and Ethics Study Group; (f) University
of Connecticut, Cognitive Science Colloquium; (g) University of Regina,
Department of Psychology; (h) University of Saskatchewan, Department of
Psychology; and (i) the 2018 Annual Convention of the Society of Per-
sonality and Social Psychology, in Atlanta, Georgia.
This research was supported by a Social Sciences and Humanities
Research Council of Canada Banting Postdoctoral Fellowship (to Gordon Penny-
cook), National Institute of Mental Health Grant MH081902 (to Tyrone D.
Cannon), and grants from the Templeton World Charity Foundation and
the Defense Advanced Research Projects Agency (to David G. Rand). The
content is solely the responsibility of the authors and does not necessarily
reflect the official views of the Social Sciences and Humanities Research
Council of Canada, the National Institute of Mental Health, or the Defense Advanced
Research Projects Agency.
Correspondence concerning this article should be addressed to Gordon
Pennycook, who is now at the Faculty of Business Administration, Uni-
versity of Regina, Regina, Saskatchewan, Canada, S4S 0A2. E-mail:
grpennycook@gmail.com
2016). Although it is unclear to what extent fake news influenced
the outcome of the presidential election (Allcott & Gentzkow,
2017), there is no question that many people were deceived by
entirely fabricated (and often quite fanciful) fake-news stories—
people including, for example, high-ranking government officials,
such as Pakistan’s defense minister (Goldman, 2016). How is it
that so many people came to believe stories that were patently and
demonstrably untrue? What mechanisms underlie these false be-
liefs that might be called mass delusions?
Here, we explore one potential answer: prior exposure. Given
the ease with which fake news can be created and distributed on
social media platforms (Shane, 2017), combined with the increas-
ing tendency to consume news via social media (Gottfried &
Shearer, 2016), it is likely that people are being exposed to
fake-news stories with much greater frequency than in the past.
Might exposure per se help to explain people’s tendency to believe
outlandish political disinformation?
The Illusory Truth Effect
There is a long tradition of work in cognitive science demon-
strating that prior exposure to a statement (e.g., “The capybara is
the largest of the marsupials”) increases the likelihood that partic-
ipants will judge it to be accurate (Arkes, Boehm, & Xu, 1991;
Bacon, 1979; Begg, Anas, & Farinacci, 1992; Dechêne, Stahl,
Hansen, & Wänke, 2010; Fazio, Brashier, Payne, & Marsh, 2015;
Hasher, Goldstein, & Toppino, 1977; Polage, 2012; Schwartz,
1982). The dominant account of this “illusory truth effect” is that
repetition increases the ease with which statements are processed
(i.e., processing fluency), which in turn is used heuristically to
infer accuracy (Alter & Oppenheimer, 2009; Begg et al., 1992;
Reber, Winkielman, & Schwarz, 1998; Unkelbach, 2007; Wang,
Brashier, Wing, Marsh, & Cabeza, 2016; Whittlesea, 1993; but see
Unkelbach & Rom, 2017). Past studies have shown this phenom-
enon using a range of innocuous and plausible statements, such as
obscure trivia questions (Bacon, 1979) or assertions about con-
sumer products (Hawkins & Hoch, 1992; Johar & Roggeveen,
2007). Repetition can even increase the perceived accuracy of
plausible but false statements among participants who are subse-
quently able to identify the correct answer (Fazio et al., 2015).
Here we ask whether illusory truth effects extend to fake news.
Given that the fake-news stories circulating on social media are
quite different from the stimuli that have been employed in pre-
vious illusory truth experiments, in that they are implausible and
highly partisan, finding such an effect for fake news extends the
scope (and real-world relevance) of the illusory truth effect and, as
we argue, informs theoretical models of the effect. Indeed, there
are numerous reasons to think that simple prior exposure will not
increase the perceived accuracy of fake news.
Implausibility as a Potential Boundary Condition of
the Illusory Truth Effect
Fake-news stories are constructed with the goal of drawing
attention and are therefore often quite fantastical and implausible.
For example, Pennycook and Rand (2018a) gave participants a set
of politically partisan fake-news headlines collected from online
websites (e.g., “Trump to Ban All TV Shows That Promote Gay
Activity Starting With Empire as President”) and found that they
were judged as accurate only 17.8% of the time. To contrast this
figure with the existing illusory truth literature, Fazio et al. (2015)
found that false trivia items were judged to be true around 40% of
the time, even when restricting the analysis to participants who
were subsequently able to recognize the statement as false. Thus,
these previous statements (such as “chemosynthesis is the name of
the process by which plants make their food”), despite being
untrue, are much more plausible than are typical fake-news head-
lines. This may have consequences for whether repetition increases
perceived accuracy of fake news: When it is completely obvious
that a statement is false, it may be perceived as inaccurate regard-
less of how fluently it is processed. Although such an influence of
plausibility is not explicitly part of the fluency-conditional model
of illusory truth proposed by Fazio and colleagues (under which
knowledge influences judgment only when people do not rely on
fluency), the possibility of such an effect is acknowledged in their
discussion when they state that they “expect that participants
would draw on their knowledge, regardless of fluency, if state-
ments contained implausible errors” (p. 1000). Similarly, when
summarizing a meta-analysis of illusory truth effects, Dechêne et
al. (2010) argued that “statements have to be ambiguous, that is,
participants have to be uncertain about their truth status because
otherwise the statements’ truthfulness will be judged on the basis
of their knowledge” (p. 239). Thus, investigating the potential for
an illusory truth effect for fake news is not simply important
because it helps one to understand the spread of fake news but also
because it allows one to test heretofore untested (but common)
intuitions about the boundary conditions of the effect.
Motivated Reasoning as a Potential Boundary
Condition of the Illusory Truth Effect
Another striking feature of fake news that may counteract the
effect of repetition—and that is absent from prior studies of the
illusory truth effect—is the fact that fake-news stories are not only
political in nature but often extremely partisan. Although prior
work has shown the illusory truth effect on average for (relatively
innocuous) social–political opinion statements (Arkes, Hackett, &
Boehm, 1989), the role of individual differences in ideological
discordance has not been examined. Of importance, people have a
strong motivation to reject the veracity of stories that conflict with
their political ideology (Flynn, Nyhan, & Reifler, 2017; Kahan,
2013; Kahan et al., 2012), and the hyperpartisan nature of fake
news makes such conflicts likely for roughly half the population.
Furthermore, the fact that fake-news stories are typically of im-
mediate real-world relevance—and therefore, presumably, more
impactful on a person’s beliefs and actions than are the relatively
trivial pieces of information considered in previous work on the
illusory truth effect—should make people more inclined to think
carefully about the accuracy of such stories, rather than rely on
simple heuristics when making accuracy judgments. Thus, there is
reason to expect that people may be resistant to illusory truth
effects for partisan fake-news stories that they have politically
motivated reasons to reject.
The Current Work
Although there are reasons why, in theory, people should not
believe fake news (even if they have seen it before), it is clear that
many people do in fact find such stories credible. If repetition
increases perceptions of accuracy for even highly implausible and
partisan content, then increased exposure may (at least partly)
explain why fake-news stories have recently proliferated. Here we
assess this possibility with a set of highly powered and preregis-
tered experiments. In a first study, we explored the impact of
extreme implausibility on the illusory truth effect in the context of
politically neutral statements. We found that implausibility does
indeed present a boundary condition for illusory truth, such that
repetition does not increase perceived accuracy of statements that
essentially no one believes at baseline. In two more studies,
however, we found that— despite being implausible, partisan, and
provocative—fake-news headlines that are repeated are in fact
perceived as more accurate. Taken together, these results shed
light on how people come to have patently false beliefs, help to
inform efforts to reduce such beliefs, and extend understanding of
the basis of illusory truth effects.
Study 1: Extreme Implausibility Boundary Condition
Although existing models of the illusory truth effect do not
explicitly take plausibility into account, we hypothesized that prior
exposure should not increase perceptions of accuracy for state-
ments that are prima facie implausible—that is, statements for
which individuals hold extremely certain prior beliefs. In other
words, when strong internal reasons exist to reject the veracity of
a statement, it should not matter how fluently the statement is
processed.
To assess implausibility as a boundary condition for the illusory
truth effect, we created statements that participants would certainly
know to be false (i.e., extremely implausible statements such as
“The earth is a perfect square”) and manipulated prior exposure
using a standard illusory truth paradigm (via Fazio et al., 2015).
We also included unknown (but plausible) true and false trivia
statements from a set of general knowledge norms (Tauber, Dun-
losky, & Rawson, 2013). To balance out the set, we also gave
participants obvious known truths (see Table 1 for example items
from each set). Participants first rated the “interestingness” of half
of the items, and following an unrelated intervening questionnaire,
they were asked to assess the accuracy of all items. Thus, half of
the items in the assessment stage were previously presented (i.e.,
familiarized), and half were novel. If implausibility is a boundary
condition for the illusory truth effect, there should be no significant
effect of repetition on extremely implausible (known) falsehoods.
We expected to replicate the standard illusory truth effect for
unknown (but plausible) trivia statements. For extremely plausible
known true statements, there may be a ceiling effect on accuracy
judgments that precludes an effect of repetition (cf. results for
fluency on known truths; Unkelbach, 2007).
Method
All data are available online (https://osf.io/txf46/). We prereg-
istered our hypotheses, primary analyses, and sample size (https://
osf.io/txf46/). Although one-tailed tests are justified in the case of
preregistered directional hypotheses, here we followed conven-
tional practices and used two-tailed tests throughout (the use of
one-tailed vs. two-tailed tests does not qualitatively alter our
results). All participants were recruited from Amazon’s Mechan-
ical Turk (Horton, Rand, & Zeckhauser, 2011), which has been
shown to be a reliable resource for research on political ideology
(Coppock, 2016; Krupnikov & Levine, 2014; Mullinix, Leeper,
Druckman, & Freese, 2015). These studies were approved by the
Yale Human Subject Committee.
Participants. Our target sample was 500. In total, 566 par-
ticipants completed some portion of the study. We had complete
data for 515 participants (51 participants dropped out). Partic-
ipants were removed if they indicated responding randomly
(N = 50), searching online for any of the claims (N = 24; 1 of
whom did not respond), or going through the familiarization
stage without doing the task (N = 32). These exclusions were
preregistered. The final sample (N = 409; mean age = 35.8
years) included 171 male and 235 female participants (three did
not indicate their sex).
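For concreteness, the exclusion logic above can be sketched in a few lines of Python. This is an illustrative reconstruction rather than the authors' analysis code; the file and column names are hypothetical placeholders (the actual data are posted at https://osf.io/txf46/).

```python
import pandas as pd

df = pd.read_csv("study1_raw.csv")  # hypothetical file name

kept = df[
    (df["completed"] == 1)                  # drop the 51 incomplete responses
    & (df["responded_randomly"] == 0)       # self-reported random responding
    & (df["searched_online"] == 0)          # searched the web for the claims
    & (df["skipped_familiarization"] == 0)  # clicked through the rating stage
]
print(len(kept))  # 409 in the reported sample
```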
Materials. We created four known falsehoods (i.e., extremely
implausible statements) and four known truths (see the
online supplemental materials for a full list). We also used 10 true
and 10 false trivia questions framed as statements (via Tauber et
al., 2013). Trivia items were sampled from largely unknown facts
(see Table 1).
Procedure. We used a procedure parallel to that used by Fazio
et al. (2015). Participants were first asked to rate the “interesting-
ness” of the items on a 6-point scale ranging from 1 (very
uninteresting) to 6 (very interesting). Half of the items were presented
in this familiarization stage (counterbalanced). Participants then
completed a few demographic questions and the Positive and
Negative Affect Schedule (PANAS; Watson, Clark, & Tellegen,
1988). This filler stage consisted of 25 questions and took approx-
imately 2 min. Demographic questions consisted of age (“What is
your age?”), sex (“What is your sex?”), education (“What is the
highest level of school you have completed or the highest degree
you have received,” with eight typical education-level options),
English fluency (“Are you fluent in English?”), and zip code
(“Please enter the ZIP code for your primary residence. Reminder:
This survey is anonymous”). Finally, participants were asked to
assess the accuracy of the statements on a 6-point scale ranging
from 1 (definitely false) to 6 (definitely true). At the end of the
survey, participants were asked about random responding (“Did
you respond randomly at any point during the study?”) and use of
search engines (“Did you search the Internet [via Google or
otherwise] for any of the news headlines?”). Both were accompa-
nied by a yes–no response option and the following clarification:
“Note: Please be honest! You will get your HIT regardless of your
response.”
Table 1
Example Items From Study 1, Which Vary as a Function of
Whether They Are Known/Unknown and True/False
Type Item
Known
True There are more than fifty stars in the universe.
False (implausible) The earth is a perfect square.
Unknown
True Billy the Kid’s last name was Bonney.
False Angel Falls is located in Brazil.
Results
Following our preregistration, the key comparison was between
familiarized and novel implausible items. As predicted, repetition
did not increase perceptions of accuracy for implausible (known
false) statements (p = .462; see Table 2), whereas there was a
significant effect of repetition for both true and false trivia (un-
known) statements (ps < .001). There was no significant effect of
repetition on very plausible (known true) statements (p = .078).
These results were supported by a significant interaction between
knowledge (known, unknown) and exposure (familiarized, novel),
F(1, 408) = 82.17, MSE = .35, p < .001, ηp² = .17. Specifically,
there was no significant overall effect of repetition for known
items, F(1, 408) = .91, MSE = .30, p = .341, ηp² = .002, but a
highly significant overall effect for unknown items, F(1, 408) =
107.99, MSE = .47, p < .001, ηp² = .21.
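As an illustration of the analysis structure, the knowledge × exposure test above is a standard two-factor repeated-measures ANOVA. A minimal Python sketch, assuming a long-format table of per-participant cell means with hypothetical column names (again, not the authors' code), is:

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# One row per participant x knowledge x exposure cell, holding that
# cell's mean accuracy rating (1-6); names are assumptions.
long_df = pd.read_csv("study1_cell_means.csv")

res = AnovaRM(
    data=long_df,
    depvar="accuracy",
    subject="subject",
    within=["knowledge", "exposure"],  # known/unknown, familiarized/novel
).fit()
print(res)  # the knowledge x exposure interaction is the key test
```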
Discussion
Although we replicated prior results indicating a positive effect
of repetition on ambiguously plausible statements, regardless of
their correctness, we observed no significant effect of repetition on
accuracy judgments for statements that were patently false.
Study 2: Fake News
Study 1 established that extreme implausibility is, at the least, a
boundary condition for the illusory truth effect. Nonetheless, given
that fake-news stories are highly (but not entirely) implausible
(Pennycook & Rand, 2017), it is unclear whether their level of
plausibility would be sufficient to allow prior exposure to inflate
the perceived accuracy of fake news. It is also unclear what impact
the highly partisan nature of fake-news stimuli, and the motivated
reasoning to which this partisanship may lead (i.e., reasoning
biased toward conclusions that are concordant with previous opin-
ion; Kahan, 2013; Kunda, 1990; Mercier & Sperber, 2011; Red-
lawsk, 2002), would have on any potential illusory truth effect.
Motivated reasoning may cause people to see politically discordant
stories as disproportionally inaccurate, such that the illusory truth
effect may be diluted (or reversed) when headlines are discordant.
We assessed these questions in Study 2.
In addition to assessing the baseline impact of repetition on fake
news, we also investigated the impact of explicit warnings about a
lack of veracity on the illusory truth effect, given that warnings
have been shown to be effective tools for diminishing (although
not abolishing) the memorial effects of misinformation (Ecker,
Lewandowsky, & Tang, 2010). Furthermore, such warnings are a
key part of efforts to combat fake news—for example, Facebook’s
first major intervention against fake news consisted of flagging
stories shown to be false with a caution symbol and the text
Disputed by 3rd Party Fact-Checkers (Mosseri, 2016). To this end,
half of the participants were randomly assigned to a warning
condition in which this caution symbol and a disputed warning
were applied to the fake-news headlines.
Prior work has shown that participants rate repeated trivia
statements as more accurate than novel statements, even when they
were told that the source was inaccurate (Begg et al., 1992).
Specifically, Begg and colleagues (1992) attributed statements in
the familiarization stage to people with either male or female
names and then told participants that either all male or all female
individuals were lying. Participants were then presented with re-
peated and novel statements—all without sources—and they rated
previously presented statements as more accurate even if they had
been attributed to the lying gender in the familiarization stage. This
provides evidence that the illusory truth effect survives manipula-
tions that decrease belief in statements at first exposure. Nonethe-
less, Begg and colleagues employed a design that was different in
a variety of ways from our warning manipulation. Primarily, Begg
and colleagues provided information about veracity indirectly: For
any given statement presented during their familiarization phase,
participants had to complete the additional step, at encoding, of
mapping the source’s gender into the information provided about
which gender was unreliable in order to inform their initial judg-
ment about accuracy. The disputed warnings we tested here, con-
versely, did not involve this extra mapping step. Thus, by assessing
their impact on the illusory truth effect, we tested whether the
scope of Begg and colleagues’ findings extends to this more
explicit warning, while also generating practically useful insight
into the efficacy of this specific fake-news intervention.
Method
Participants. We had an original target sample of 500 partic-
ipants in our preregistration. We then completed a full replication
of the experiment with another 500 participants. Given the simi-
larity across the two samples, the data sets were combined for the
main analysis (the results are qualitatively similar when examining
the two experiments separately; see the online supplemental ma-
terials). The first wave was completed on January 16, and the
second wave was completed on February 3 (both in 2017). In total,
1,069 participants from Mechanical Turk completed some portion
of the survey. However, 64 did not finish the study and were
removed (33 from the no-warning condition and 31 from the
warning condition). A further 32 participants indicated responding
randomly at some point during the study and were removed. We
also removed participants who reported searching for the headlines
(N = 18) or skipping through the familiarization stage (N = 6).
These exclusions were preregistered for Studies 1 and 3 but were
accidentally omitted from the preregistration for Study 2. The
results are qualitatively identical with the full sample, but we
report analyses with participants removed to retain consistency
across our studies. The final sample (N = 949; mean age = 37.1)
included 449 male and 489 female participants (11 did not
respond).
Table 2
Comparison of Familiarized and Novel Items for Known or
Unknown True and False Statements in Study 1

Type                  Familiarized  Novel      Difference  t(df)       p
Known
  True                5.59 (.8)     5.66 (.6)  −.07        1.77 (408)  .078
  False (implausible) 1.13 (.6)     1.11 (.5)  .02         .74 (408)   .462
Unknown
  True                4.12 (.7)     3.79 (.8)  .33         6.65 (408)  <.001
  False               3.77 (.7)     3.39 (.7)  .38         9.44 (408)  <.001

Note. Data presented are means, with standard deviations in parentheses.
Materials and procedure. Participants engaged in a three-
stage experiment. In the familiarization stage, participants were
shown six news headlines that were factually accurate (real news)
and six others that were entirely untrue (fake news). The headlines
were presented in a format identical to that of Facebook posts (i.e.,
a headline with an associated photograph above it and a lede
sentence and byline below it; see Figure 1A; fake and real-news
headlines can be found in the Appendix—images for each item, as
presented to participants, can be found at the following link:
https://osf.io/txf46/). Participants were randomized into two con-
ditions: (a) The warning condition, where all of the fake-news
headlines (but none of the real-news headlines) in the familiariza-
tion stage were accompanied by a Disputed by 3rd Party Fact-
Checkers tag (see Figure 1B), or (b) the control condition, where
fake and real-news headlines were displayed without warnings. In
the familiarization stage, participants engaged with the news head-
lines in an ecologically valid way: They indicated whether they
would share each headline on social media. Specifically, partici-
pants were asked “Would you consider sharing this story online
(for example, through Facebook or Twitter)?” and were given
three response options (No, Maybe, Yes). For purposes of data
analysis, No was coded as 0 and Maybe and Yes were coded as 1.[1]
The participants then advanced to the distractor stage, in which
they completed a set of filler demographic questions. These in-
cluded age, sex, education, proficiency in English, political party
(Democratic, Republican, Independent, other), social and eco-
nomic conservatism (separate items),[2] and two questions about the
2016 election. For these election-related questions, participants
were first asked to indicate who they voted for (given the follow-
ing options: Hillary Clinton, Donald Trump, Other Candidate
[such as Jill Stein or Gary Johnson], I did not vote for reasons
outside my control, I did not vote but I could have, and I did not
vote out of protest). Participants were then asked “If you absolutely
had to choose between only Clinton and Trump, who would you
prefer to be the next President of the United States?” This binary
response was then used as our political ideology variable for the
concordance–discordance analysis. Specifically, for participants
who indicated a preference for Trump, pro-Republican stories
were scored as politically concordant and pro-Democrat stories
were scored as politically discordant; for participants who in-
dicated a preference for Clinton, pro-Democrat stories were
scored as politically concordant and pro-Republican stories
were scored as politically discordant. The filler stage took
approximately 1 min.
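A minimal sketch of this concordance scoring rule (our illustration; the string labels are invented):

```python
def concordance(preference: str, lean: str) -> str:
    """preference: forced binary choice, "clinton" or "trump";
    lean: headline's slant, "pro_democrat" or "pro_republican"."""
    matches = (preference == "trump" and lean == "pro_republican") or (
        preference == "clinton" and lean == "pro_democrat"
    )
    return "concordant" if matches else "discordant"

# e.g., a pro-Democrat headline is discordant for a Trump supporter
assert concordance("trump", "pro_democrat") == "discordant"
```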
Finally, participants entered the assessment stage, where they
were presented with 24 news headlines—the 12 headlines they saw
in the familiarization stage and 12 new headlines (six fake news,
six real news)—and rated each for familiarity and accuracy. Which
headlines were presented in the familiarization stage was counter-
balanced across participants, and headline order was randomized
for every participant in both Stage 1 and Stage 3. Moreover, the
items were balanced politically, with half being pro-Democrat and
half pro-Republican. The fake-news headlines were selected from
Snopes.com, a third-party website that fact-checks news stories.
The real headlines were contemporary stories from mainstream
news outlets. For each item, participants were first asked “Have
you seen or heard about this story before?” and were given three
[1] This was not preregistered for Study 2; however, it was for Study 3.
Hence, we used this analysis strategy to retain consistency across the two
fake-news studies. The results are qualitatively similar if the social media
question is scored continuously.
[2] Participants answered the prompt “On social issues I am” with Strongly
Liberal, Somewhat Liberal, Moderate, Somewhat Conservative, or
Strongly Conservative. The same was true for the economic conservatism
item except the prompt was “On economic issues I am.”
Figure 1. Sample fake-news headline without (Panel A) and with (Panel B) a Disputed by 3rd Party
Fact-Checkers warning, as presented in Studies 2 and 3. The original image (but not the headline) has been
replaced with a stock military image (under a CC0 license) for copyright purposes. Image is from https://www
.pexels.com/photo/flight-sky-sunset-men-54098/. See the online article for the color version of this figure.
response options (No, Unsure, Yes). For the purposes of data
analysis, No and Unsure were combined (this was preregistered in
Study 3 but not in Study 2). As in other work on perceptions of
news accuracy (Pennycook & Rand, 2017, 2018a, 2018b), partic-
ipants were then asked “To the best of your knowledge, how
accurate is the claim in the above headline?” and they rated
accuracy on the following 4-point scale: 1 (not at all accurate), 2
(not very accurate), 3 (somewhat accurate), and 4 (very accurate).
We focused on judgments about news headlines, as opposed to full
articles, because much of the public’s engagement with news on
social media involves reading only story headlines (Gabielkov,
Ramachandran, Chaintreau, & Legout, 2016).
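To make the counterbalancing concrete, the following sketch shows one way to implement the item assignment described above (an illustrative reconstruction under the stated constraints, not the authors' survey code):

```python
import random

def assign_items(headlines, seed):
    """headlines: 24 dicts with 'id', 'type' ('fake'/'real'), and 'lean'
    ('pro_democrat'/'pro_republican'). Returns the 12 familiarized items
    and the 24-item assessment list for one participant."""
    rng = random.Random(seed)
    familiarized, novel = [], []
    # split each type x lean cell in half so both stages stay balanced
    for item_type in ("fake", "real"):
        for lean in ("pro_democrat", "pro_republican"):
            cell = [h for h in headlines
                    if h["type"] == item_type and h["lean"] == lean]
            rng.shuffle(cell)
            familiarized += cell[: len(cell) // 2]
            novel += cell[len(cell) // 2:]
    rng.shuffle(familiarized)          # Stage 1 order, randomized
    assessment = familiarized + novel
    rng.shuffle(assessment)            # Stage 3 order, randomized
    return familiarized, assessment
```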
At the end of the survey, participants were asked about random
responding, use of search engines to check accuracy of the stimuli,
and whether they skipped through the familiarization stage (“At
the beginning of the survey [when you were asked whether you
would share the stories on social media], did you just skip through
without reading the headlines?”). All were accompanied by a
yes–no response option.
Our preregistration specified the comparison between familiar-
ized and novel fake news, separately in the warning and no-
warning conditions, as the key analyses. However, for complete-
ness, we report the full set of analyses that emerge from our
mixed-design analysis of variance (ANOVA). Our political con-
cordance analysis deviates somewhat from the analysis that was
preregistered, and our follow-up analysis that focuses on unfamil-
iar headlines was not preregistered. Our full preregistration is
available at the following link: https://osf.io/txf46/.
Results
As a manipulation check for our familiarization procedure,
we submitted familiarity ratings (recorded during the assess-
ment stage) to a 2 (type: fake, real) × 2 (exposure: familiarized,
novel) × 2 (warning: warning, no-warning) mixed-design
ANOVA. Critically, there was a main effect of exposure such
that familiarized headlines were rated as more familiar (M =
44.7%, SD = 35.6) than were novel headlines (M = 16.2%,
SD = 15.5), F(1, 947) = 578.76, MSE = .13, p < .001, ηp² =
.38, and a significant simple effect was present within every
combination of news type and warning condition (all ts > 14.0,
all ps < .001). This indicates that our social media sharing task
in the familiarization stage was sufficient to capture partici-
pants’ attention (further analysis of familiarity judgments can
be found in the online supplemental materials).
As a manipulation check for attentiveness to the Disputed by 3rd
Party Fact-Checkers warning, we submitted the willingness to
share news articles on social media measure (from the familiar-
ization stage) to a 2 (type: fake, real) × 2 (condition: warning,
no-warning) mixed-design ANOVA. This analysis revealed a sig-
nificant main effect of type, such that our participants were more
willing to share real stories (M = 41.6%, SD = 31.8) than fake
stories (M = 29.7%, SD = 29.8), F(1, 947) = 131.16, MSE = .05,
p < .001, ηp² = .12. More important, there was a significant main
effect of condition, F(1, 947) = 15.33, MSE = .13, p < .001, ηp² =
.016, which was qualified by an interaction between type and
condition, F(1, 947) = 19.65, MSE = .05, p < .001, ηp² = .020,
such that relative to the no-warning condition, participants in the
warning condition reported being less willing to share fake-news
headlines (which actually bore the warnings in the warning con-
dition; warning: M = 23.9%, SD = 28.3; no-warning: M = 35.2%,
SD = 30.2), t(947) = 5.93, p < .001, d = .39, whereas there was
no significant difference across conditions in sharing of real news
(which did not have warnings in either condition; warning: M =
40.6%, SD = 32.2; no-warning: M = 42.6%, SD = 31.5; t < 1).
Thus, participants clearly paid attention to the warnings.
We now turn to perceived accuracy, our main focus. Perceived
accuracy was entered into a 2 (type: fake, real) × 2 (exposure:
familiarized, novel) × 2 (warning: warning, no-warning) mixed-
design ANOVA (see Table 3 for means and standard deviations).
Demonstrating the presence of an illusory truth effect, there was a
significant main effect of exposure, F(1, 947) = 93.65, MSE =
.12, p < .001, ηp² = .09, such that headlines presented in the
familiarization stage (M = 2.24, SD = .42) were rated as more
accurate than were novel headlines (M = 2.13, SD = .39). There
was also a significant main effect of headline type, such that
real-news headlines (M = 2.67, SD = .48) were rated as much
more accurate than were fake-news headlines (M = 1.71, SD =
.46), F(1, 945) = 2,424.56, MSE = .36, p < .001, ηp² = .72.
However, there was no significant interaction between exposure
and type of news headline (F < 1). In particular, prior exposure
increased accuracy ratings even when considering only fake-news
headlines (see Figure 2; familiarized: M = 1.77, SD = .56; novel:
M = 1.65, SD = .48), t(948) = 7.60, p < .001, d = .25. For
example, nearly twice as many participants (a 92.1% increase, from
38 to 73 out of 949 total) judged the fake-news headlines presented
to them during the familiarization stage as accurate (mean accu-
racy rating above 2.5), compared to the stories presented to them
for the first time in the assessment stage. Although both of these
participant counts are only a small fraction of the total sample, the
fact that a single exposure to the fake stories doubled the number
of credulous participants suggests that repetition effects may have
a substantial impact in daily life, where people can see fake-news
headlines cycling many times through their social media news-
feeds.
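The credulous-participant counts above follow directly from the rating scale; an illustrative computation (assuming a per-trial data layout with hypothetical column names) is:

```python
import pandas as pd

# One row per participant x fake-news headline: columns "subject",
# "exposure" ("familiarized"/"novel"), "accuracy" (1-4). Assumed layout.
trials = pd.read_csv("study2_fake_trials.csv")
means = trials.groupby(["subject", "exposure"])["accuracy"].mean().unstack()

n_familiarized = (means["familiarized"] > 2.5).sum()  # 73 reported
n_novel = (means["novel"] > 2.5).sum()                # 38 reported
print(100 * (n_familiarized - n_novel) / n_novel)     # 92.1% increase
```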
What effect did the presence of warnings on fake news in the
familiarization stage have on later judgments of accuracy and,
potentially, the effect of repetition? The ANOVA just described
revealed a significant main effect of the warning manipulation,
F(1, 947) = 5.39, MSE = .53, p = .020, ηp² = .005, indicating that
the warning decreased perceptions of news accuracy. However,
this was qualified by an interaction between warning and type,
F(1, 947) = 5.83, MSE = .36, p = .016, ηp² = .006. Whereas the
presence of warnings on fake news in the assessment stage had no
effect on perceptions of real-news accuracy (warning: M = 2.67,
SD = .49; no-warning: M = 2.67, SD = .48; t < 1), participants
rated fake news as less accurate in the warning condition (warning:
M = 1.66, SD = .46; no-warning: M = 1.76, SD = .46), t(947) =
3.40, p < .001, d = .22. Furthermore, there was a marginally
significant interaction between exposure and warning, F(1, 947) =
3.32, MSE = .12, p = .069, ηp² = .004, such that the decrease in
overall perceptions of accuracy was significant for familiarized
items (warning: M = 2.21, SD = .41; no-warning: M = 2.28,
SD = .43), t(947) = 2.77, p = .006, d = .18, but not novel items
(warning: M = 2.12, SD = .38; no-warning: M = 2.15, SD = .39),
t(947) = 1.36, p = .175, d = .09. That is, the warning decreased
perceptions of accuracy for items that were presented in the
familiarization stage, both fake stories that were labeled with
warnings and the real stories presented without warnings (see
Footnote 3), but not for items that were not presented in the
familiarization stage.
There was no significant three-way interaction, however, be-
tween headline type, exposure, and warning condition (F < 1). As
a consequence, the repetition effect was evident for fake-news
headlines in the warning condition, t(460) = 4.89, p < .001, d =
.23, as well as in the no-warning condition, t(487) = 5.81, p <
.001, d = .26 (see Figure 2). That is, participants rated familiarized
fake-news headlines that they were explicitly warned about as
more accurate than novel fake-news headlines that they were
not warned about (despite the significant negative effect of warn-
ings on perceived accuracy of fake news reported earlier). In fact,
there was no significant interaction between the exposure and
warning manipulations when isolating the analysis to fake-news
headlines, F(1, 947) = 1.00, MSE = .12, p = .317, ηp² = .001.
Thus, the warning seems to have created a general sense of
distrust—thereby reducing perceived accuracy for both familiar-
ized and novel fake-news headlines—rather than making people
particularly distrust the stories that were labeled as disputed.
As a secondary analysis,[3] we also investigated whether the
effect of prior exposure is robust to political concordance (i.e.,
whether headlines were congruent or incongruent with one’s po-
litical stance). Mean perceptions of news accuracy for politically
concordant and discordant items as a function of type, exposure,
and warning condition can be found in Table 3. Perceived accuracy
was entered into a 2 (political valence: concordant, discordant) ×
2 (type: fake, real) × 2 (exposure: familiarized, novel) × 2
(warning: warning, no-warning) mixed-design ANOVA. First, as a
manipulation check, politically concordant items were rated as far
more accurate than were politically discordant items overall, F(1,
945) = 573.08, MSE = .34, p < .001, ηp² = .38 (see Table 3).
Nonetheless, we observed no significant interaction between the
repetition manipulation and political valence, F(1, 945) = 2.24,
MSE = .15, p = .135, ηp² = .002. The illusory truth effect was
evident for fake-news headlines that were politically discordant,
t(946) = 4.70, p < .001, d = .15, as well as concordant, t(946) =
7.19, p < .001, d = .23. Political concordance interacted signifi-
cantly with type of news story, F(1, 945) = 138.91, MSE = .23,
p < .001, ηp² = .13, such that the difference between perceptions
of real and fake news (i.e., discernment) was greater for politically
concordant headlines (real news: M = 2.90, SD = .59; fake news:
M = 1.80, SD = .56) than for politically discordant headlines
(real news: M = 2.44, SD = .53; fake news: M = 1.61, SD = .48),
t(946) = 11.8, p < .001, d = .38 (see Pennycook & Rand, 2018a,
for a similar result). All other interactions with political concor-
dance were not significant (all Fs < 1.5, ps > .225).
The illusory truth effect also persisted when analyzing only
news headlines that the participants marked as unfamiliar (i.e., in
the same mixed-design ANOVA as mentioned earlier but analyz-
ing only stories the participants were not consciously aware of
having seen in the familiarization stage or at some point prior to
the experiment; familiarized: M = 1.90, SD = .53; novel: M =
1.83, SD = .49), F(1, 541)[4] = 11.82, MSE = .17, p < .001, ηp² =
.02 (see the online supplemental materials for details and further
statistical analysis).
Discussion
The results of Study 2 indicate that a single prior exposure is
sufficient to increase perceived accuracy for both fake and real
news. This occurs even (a) when fake news is labeled as Disputed
by 3rd Party Fact-Checkers during the familiarization stage (i.e.,
during encoding at first exposure), (b) among fake (and real) news
headlines that are inconsistent with one’s political ideology, and
(c) when isolating the analysis to news headlines that participants
were not consciously aware of having seen in the familiarization
stage.
Study 3: Fake News, 1-Week Interval
We next sought to assess the robustness of our finding that
repetition increases perceptions of fake-news accuracy by making
two important changes to the design of Study 2. First, we assessed
the persistence of the repetition effect by inviting participants back
after a weeklong delay (following previous research that has
shown illusory truth effects to persist over substantial periods of
time; e.g., Hasher et al., 1977; Schwartz, 1982). Second, we
restricted our analyses to only those items that were unfamiliar to
participants when entering the study, which allows for a cleaner
novel baseline.
[3] These analyses were not preregistered, although we did preregister a
parallel analysis where pro-Democrat and pro-Republican items would be
analyzed separately while comparing liberals and conservatives. The pres-
ent analysis simply combines the data into a more straightforward analysis
and uses the binary Clinton–Trump choice to distinguish liberals and
conservatives. The effect of prior exposure was significant for fake news
when political concordance was determined based on Democrat–Republican
party affiliation: politically concordant, t(609) = 4.8, p < .001; politically
discordant, t(609) = 2.9, p = .004.
[4] Degrees of freedom are lower here because this analysis includes only
individuals who were unfamiliar with at least one item in each cell of the
design (familiarized–novel and fake news–real news).
Table 3
Comparison of Familiarized and Novel Items for Politically
Concordant and Discordant Items in the Warning and
No-Warning Conditions

Type and warning status   Familiarized  Novel      t(df)       p
Politically concordant
  Fake news
    No warning            1.93 (.7)     1.78 (.6)  5.46 (486)  <.001
    Warning               1.81 (.7)     1.68 (.6)  4.69 (459)  <.001
  Real news
    No warning            2.98 (.6)     2.83 (.7)  5.45 (486)  <.001
    Warning               2.92 (.7)     2.86 (.7)  2.17 (459)  .031
Politically discordant
  Fake news
    No warning            1.72 (.6)     1.60 (.5)  3.91 (486)  <.001
    Warning               1.60 (.6)     1.53 (.5)  2.66 (459)  .008
  Real news
    No warning            2.50 (.6)     2.39 (.6)  3.85 (486)  <.001
    Warning               2.49 (.6)     2.40 (.6)  3.03 (459)  .003

Note. Data presented are means, with standard deviations in parentheses.
Politically concordant items consisted of pro-Democrat items for Clinton
supporters and pro-Republican items for Trump supporters (and vice versa
for politically discordant items).
Method
Participants. Our target sample was 1,000 participants from
Mechanical Turk. This study was completed on February 1 and 2,
2017. Participants who completed Study 2 were not permitted to
complete Study 3. In total, 1,032 participants completed the study,
40 of which dropped out or had missing data (14 from the no-
warning condition, 26 from the warning condition). Participants
who reported responding randomly (N29), skipping over the
familiarization phase (N1), or searching online for the headlines
(N22) were removed. These exclusions were preregistered. The
final sample (N940; mean age 36.8) included 436 male and
499 female participants (five did not respond).
Materials and procedure. The design was identical to that of
Study 2 (including the warning and no-warning conditions), with
a few exceptions. First, the length of the distractor stage was
increased by adding 20 unrelated questionnaire items to the de-
mographics questions (namely, the PANAS, as in Study 1). This
filler stage took approximately 2 min to complete. Furthermore,
participants were invited to return for a follow-up session 1 week
later in which they were presented with the same headlines they
had seen in the assessment stage plus a set of novel headlines not
included in Session 1 (N = 566 participants responded to the
follow-up invitation). To allow full counterbalancing, we pre-
sented participants with eight headlines in the familiarization
phase, 16 headlines in the accuracy judgment phase (of which
eight were those shown in the familiarization phase), and 24
headlines in the follow-up session a week later (of which 16 were
those shown in the assessment phase of Session 1), again main-
taining an equal number of real–fake and pro-Democrat–pro-
Republican headlines within each block. The design of Study 3
therefore allowed us to assess the temporal stability of the repeti-
tion effect within both Session 1 (over the span of a distractor task)
and Session 2 (over the span of a week).
Second, during the familiarization stage participants were asked
to indicate whether each headline was familiar, instead of whether
they would share the story on social media (the social media
question was moved to the assessment stage). This modification
allowed us to restrict our analyses to only those items that were
unfamiliar to participants when entering the study (i.e., they said
“no” when asked about familiarity),[5] allowing for a cleaner as-
sessment of the causal effect of repetition (903 of the 940 partic-
ipants in Session 1 were previously unfamiliar with at least one
story of each type and were thus included in the main text analysis,
as were 527 out of the 566 participants in Session 2; see the online
supplemental materials for analyses with all items and all partic-
ipants). Fake- and real-news headlines as presented to participants
can be found at the following link: https://osf.io/txf46/.
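A sketch of this restriction (our illustration, with hypothetical column names): keep only trials a participant marked as previously unfamiliar, then keep participants who still contribute to all four cells of the design.

```python
import pandas as pd

trials = pd.read_csv("study3_trials.csv")  # hypothetical file name
unfamiliar = trials[trials["pre_familiar"] == "no"]

# participants must have at least one unfamiliar item in each of the
# four cells: (familiarized/novel) x (fake/real)
n_cells = unfamiliar.groupby("subject").apply(
    lambda g: g[["exposure", "type"]].drop_duplicates().shape[0]
)
keep = n_cells[n_cells == 4].index
analysis = unfamiliar[unfamiliar["subject"].isin(keep)]
print(analysis["subject"].nunique())  # 903 of 940 in Session 1
```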
As in Study 2, our preregistration specified the compari-
son between familiarized and novel fake news in both the warning
and no-warning conditions (and for both sessions) as the key
analyses, although in this case we preregistered the full 2 (type:
fake, real) × 2 (exposure: familiarized, novel) × 2 (warning:
warning, no-warning) mixed-design ANOVA. We also preregis-
tered the political concordance analysis. Finally, we preregistered
the removal of cases where participants were familiar with the
news headlines as a secondary analysis, but we focus on it as a
primary analysis here because this is the novel feature relative to
the case in Study 2 (primary analyses including all participants are
discussed in footnote 8). Our preregistration is available at the
following link: https://osf.io/txf46/.
[5] Whereas participants indicated their familiarity with familiarized items
prior to completing the accuracy judgments, they indicated their familiarity
with novel items after completing the accuracy judgments. Thus, it is
possible that seeing the news headlines in the accuracy-judgment phase
would increase perceived familiarity. There was no evidence for this,
however, because mean familiarity judgment (scored continuously) did not
significantly differ based on whether the judgment was made before or
after the accuracy judgment phase, t(939) = .68, SE = .01, p = .494, d =
.02. Participants were unfamiliar with 81.2% of the fake-news headlines
and 49.2% of the real-news headlines.
Figure 2. Exposing participants to fake-news headlines in Study 2 increased accuracy ratings, even when the
stories were tagged with a warning indicating that they had been disputed by third-party fact-checkers. Panel A:
Mean accuracy ratings for fake-news headlines as a function of repetition (familiarized stories were shown
previously during the familiarization stage; novel stories were shown for the first time during the assessment
stage) and presence or absence of a warning that fake-news headlines had been disputed. Error bars indicate 95%
confidence intervals. Panel B: Distribution of participant-average accuracy ratings for the fake-news headlines,
comparing the six familiarized stories shown during the familiarization stage (red; lower “mountain” for print
version readers) with the six novel stories shown for the first time in the assessment stage (blue; the higher
“mountain”). We collapsed across warning and no-warning conditions because the repetition effect did not differ
significantly by condition. See the online article for the color version of this figure.
Results
Perceived accuracy was entered into a 2 (type: fake, real) × 2
(exposure: familiarized, novel) × 2 (warning: warning, no-
warning) mixed-design ANOVA. Replicating the illusory truth
effect from Study 2, there was a clear causal effect of prior
exposure on accuracy in Session 1 of Study 3 despite the longer
distractor stage: Headlines presented in the familiarization stage
(M = 2.01, SD = .54) were rated as more accurate than were novel
headlines (M = 1.92, SD = .49), F(1, 721) = 22.52, MSE = .23,
p < .001, ηp² = .03. Again replicating the results of Study 2, there
was a significant main effect of type, such that real stories (M =
2.31, SD = .63) were rated as much more accurate than were fake
stories (M = 1.63, SD = .52), F(1, 721) = 934.57, MSE = .36,
p < .001, ηp² = .57, but there was no significant interaction
between exposure and type of news headline, F(1, 721) = 2.65,
MSE = .20, p = .104, ηp² = .004. Accordingly, prior exposure
increased perceived accuracy even when considering only fake-
news headlines (see Figure 3A and 3B), t(902)[6] = 5.99, p < .001,
d = .20 (an 89.5% increase in the number of participants judging
familiarized fake-news headlines as accurate compared to novel
fake-news headlines; i.e., from 38 to 72 participants out of 903).
Unlike the case in Study 2, there was no main effect of the
warning manipulation on overall perceptions of accuracy (i.e.,
across the aggregate of fake and real news; F < 1). However, there
was a marginally significant interaction between type of news
story and warning condition, F(1, 721) = 2.95, MSE = .36, p =
.086, ηp² = .004. Regardless, the fake-news warnings in the famil-
iarization stage had no significant overall effect on perceptions of
fake-news accuracy in the assessment stage (warning: M = 1.61,
SD = .50; no-warning: M = 1.66, SD = .54), t(932)[7] = 1.54, p =
.123, d = .10. There was also no significant effect of the warning
on perceptions of real-news accuracy (warning: M = 2.32, SD =
.63; no-warning: M = 2.30, SD = .63; t < 1), no significant
interaction between the repetition and warning manipulations, F(1,
721) = 1.89, MSE = .23, p = .169, ηp² = .003, and no significant
three-way interaction between warning, exposure, and type of
news story (F < 1).[8] Nonetheless, it should be noted that famil-
iarized fake-news headlines (i.e., the fake-news headlines that
were warned about in the familiarization stage) were rated as less
accurate (M = 1.64, SD = .59) than were the same headlines in the
control (no-warning) condition (M = 1.73, SD = .63), t(925) =
2.14, p = .032, d = .14, suggesting that the warning did have some
effect on accuracy judgments. However, this effect was smaller
than in Study 2 and did not extend to nonwarned (and not famil-
iarized) fake news. This is perhaps due to the smaller number of
items in the familiarization stage of Study 3.
Following our preregistration, we also analyzed the effect of
exposure for fake-news headlines separately in the warning and
no-warning conditions. The repetition effect was evident for
fake-news headlines in both the warning condition (familiar-
ized: M = 1.63, SD = .58; novel: M = 1.55, SD = .52),
t(447) = 3.07, p = .002, d = .14, and the no-warning condition
(familiarized: M = 1.71, SD = .61; novel: M = 1.58, SD =
.54), t(454) = 5.41, p < .001, d = .25. Furthermore, familiar-
ized fake-news headlines were judged as more accurate than
were novel ones for both politically discordant items (familiar-
ized: M = 1.60, SD = .67; novel: M = 1.51, SD = .63),
t(858) = 3.41, p < .001, d = .12, and politically concordant items
(familiarized: M = 1.72, SD = .77; novel: M = 1.59, SD =
.67), t(801) = 4.93, p < .001, d = .18; note that an ANOVA
including concordance indicated that there was no significant
interaction between repetition and political concordance for
fake news, F(1, 769) = 1.46, MSE = .32, p = .228, ηp² = .002.[9]
Following up 1 week later, we continued to find a clear causal
effect of repetition on accuracy ratings: Perceived accuracy of a
story increased linearly with the number of times the participants
had been exposed to that story. Using linear regression with robust
standard errors clustered on participant,[10] we found a significant
positive relationship between number of exposures and accuracy
overall (familiarized twice: M = 2.00, SD = .53; familiarized
once: M = 1.94, SD = .53; novel: M = 1.90, SD = .51; b = .046),
t(537) = 3.68, p < .001, and when considering only fake-news
headlines (see Figure 3C; familiarized twice: M = 1.70, SD = .58;
familiarized once: M = 1.66, SD = .58; novel: M = 1.60, SD =
.53; b = .048), t(526) = 3.66, p < .001 (a 64% increase in the
number of participants judging fake-news headlines as accurate
among stories seen twice compared to novel fake-news headlines;
i.e., from 25 to 41 participants out of 527). Once again, this relation-
ship was evident for fake news in both the warning condition
(familiarized twice: M = 1.67, SD = .59; familiarized once: M =
1.63, SD = .56; novel: M = 1.60, SD = .52; b = .036), t(276) =
1.97, p = .050, and the no-warning condition (familiarized twice:
M = 1.73, SD = .57; familiarized once: M = 1.70, SD = .59;
novel: M = 1.61, SD = .53; b = .061), t(249) = 3.27, p = .001;
note that there was no significant interaction between the repetition and warning manipulations (b = −.025), t(526) = .96, p = .337 (see Figure 3C). This relationship was also evident for fake-news headlines that were politically discordant (familiarized twice: M = 1.62, SD = .72; familiarized once: M = 1.61, SD = .68; novel: M = 1.54, SD = .62; b = .041), t(525) = 2.28, p = .023, as well as politically concordant (familiarized twice: M = 1.78, SD = .77; familiarized once: M = 1.71, SD = .75; novel: M = 1.66, SD = .70; b = .061), t(523) = 3.24, p = .001.

⁶ Only unfamiliar headlines are included, and therefore missing data account for missing participants in some cells of the design. Degrees of freedom vary throughout because the maximum number of participants is included in each analysis.

⁷ Degrees of freedom change here because this analysis includes the maximum number of individuals who were unfamiliar with at least one fake-news item.

⁸ In our (also) preregistered analysis that includes both previously familiar and unfamiliar items, there is a main effect of repetition, F(1, 938) = 18.98, MSE = .16, p < .001, η_p² = .02, but (unlike in Study 2) a significant interaction between exposure and warning condition, F(1, 938) = 7.81, MSE = .16, p = .005, η_p² = .01. There was a significant repetition effect for fake news in the no-warning condition, t(475) = 5.31, SE = .03, p < .001, d = .24, but no effect in the warning condition, t(463) = 1.30, SE = .03, p = .193, d = .06. It is possible that prior knowledge of the items facilitated explicit recall of the warning, which may have mitigated the illusory truth effect (see the online supplemental materials for means and further analyses).

⁹ We focused on fake-news headlines here because the political concordance manipulation cuts the number of items in half. Including real news in this analysis decreases the number of participants markedly because the analysis of variance (ANOVA) requires each participant to contribute at least one observation to each cell of the design. Nonetheless, the full ANOVA reveals a significant main effect of repetition, F(1, 312) = 8.94, p = .003, η_p² = .03, and no interaction with political concordance (F < 1). The effect of prior exposure was also significant for fake news when political concordance was determined based on Democrat–Republican party affiliation: politically concordant, t(494) = 4.1, p < .001; politically discordant, t(529) = 2.3, p = .020.

¹⁰ This specific analysis was not preregistered. Rather, the preregistration called for a comparison of the full 16 items from Session 1 with the eight novel items in Session 2. This, too, revealed a significant main effect of repetition (using the same analysis of variance as in the Session 1 analysis), F(1, 453) = 12.91, p < .001, η_p² = .03. However, such an analysis does not illuminate the increasing effect of exposure, hence our deviation from the preregistration (see the online supplemental materials for further details and analyses).
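The regression with participant-clustered standard errors described above takes the following general form. This Python sketch assumes hypothetical long-format data and hypothetical file and column names; it illustrates the technique rather than reproducing our analysis script:

    # One row per participant per headline: 'n_exposures' is 0, 1, or 2;
    # 'rating' is the perceived-accuracy response. Standard errors are
    # clustered by participant.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("followup_long.csv")  # hypothetical file name

    fit = smf.ols("rating ~ n_exposures", data=df).fit(
        cov_type="cluster", cov_kwds={"groups": df["pid"]}
    )
    print(fit.summary())  # the slope on n_exposures corresponds to the reported b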
Discussion
The results of Study 3 further demonstrated that prior exposure
increases perceived accuracy of fake news. This occurred regard-
less of political discordance and among previously unfamiliar
headlines that were explicitly warned about during familiarization.
Crucially, the effect of repetition on perceived accuracy persisted
after a week and increased with an additional repetition. This suggests that credulity toward fake news compounds with additional exposures and is maintained over time.
General Discussion
Although repetition did not impact accuracy judgments of to-
tally implausible statements, across two preregistered experiments
with a total of more than 1,800 participants we found consistent
evidence that repetition did increase the perceived accuracy of
fake-news headlines. Indeed, a single prior exposure to fake-news
headlines was sufficient to measurably increase subsequent per-
ceptions of their accuracy. Although this effect was relatively
small (d = .20–.21), it increased with a second exposure, thereby suggesting a compounding effect of repetition across time. Explicitly warning individuals that the fake-news headlines had been disputed by third-party fact-checkers (which was true in every case) did not abolish or even significantly diminish this effect. Furthermore, the illusory truth effect was evident even among news headlines that were inconsistent with the participants’ stated political ideology.

Figure 3. The illusory truth effect for fake news is persistent, lasting over a longer filler stage in Study 3 and continuing to be observed in a follow-up session 1 week later. Panel A: Mean accuracy ratings for fake-news headlines in Session 1 of Study 3 as a function of repetition and presence or absence of a warning that fake-news headlines had been disputed. Error bars indicate 95% confidence intervals. Panel B: Distribution of participant-average accuracy ratings for the fake-news headlines in Study 3, comparing the four headlines shown during the familiarization stage (red; the lower “mountain” for print version readers) with the four novel headlines shown for the first time in the assessment stage (blue; the higher “mountain”). We collapsed across warning and no-warning conditions because the repetition effect did not differ significantly by condition. Panel C: Mean accuracy ratings for fake-news headlines in the follow-up session conducted 1 week later, as a function of number of exposures to the story (two times for headlines previously presented in the familiarization and assessment stage of Session 1; one time for headlines previously presented in only the assessment stage of Session 1; and no times for headlines introduced for the first time in the follow-up session) and presence or absence of warning tag. Error bars indicate 95% confidence intervals based on robust standard errors clustered by participant, and the trend line is shown in dotted black. See the online article for the color version of this figure.
Mechanisms of Illusory Truth
First, it is important to note that repetition increased perceived accuracy even for items that participants were not consciously aware of having been exposed to. This supports the broad consensus that
repetition influences accuracy through a low-level fluency heuris-
tic (Alter & Oppenheimer, 2009; Begg et al., 1992; Reber et al.,
1998; Unkelbach, 2007; Whittlesea, 1993). These findings indicate
that our repetition effect was likely driven, at least in part, by
automatic (as opposed to strategic) memory retrieval (Diana,
Yonelinas, & Ranganath, 2007; Yonelinas, 2002; Yonelinas &
Jacoby, 2012). More broadly, these effects correspond with prior
work demonstrating the power of fluency to influence a variety of
judgments (Schwarz, Sanna, Skurnik, & Yoon, 2007); for exam-
ple, subliminal exposure to a variety of stimuli (e.g., Chinese
characters) increases associated positive feelings (i.e., the mere
exposure effect; see Zajonc, 1968, 2001). Our evidence that the
illusory truth effect extends to implausible and even politically
inconsistent fake-news stories expands the scope of these effects.
That perceptions of fake-news accuracy can be manipulated so easily, even though the headlines themselves are highly implausible (only 15%–22% of the headlines were judged to be accurate), has substantial practical implications (discussed later). However, what implications do
these results have for the understanding of the mechanisms that
underlie the illusory truth effect (and, potentially, a broader array
of fluency effects observed in the literature)?
For decades, it has been assumed that repetition increases perceived accuracy only for statements that are ambiguous (e.g., Dechêne et al., 2010) because, otherwise, individuals will simply use prior knowledge to determine truth. However, recent evidence has indicated that repetition can increase the perceived accuracy of even plausible but false statements (e.g., “chemosynthesis is the name of the process by which plants make their food”) among participants who were subsequently able to identify the correct answer (Fazio et al., 2015). It may be, however, that the
illusory truth effect is robust to the presence of conflicting prior
knowledge only when statements are plausible enough that
individuals fail to detect the conflict (for a perspective on
conflict detection during reasoning, see Pennycook, Fugelsang,
& Koehler, 2015). Indeed, as noted earlier, Fazio and col-
leagues (2015) speculated that “participants would draw on
their knowledge, regardless of fluency, if statements contained
implausible errors” (p. 1000). On the contrary, our findings
indicate that implausibility is only a boundary condition of the
illusory truth effect in the extreme: It is possible to use repe-
tition to increase the perceived accuracy even for entirely
fabricated and, frankly, outlandish fake-news stories that, given
some reflection (Pennycook & Rand, 2018a, 2018b), people
probably know are untrue. This observation substantially ex-
pands the purview of the illusory truth effect and suggests that
external reasons for disbelief (such as direct prior knowledge
and implausibility) are no safeguard against the fluency heuris-
tic.
Motivated Reasoning
Our results also have implications for a broad debate about
the scope of motivated reasoning, which has been taken to be a
fundamental aspect of how individuals interact with political
misinformation and disinformation (Swire, Berinsky, Le-
wandowsky, & Ecker, 2017) and has been used to explain the
spread of fake news (Allcott & Gentzkow, 2017; Beck, 2017;
Calvert, 2017; Kahan, 2017; Singal, 2017). Although Trump
supporters were indeed more skeptical about fake-news head-
lines that were anti-Trump relative to Clinton supporters (and
vice versa), our results show that repetition increases percep-
tions of accuracy even in such politically discordant cases.
Take, for example, the item “BLM Thug Protests President
Trump With Selfie...Accidentally Shoots Himself in the
Face,” which is politically discordant for Clinton supporters and
politically concordant for Trump supporters. Whereas on first
exposure Clinton supporters were less likely (11.7%) than
Trump supporters (18.5%) to rate this headline as accurate,
suggesting the potential for motivated reasoning, a single prior
exposure to this headline increased accuracy judgments in both
cases (to 17.9% and 35.5%, for Clinton and Trump supporters,
respectively). Thus, repetition increased the perceived accuracy of fake-news headlines even when there was a strong political motivation to reject them. This observation complements the
results of Pennycook and Rand (2018a), who found—in con-
trast to common motivated reasoning accounts (Kahan, 2017)—
that analytic thinking leads to disbelief in fake news regardless
of political concordance. Taken together, this suggests that
motivated reasoning may play less of a role in the belief in fake
news than is often argued.
These results also bear on a recent debate about whether
corrections might actually make false information more famil-
iar, thereby increasing the incidence of subsequent false beliefs
(i.e., the familiarity backfire effect; Berinsky, 2017; Nyhan &
Reifler, 2010; Schwarz et al., 2007; Skurnik, Yoon, Park, &
Schwarz, 2005). In contrast to the backfire account, the latest
research in this domain has indicated that explicit warnings or
corrections of false statements actually have a small positive
(and certainly not negative) impact on subsequent perceptions
of accuracy (Ecker, Hogan, & Lewandowsky, 2017; Le-
wandowsky, Ecker, Seifert, Schwarz, & Cook, 2012; Penny-
cook & Rand, 2018b; Swire, Ecker, & Lewandowsky, 2017). In
our data, the positive effect of a single prior exposure (d = .20 in Study 2) was effectively equivalent to the negative effect of the disputed warning (d = .17 in Study 2). Thus, although any
benefit arising from the disputed tag is immediately wiped out
by the prior exposure effect, we also did not find any evidence
of a meaningful backfire. Our findings therefore support recent
skepticism about the robustness and importance of the famil-
iarity backfire effect.
Societal Implications
Our findings have important implications for the functioning
of democracy, which relies on an informed electorate. Specif-
ically, our results shed some light on what can be done to
combat belief in fake news. We employed a warning that was
developed by Facebook to curb the influence of fake news on its
social media platform (Disputed by 3rd Party Fact-Checkers).
We found that this warning did not disrupt the illusory truth
effect, an observation that resonates with findings of previous
work demonstrating that, for example, explicitly labeling con-
sumer claims as false (Skurnik et al., 2005) or retracting pieces
of misinformation in news articles (Berinsky, 2017; Ecker et al.,
2010; Lewandowsky et al., 2012; Nyhan & Reifler, 2010) are
not necessarily effective strategies for decreasing long-term
misperceptions (but see Swire, Ecker, & Lewandowsky, 2017).
Nonetheless, it is important to note that the warning did suc-
cessfully decrease subsequent overall perceptions of the accu-
racy of fake-news headlines; the warning’s effect was just not
specific to the particular fake-news headlines that the warning
was attached to (and so the illusory truth effect survived the
warning). Thus, the warning appears to have increased general
skepticism, which increased the overall sensitivity to fake news
(i.e., the warning decreased perceptions of fake-news accuracy
without affecting judgments for real news). The warning also
successfully decreased people’s willingness to share fake-news
headlines on social media. However, neither of these warning effect sizes was particularly large; for example, as described earlier, the negative impact of the warning on accuracy was entirely canceled out by the positive impact of repetition. That
result, coupled with the persistence of the illusory truth effect
we observed and the possibility of an “implied truth” effect
whereby tagging some fake headlines may increase the per-
ceived accuracy of untagged fake headlines (Pennycook &
Rand, 2017), suggests that larger solutions are needed to pre-
vent people from ever seeing fake news in the first place, rather
than showing qualifiers aimed at making people discount the
fake news that they do see.
Finally, our findings have implications beyond just fake news
on social media. They suggest that politicians who continually
repeat false statements will be successful, at least to some
extent, in convincing people that those statements are in fact
true. Indeed, the word delusion derives from a Latin term
conveying the notion of mocking, defrauding, and deceiving.
That the illusory truth effect is evident for highly salient and
impactful information suggests that repetition may also play an
important role in domains beyond politics, such as the forma-
tion of religious and paranormal beliefs where claims are dif-
ficult to either validate or reject empirically. When the truth is
hard to come by, fluency is an attractive stand-in.
Context
In this research program, we used cognitive psychological the-
ory and techniques to illuminate issues that have clear conse-
quences for everyday life, with the hope of generating insights that
are both practically and theoretically relevant. The topic of fake
news—and disinformation more broadly—is of great relevance to
current public discourse and policy making and fits squarely in the
domain of cognitive psychology. Plainly, this is a topic about which cognitive psychologists should be able to say something specific and illuminating.
References
Allcott, H., & Gentzkow, M. (2017). Social media and fake news in the 2016 election (NBER Working Paper No. 23089). Retrieved from http://www.nber.org/papers/w23089

Alter, A. L., & Oppenheimer, D. M. (2009). Uniting the tribes of fluency to form a metacognitive nation. Personality and Social Psychology Review, 13, 219–235. http://dx.doi.org/10.1177/1088868309341564

Arkes, H. R., Boehm, L. E., & Xu, G. (1991). Determinants of judged validity. Journal of Experimental Social Psychology, 27, 576–605. http://dx.doi.org/10.1016/0022-1031(91)90026-3

Arkes, H. R., Hackett, C., & Boehm, L. (1989). The generality of the relation between familiarity and judged validity. Journal of Behavioral Decision Making, 2, 81–94. http://dx.doi.org/10.1002/bdm.3960020203

Bacon, F. T. (1979). Credibility of repeated statements: Memory for trivia. Journal of Experimental Psychology: Human Learning and Memory, 5, 241–252. http://dx.doi.org/10.1037/0278-7393.5.3.241

Beck, J. (2017, March 13). This article won’t change your mind: The facts on why facts alone can’t fight false beliefs. The Atlantic. Retrieved from https://www.theatlantic.com/science/archive/2017/03/this-article-wont-change-your-mind/519093/

Begg, I. M., Anas, A., & Farinacci, S. (1992). Dissociation of processes in belief: Source recollection, statement familiarity, and the illusion of truth. Journal of Experimental Psychology: General, 121, 446–458. http://dx.doi.org/10.1037/0096-3445.121.4.446

Berinsky, A. J. (2017). Rumors and health care reform: Experiments in political misinformation. British Journal of Political Science, 47, 241–262. http://dx.doi.org/10.1017/S0007123415000186

Calvert, D. (2017, March 6). The psychology behind fake news: Cognitive biases help explain our polarized media climate. Retrieved from https://insight.kellogg.northwestern.edu/article/the-psychology-behind-fake-news

Coppock, A. (2016). Generalizing from survey experiments conducted on Mechanical Turk: A replication approach. Retrieved from https://alexandercoppock.files.wordpress.com/2016/02/coppock_generalizability2.pdf

Corlett, P. R., Krystal, J. H., Taylor, J. R., & Fletcher, P. C. (2009). Why do delusions persist? Frontiers in Human Neuroscience, 3, 12. http://dx.doi.org/10.3389/neuro.09.012.2009

Dechêne, A., Stahl, C., Hansen, J., & Wänke, M. (2010). The truth about the truth: A meta-analytic review of the truth effect. Personality and Social Psychology Review, 14, 238–257. http://dx.doi.org/10.1177/1088868309352251

Diana, R., Yonelinas, A., & Ranganath, C. (2007). Imaging recollection and familiarity in the medial temporal lobe: A three-component model. Trends in Cognitive Sciences, 11, 379–386. http://dx.doi.org/10.1016/j.tics.2007.08.001

Ecker, U. K. H., Hogan, J., & Lewandowsky, S. (2017). Reminders and repetition of misinformation: Helping or hindering its retraction? Journal of Applied Research in Memory & Cognition, 6, 185–192. http://dx.doi.org/10.1016/j.jarmac.2017.01.014

Ecker, U. K. H., Lewandowsky, S., & Tang, D. T. W. (2010). Explicit warnings reduce but do not eliminate the continued influence of misinformation. Memory & Cognition, 38, 1087–1100. http://dx.doi.org/10.3758/MC.38.8.1087

Fazio, L. K., Brashier, N. M., Payne, B. K., & Marsh, E. J. (2015). Knowledge does not protect against illusory truth. Journal of Experimental Psychology: General, 144, 993–1002. http://dx.doi.org/10.1037/xge0000098

Flynn, D., Nyhan, B., & Reifler, J. (2017). The nature and origins of misperceptions: Understanding false and unsupported beliefs about politics. Advances in Political Psychology, 38(S1), 127–150. http://dx.doi.org/10.1111/pops.12394

Gabielkov, M., Ramachandran, A., Chaintreau, A., & Legout, A. (2016). Social clicks: What and who gets read on Twitter? Retrieved from http://dl.acm.org/citation.cfm?id=2901462

Goldman, R. (2016, December 24). Reading fake news, Pakistani minister directs nuclear threat at Israel. The New York Times. Retrieved from https://www.nytimes.com/2016/12/24/world/asia/pakistan-israel-khawaja-asif-fake-news-nuclear.html

Gottfried, J., & Shearer, E. (2016). News use across social media platforms 2016. Retrieved from http://www.journalism.org/2016/05/26/news-use-across-social-media-platforms-2016/

Hasher, L., Goldstein, D., & Toppino, T. (1977). Frequency and the conference of referential validity. Journal of Verbal Learning and Verbal Behavior, 16, 107–112. http://dx.doi.org/10.1016/S0022-5371(77)80012-1

Hawkins, S., & Hoch, S. (1992). Low-involvement learning: Memory without evaluation. Journal of Consumer Research, 19, 212–225. http://dx.doi.org/10.1086/209297

Horton, J., Rand, D., & Zeckhauser, R. (2011). The online laboratory: Conducting experiments in a real labor market. Experimental Economics, 14, 399–425. http://dx.doi.org/10.1007/s10683-011-9273-9

Johar, G., & Roggeveen, A. (2007). Changing false beliefs from repeated advertising: The role of claim-refutation alignment. Journal of Consumer Psychology, 17, 118–127. http://dx.doi.org/10.1016/S1057-7408(07)70018-9

Kahan, D. M. (2013). Ideology, motivated reasoning, and cognitive reflection. Judgment and Decision Making, 8, 407–424.

Kahan, D. M. (2017, May 24). Misconceptions, misinformation, and the logic of identity-protective cognition (Cultural Cognition Project Working Paper Series No. 164; Yale Law School, Public Law Research Paper No. 605; Yale Law & Economics Research Paper No. 575). Available at https://ssrn.com/abstract=2973067

Kahan, D. M., Peters, E., Wittlin, M., Slovic, P., Ouellette, L. L., Braman, D., & Mandel, G. (2012). The polarizing impact of science literacy and numeracy on perceived climate change risks. Nature Climate Change, 2, 732–735. http://dx.doi.org/10.1038/nclimate1547

Krupnikov, Y., & Levine, A. (2014). Cross-sample comparisons and external validity. Journal of Experimental Political Science, 1, 59–80. http://dx.doi.org/10.1017/xps.2014.7

Kunda, Z. (1990). The case for motivated reasoning. Psychological Bulletin, 108, 480–498. http://dx.doi.org/10.1037/0033-2909.108.3.480

Lazer, D. M. J., Baum, M. A., Benkler, Y., Berinsky, A. J., Greenhill, K. M., Menczer, F., ... Zittrain, J. L. (2018). The science of fake news. Science, 359, 1094–1096. http://dx.doi.org/10.1126/science.aao2998

Lewandowsky, S., Ecker, U. K. H., Seifert, C. M., Schwarz, N., & Cook, J. (2012). Misinformation and its correction: Continued influence and successful debiasing. Psychological Science in the Public Interest, 13, 106–131. http://dx.doi.org/10.1177/1529100612451018

Mercier, H., & Sperber, D. (2011). Why do humans reason? Arguments for an argumentative theory. Behavioral and Brain Sciences, 34, 57–74. http://dx.doi.org/10.1017/S0140525X10000968

Mosseri, A. (2016, June 29). Building a better news feed for you. Retrieved from http://newsroom.fb.com/news/2016/06/building-a-better-news-feed-for-you/

Mullinix, K., Leeper, T., Druckman, J., & Freese, J. (2015). The generalizability of survey experiments. Journal of Experimental Political Science, 2, 109–138. http://dx.doi.org/10.1017/XPS.2015.19

Nyhan, B., & Reifler, J. (2010). When corrections fail: The persistence of political misperceptions. Political Behavior, 32, 303–330. http://dx.doi.org/10.1007/s11109-010-9112-2

Pennycook, G., Fugelsang, J. A., & Koehler, D. J. (2015). What makes us think? A three-stage dual-process model of analytic engagement. Cognitive Psychology, 80, 34–72. http://dx.doi.org/10.1016/j.cogpsych.2015.05.001

Pennycook, G., & Rand, D. (2017). The implied truth effect: Attaching warnings to a subset of fake news stories increases perceived accuracy of stories without warnings. Retrieved from https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3035384

Pennycook, G., & Rand, D. G. (2018a). Lazy, not biased: Susceptibility to partisan fake news is better explained by lack of reasoning than by motivated reasoning. Cognition. http://dx.doi.org/10.1016/j.cognition.2018.06.011

Pennycook, G., & Rand, D. G. (2018b). Who falls for fake news? The roles of bullshit receptivity, overclaiming, familiarity, and analytic thinking. Retrieved from https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3023545

Polage, D. C. (2012). Making up history: False memories of fake news stories. Europe’s Journal of Psychology, 8, 245–250. http://dx.doi.org/10.5964/ejop.v8i2.456

Reber, R., Winkielman, P., & Schwarz, N. (1998). Effects of perceptual fluency on affective judgments. Psychological Science, 9, 45–48. http://dx.doi.org/10.1111/1467-9280.00008

Redlawsk, D. (2002). Hot cognition or cool consideration? Testing the effects of motivated reasoning on political decision making. Journal of Politics, 64, 1021–1044. http://dx.doi.org/10.1111/1468-2508.00161

Sanford, N., Veckenstedt, R., Moritz, S., Balzan, R. P., & Woodward, T. S. (2014). Impaired integration of disambiguating evidence in delusional schizophrenia patients. Psychological Medicine, 44, 2729–2738. http://dx.doi.org/10.1017/S0033291714000397

Schwartz, M. (1982). Repetition and rated truth value of statements. American Journal of Psychology, 95, 393–407. http://dx.doi.org/10.2307/1422132

Schwarz, N., Sanna, L. J., Skurnik, I., & Yoon, C. (2007). Metacognitive experiences and the intricacies of setting people straight: Implications for debiasing and public information campaigns. Advances in Experimental Social Psychology, 39, 127–161. http://dx.doi.org/10.1016/S0065-2601(06)39003-X

Shane, S. (2017, January 18). From headline to photograph, a fake news masterpiece. The New York Times. Retrieved from https://www.nytimes.com/2017/01/18/us/fake-news-hillary-clinton-cameron-harris.html

Silverman, C., Strapagiel, L., Shaban, H., & Hall, E. (2016, October 20). Hyperpartisan Facebook pages are publishing false and misleading information at an alarming rate. Buzzfeed News. Retrieved from https://www.buzzfeed.com/craigsilverman/partisan-fb-pages-analysis

Singal, J. (2017, January 27). This is a great psychological framework for understanding how fake news spreads. New York Magazine. Retrieved from http://nymag.com/scienceofus/2017/01/a-great-psychological-framework-for-understanding-fake-news.html

Skurnik, I., Yoon, C., Park, D. C., & Schwarz, N. (2005). How warnings about false claims become recommendations. Journal of Consumer Research, 31, 713–724. http://dx.doi.org/10.1086/426605

Swire, B., Berinsky, A. J., Lewandowsky, S., & Ecker, U. K. H. (2017). Processing political misinformation: Comprehending the Trump phenomenon. Royal Society Open Science, 4, 160802. http://dx.doi.org/10.1098/rsos.160802

Swire, B., Ecker, U. K. H., & Lewandowsky, S. (2017). The role of familiarity in correcting inaccurate information. Journal of Experimental Psychology: Learning, Memory, and Cognition, 43, 1948–1961. http://dx.doi.org/10.1037/xlm0000422

Tauber, S., Dunlosky, J., & Rawson, K. (2013). General knowledge norms: Updated and expanded from the Nelson and Narens (1980) norms. Behavior Research Methods, 45, 1115–1143. http://dx.doi.org/10.3758/s13428-012-0307-9

Unkelbach, C. (2007). Reversing the truth effect: Learning the interpretation of processing fluency in judgments of truth. Journal of Experimental Psychology: Learning, Memory, and Cognition, 33, 219–230. http://dx.doi.org/10.1037/0278-7393.33.1.219

Unkelbach, C., & Rom, S. (2017). A referential theory of the repetition-induced truth effect. Cognition, 160, 110–126. http://dx.doi.org/10.1016/j.cognition.2016.12.016

Wang, W.-C., Brashier, N. M., Wing, E. A., Marsh, E. J., & Cabeza, R. (2016). On known unknowns: Fluency and the neural mechanisms of illusory truth. Journal of Cognitive Neuroscience, 28, 739–746. http://dx.doi.org/10.1162/jocn_a_00923

Watson, D., Clark, L. A., & Tellegen, A. (1988). Development and validation of brief measures of positive and negative affect: The PANAS scales. Journal of Personality and Social Psychology, 54, 1063–1070. http://dx.doi.org/10.1037/0022-3514.54.6.1063

Whittlesea, B. W. (1993). Illusions of familiarity. Journal of Experimental Psychology: Learning, Memory, and Cognition, 19, 1235–1253. http://dx.doi.org/10.1037/0278-7393.19.6.1235

Yonelinas, A. (2002). The nature of recollection and familiarity: A review of 30 years of research. Journal of Memory and Language, 46, 441–517. http://dx.doi.org/10.1006/jmla.2002.2864

Yonelinas, A., & Jacoby, L. (2012). The process-dissociation approach two decades later: Convergence, boundary conditions, and new directions. Memory & Cognition, 40, 663–680. http://dx.doi.org/10.3758/s13421-012-0205-5

Zajonc, R. (1968). Attitudinal effects of mere exposure. Journal of Personality and Social Psychology, 9, 1–27. http://dx.doi.org/10.1037/h0025848

Zajonc, R. (2001). Mere exposure: A gateway to the subliminal. Current Directions in Psychological Science, 10, 224–228. http://dx.doi.org/10.1111/1467-8721.00154
Appendix
Items Used in the Three Studies
Table A1
Study 1 Items
Category Type Statement
Known False (extremely implausible) Smoking cigarettes is good for your lungs.
The earth is a perfect square.
Across the United States, only a total of 452 people voted in the last election.
A single elephant weighs less than a single ant.
True More people live in the United States than in Malta.
Cows are larger than sheep.
Coffee is a more popular drink in America than goat milk.
There are more than fifty stars in the universe.
Unknown False George was the name of the goldfish in the story of Pinocchio.
Johnson was the last name of the man who killed Jesse James.
Charles II was the first ruler of the Holy Roman Empire.
Canopus is the name of the brightest star in the sky, excluding the sun.
Tirpitz was the name of Germany’s largest battleship that was sunk in World War II.
John Kenneth Galbraith is the name of a well-known lawyer.
Huxley is the name of the scientist who discovered radium.
The Cotton Bowl takes place in Austin, Texas.
The drachma is the monetary unit for Macedonia.
Angel Falls is located in Brazil.
True The thigh bone is the largest bone in the human body.
Bolivia borders the Pacific Ocean.
The largest dam in the world is in Pakistan.
Mexico is the world’s largest producer of silver.
More presidents of the United States were born in Virginia than any other state.
Helsinki is the capital of Finland.
Marconi is the name of the inventor of the wireless radio.
Billy the Kid’s last name was Bonney.
Tiber is the name of the river that runs through Rome.
Canberra is the capital of Australia.
Note. The unknown items were taken from Arkes, Hackett, & Boehm (1989); however, two of the “true” items (about
Bolivia and Pakistan) are actually false. We retain the original labeling as it has no material effect on our results.
Table A2
Study 2: Fake-News Items
Political valence Headline Source
Pro-Republican Election Night: Hillary Was Drunk, Got Physical With Mook and Podesta dailyheadlines.net
Obama Was Going to Castro’s Funeral—Until Trump Told Him This... thelastlineofdefense.org
Donald Trump Sent His Own Plane to Transport 200 Stranded Marines uconservative.com
BLM Thug Protests President Trump With Selfie...Accidentally Shoots Himself in the Face freedomdaily.com
NYT David Brooks: “Trump Needs to Decide if He Prefers to Resign, Be Impeached or Get Assassinated” unitedstates-politics.com
Clint Eastwood Refuses to Accept Presidential Medal of Freedom From Obama, Says “He Is Not My President” incredibleusanews.com
Pro-Democrat Mike Pence: Gay Conversion Therapy Saved My Marriage ncscooper.com
Pennsylvania Federal Court Grants Legal Authority to Remove Trump After Russian Meddling bipartisanreport.com
Trump on Revamping the Military: We’re Bringing Back the Draft realnewsrightnow.com
FBI Director Comey Just Proved His Bias by Putting Trump Sign on His Front Lawn countercurrentnews.com
Sarah Palin Calls to Boycott Mall of America Because “Santa Was Always White in the Bible” politicono.com
Trump to Ban All TV Shows That Promote Gay Activity Starting With Empire as President colossil.com
Note. Fake- and real-news headlines as presented to participants can be found at the following link: https://osf.io/txf46/.
Table A3
Study 2: Real-News Items
Political valence Headline Source
Pro-Republican Dems Scramble to Prevent Their Own From Defecting to Trump foxnews.com
Majority of Americans Say Trump Can Keep Businesses, Poll Shows bloomberg.com
Donald Trump Strikes Conciliatory Tone in Meeting With Tech Executives wsj.com
Companies Are Already Canceling Plans to Move U.S. Jobs Abroad msn.com
She Claimed She Was Attacked by Men Who Yelled “Trump” and Grabbed Her Hijab. Police Say She Lied. washingtonpost.com
At GOP Convention Finale, Donald Trump Vows to Protect LGBTQ Community fortune.com
Pro-Democrat North Carolina Republicans Push Legislation to Hobble Incoming Democratic Governor huffingtonpost.com
Vladimir Putin “Personally Involved” in US Hack, Report Claims theguardian.com
Trump Lashes Out at Vanity Fair, One Day After It Lambastes His Restaurant npr.org
Donald Trump Says He’d “Absolutely” Require Muslims to Register nytimes.com
The Small Businesses Near Trump Tower Are Experiencing a Miniature Recession slate.com
Trump Questions Russia’s Election Meddling on Twitter—Inaccurately nytimes.com
Note. Fake- and real-news headlines as presented to participants can be found at the following link: https://osf.io/txf46/.
Table A4
Study 3: Fake-News Items
Political valence Headline Source
Pro-Republican Election Night: Hillary Was Drunk, Got Physical With Mook and Podesta dailyheadlines.net
Donald Trump Protester Speaks Out: “I Was Paid $3,500 to Protest Trump’s Rally” abcnews.com.co
NYT David Brooks: “Trump Needs to Decide if He Prefers to Resign, Be Impeached or Get Assassinated” unitedstates-politics.com
Clint Eastwood Refuses to Accept Presidential Medal of Freedom From Obama, Says “He Is Not My President” incredibleusanews.com
Donald Trump Sent His Own Plane to Transport 200 Stranded Marines uconservative.com
BLM Thug Protests President Trump With Selfie...Accidentally Shoots Himself in the Face freedomdaily.com
Pro-Democrat FBI Director Comey Just Proved His Bias by Putting Trump Sign on His Front Lawn countercurrentnews.com
Pennsylvania Federal Court Grants Legal Authority to Remove Trump After Russian Meddling bipartisanreport.com
Sarah Palin Calls to Boycott Mall of America Because “Santa Was Always White in the Bible” politicono.com
Trump to Ban All TV Shows That Promote Gay Activity Starting With Empire as President colossil.com
Mike Pence: Gay Conversion Therapy Saved My Marriage ncscooper.com
Trump on Revamping the Military: We’re Bringing Back the Draft realnewsrightnow.com
Note. Fake- and real-news headlines as presented to participants can be found at the following link: https://osf.io/txf46/.
Table A5
Study 3: Real-News Items

Political valence Headline Source

Pro-Republican House Speaker Ryan Praises Trump for Maintaining Congressional Strength cnbc.com
Donald Trump Strikes Conciliatory Tone in Meeting With Tech Executives wsj.com
At GOP Convention Finale, Donald Trump Vows to Protect LGBTQ Community fortune.com
Companies Are Already Canceling Plans to Move U.S. Jobs Abroad msn.com
Majority of Americans Say Trump Can Keep Businesses, Poll Shows bloomberg.com
She Claimed She Was Attacked by Men Who Yelled “Trump” and Grabbed Her Hijab. Police Say She Lied. washingtonpost.com
Pro-Democrat North Carolina Republicans Push Legislation to Hobble Incoming Democratic Governor huffingtonpost.com
Vladimir Putin “Personally Involved” in US Hack, Report Claims theguardian.com
Trump Lashes Out at Vanity Fair, One Day After It Lambastes His Restaurant npr.org
Trump Questions Russia’s Election Meddling on Twitter—Inaccurately nytimes.com
The Small Businesses Near Trump Tower Are Experiencing a Miniature Recession slate.com
Donald Trump Says He’d “Absolutely” Require Muslims to Register nytimes.com

Note. Fake- and real-news headlines as presented to participants can be found at the following link: https://osf.io/txf46/.

Received August 28, 2017
Revision received April 25, 2018
Accepted May 2, 2018