Psychological Science in the Public Interest
13(3) 106–131
© The Author(s) 2012
Reprints and permission: sagepub.com/journalsPermissions.nav
DOI: 10.1177/1529100612451018
http://pspi.sagepub.com
Misinformation and Its Correction: Continued Influence and Successful Debiasing

Stephan Lewandowsky1, Ullrich K. H. Ecker1, Colleen M. Seifert2, Norbert Schwarz2, and John Cook1,3
1University of Western Australia, 2University of Michigan, and 3University of Queensland

Corresponding Author:
Stephan Lewandowsky, School of Psychology, University of Western Australia, Crawley, Western Australia 6009, Australia
E-mail: stephan.lewandowsky@uwa.edu.au

Summary
The widespread prevalence and persistence of misinformation in contemporary societies, such as the false belief that there is a link between childhood vaccinations and autism, is a matter of public concern. For example, the myths surrounding vaccinations, which prompted some parents to withhold immunization from their children, have led to a marked increase in vaccine-preventable disease, as well as unnecessary public expenditure on research and public-information campaigns aimed at rectifying the situation.
We first examine the mechanisms by which such misinformation is disseminated in society, both inadvertently and purposely. Misinformation can originate from rumors but also from works of fiction, governments and politicians, and vested interests. Moreover, changes in the media landscape, including the arrival of the Internet, have fundamentally influenced the ways in which information is communicated and misinformation is spread.
We next move to misinformation at the level of the individual, and review the cognitive factors that often render misinformation resistant to correction. We consider how people assess the truth of statements and what makes people believe certain things but not others. We look at people’s memory for misinformation and answer the questions of why retractions of misinformation are so ineffective in memory updating and why efforts to retract misinformation can even backfire and, ironically, increase misbelief. Though ideology and personal worldviews can be major obstacles for debiasing, there nonetheless are a number of effective techniques for reducing the impact of misinformation, and we pay special attention to these factors that aid in debiasing.
We conclude by providing specific recommendations for the debunking of misinformation. These recommendations pertain to the ways in which corrections should be designed, structured, and applied in order to maximize their impact. Grounded in cognitive psychological theory, these recommendations may help practitioners—including journalists, health professionals, educators, and science communicators—design effective misinformation retractions, educational tools, and public-information campaigns.

Keywords
misinformation, false beliefs, memory updating, debiasing

On August 4, 1961, a young woman gave birth to a healthy baby boy in a hospital at 1611 Bingham St., Honolulu. That child, Barack Obama, later became the 44th president of the United States. Notwithstanding the incontrovertible evidence for the simple fact of his American birth—from a Hawaiian birth certificate to birth announcements in local papers to the fact that his pregnant mother went into the Honolulu hospital and left it cradling a baby—a group known as “birthers” claimed Obama had been born outside the United States and was therefore not eligible to assume the presidency. Even though the claims were met with skepticism by the media, polls at the time showed that they were widely believed by a sizable proportion of the public (Travis, 2010), including a majority of voters in Republican primary elections in 2011 (Barr, 2011).
In the United Kingdom, a 1998 study suggesting a link between a common childhood vaccine and autism generated considerable fear in the general public concerning the safety of the vaccine. The UK Department of Health and several other health organizations immediately pointed to the lack of evidence for such claims and urged parents not to reject the vaccine. The
media subsequently widely reported that none of the original
claims had been substantiated. Nonetheless, in 2002, between
20% and 25% of the public continued to believe in the vaccine-
autism link, and a further 39% to 53% continued to believe there
was equal evidence on both sides of the debate (Hargreaves,
Lewis, & Speers, 2003). More worryingly still, a substantial
number of health professionals continued to believe the unsub-
stantiated claims (Petrovic, Roberts, & Ramsay, 2001). Ulti-
mately, it emerged that the first author of the study had failed to
disclose a significant conflict of interest; thereafter, most of the
coauthors distanced themselves from the study, the journal offi-
cially retracted the article, and the first author was eventually
found guilty of misconduct and lost his license to practice medi-
cine (Colgrove & Bayer, 2005; Larson, Cooper, Eskola, Katz, &
Ratzan, 2011).
Another particularly well-documented case of the persis-
tence of mistaken beliefs despite extensive corrective efforts
involves the decades-long deceptive advertising for Listerine
mouthwash in the U.S. Advertisements for Listerine had falsely
claimed for more than 50 years that the product helped prevent
or reduce the severity of colds and sore throats. After a long
legal battle, the U.S. Federal Trade Commission mandated cor-
rective advertising that explicitly withdrew the deceptive
claims. For 16 months between 1978 and 1980, the company
ran an ad campaign in which the cold-related claims were
retracted in 5-second disclosures midway through 30-second
TV spots. Notwithstanding a $10 million budget, the campaign
was only moderately successful (Wilkie, McNeill, & Mazis,
1984). Using a cross-sectional comparison of nationally repre-
sentative samples at various points during the corrective cam-
paign, a telephone survey by Armstrong, Gurol, and Russ (1983)
did reveal a significant reduction in consumers’ belief that Lis-
terine could alleviate colds, but overall levels of acceptance of
the false claim remained high. For example, 42% of Listerine
users continued to believe that the product was still promoted as
an effective cold remedy, and more than half (57%) reported
that the product’s presumed medicinal effects were a key factor
in their purchasing decision (compared with 15% of consumers
of a competing product).
Those results underscore the difficulties of correcting wide-
spread belief in misinformation. These difficulties arise from
two distinct factors. First, there are cognitive variables within
each person that render misinformation “sticky.” We focus pri-
marily on those variables in this article. The second factor is
purely pragmatic, and it relates to the ability to reach the target
audience. The real-life Listerine quasi-experiment is particularly informative in this regard, because the corrective campaign’s effectiveness was limited even though the company had a fairly large budget for disseminating corrective information.
What causes the persistence of erroneous beliefs in sizable
segments of the population? Assuming corrective information
has been received, why does misinformation1 continue to
influence people’s thinking despite clear retractions? The lit-
erature on these issues is extensive and complex, but it permits
several reasonably clear conclusions, which we present in the
remainder of this article. Psychological science has much light to shed on the cognitive processes by which individuals acquire, process, and update information.
We focus primarily on individual-level cognitive processes
as they relate to misinformation. However, a discussion of the
continued influence of misinformation cannot be complete
without addressing the societal mechanisms that give rise to
the persistence of false beliefs in large segments of the popula-
tion. Understanding why one might reject evidence about
President Obama’s place of birth is a matter of individual
cognition; however, understanding why more than half of
Republican primary voters expressed doubt about the presi-
dent’s birthplace (Barr, 2011) requires a consideration of not
only why individuals cling to misinformation, but also how
information—especially false information—is disseminated
through society. We therefore begin our analysis at the societal
level, first by highlighting the societal costs of widespread
misinformation, and then by turning to the societal processes
that permit its spread.
The Societal Cost of Misinformation
It is a truism that a functioning democracy relies on an edu-
cated and well-informed populace (Kuklinski, Quirk, Jerit,
Schwieder, & Rich, 2000). The processes by which people
form their opinions and beliefs are therefore of obvious public
interest, particularly if major streams of beliefs persist that are
in opposition to established facts. If a majority believes in
something that is factually incorrect, the misinformation may
form the basis for political and societal decisions that run
counter to a society’s best interest; if individuals are misin-
formed, they may likewise make decisions for themselves and
their families that are not in their best interest and can have
serious consequences. For example, following the unsubstan-
tiated claims of a vaccination-autism link, many parents
decided not to immunize their children, which has had dire
consequences for both individuals and societies, including a
marked increase in vaccine-preventable disease and hence
preventable hospitalizations, deaths, and the unnecessary
expenditure of large amounts of money for follow-up research
and public-information campaigns aimed at rectifying the situ-
ation (Larson et al., 2011; Poland & Spier, 2010; Ratzan,
2010).
Reliance on misinformation differs from ignorance, which
we define as the absence of relevant knowledge. Ignorance,
too, can have obvious detrimental effects on decision making,
but, perhaps surprisingly, those effects may be less severe than
those arising from reliance on misinformation. Ignorance may
be a lesser evil because in the self-acknowledged absence of
knowledge, people often turn to simple heuristics when mak-
ing decisions. Those heuristics, in turn, can work surprisingly
well, at least under favorable conditions. For example, mere
familiarity with an object often permits people to make accu-
rate guesses about it (Goldstein & Gigerenzer, 2002; Newell &
Fernandez, 2006). Moreover, people typically have relatively
low levels of confidence in decisions made solely on the basis
of such heuristics (De Neys, Cromheeke, & Osman, 2011;
Glöckner & Bröder, 2011). In other words, ignorance rarely
leads to strong support for a cause, in contrast to false beliefs
based on misinformation, which are often held strongly and
with (perhaps infectious) conviction. For example, those who
most vigorously reject the scientific evidence for climate
change are also those who believe they are best informed
about the subject (Leiserowitz, Maibach, Roser-Renouf, &
Hmielowski, 2011).
The costs of misinformation to society are thus difficult to
ignore, and its widespread persistence calls for an analysis of
its origins.
Origins of Misinformation
Misinformation can be disseminated in a number of ways,
often in the absence of any intent to mislead. For example, the
timely news coverage of unfolding events is by its very nature
piecemeal and requires occasional corrections of earlier state-
ments. As a case in point, the death toll after a major natural
disaster—such as the 2011 tsunami in Japan—is necessarily
updated until a final estimate becomes available. Similarly, a
piece of information that is considered “correct” at any given
stage can later turn out to have been erroneous.
Indeed, this piecemeal approach to knowledge construction
is the very essence of the scientific process, through which
isolated initial findings are sometimes refuted or found not to
be replicable. It is for this reason that scientific conclusions are
usually made and accepted only after some form of consensus
has been reached on the basis of multiple lines of converging
evidence. Misinformation that arises during an evolving event
or during the updating of knowledge is unavoidable as well as
unintentional; however, there are other sources of misinforma-
tion that are arguably less benign. The particular sources we
discuss in this article are:
• Rumors and fiction. Societies have struggled with the
misinformation-spreading effects of rumors for cen-
turies, if not millennia; what is perhaps less obvious
is that even works of fiction can give rise to lasting
misconceptions of the facts.
• Governments and politicians. Governments and poli-
ticians can be powerful sources of misinformation,
whether inadvertently or by design.
• Vested interests. Corporate interests have a long and
well-documented history of seeking to influence
public debate by promulgating incorrect information.
At least on some recent occasions, such systematic
campaigns have also been directed against corporate
interests, by nongovernmental interest groups.
• The media. Though the media are by definition
seeking to inform the public, it is notable that they
are particularly prone to spreading misinformation
for systemic reasons that are worthy of analysis and
exposure. With regard to new media, the Internet has
placed immense quantities of information at our fin-
gertips, but it has also contributed to the spread of
misinformation. The growing use of social networks
may foster the quick and wide dissemination of mis-
information. The fractionation of the information
landscape by new media is an important contributor
to misinformation’s particular resilience to correction.
We next consider each of these sources in turn.
Rumors and fiction
Rumors and urban myths constitute important sources of mis-
information. For example, in 2006, a majority of Democrats
believed that the George W. Bush administration either assisted
in the 9/11 terror attacks or took no action to stop them (Nyhan,
2010). This widespread belief is all the more remarkable
because the conspiracy theory found virtually no traction in
the mainstream media.
Human culture strongly depends on people passing on
information. Although the believability of information has
been identified as a factor determining whether it is propa-
gated (Cotter, 2008), people seem to mainly pass on informa-
tion that will evoke an emotional response in the recipient,
irrespective of the information’s truth value. Emotional arousal
in general increases people’s willingness to pass on informa-
tion (Berger, 2011). Thus, stories containing content likely to
evoke disgust, fear, or happiness are spread more readily from
person to person and more widely through social media than
are neutral stories (Cotter, 2008; Heath, Bell, & Sternberg,
2001; K. Peters, Kashima, & Clark, 2009). Accordingly, the
most effective “misinformers” about vaccines are parents who
truly believe that their child has been injured by a vaccine.
When such individuals present their mistaken beliefs as fact,
their claims may be discussed on popular TV and radio talk
shows and made the subject of TV dramas and docudramas
(Myers & Pineda, 2009).
A related but perhaps more surprising source of misinfor-
mation is literary fiction. People extract knowledge even from
sources that are explicitly identified as fictional. This process
is often adaptive, because fiction frequently contains valid
information about the world. For example, non-Americans’
knowledge of U.S. traditions, sports, climate, and geography
partly stems from movies and novels, and many Americans
know from movies that Britain and Australia have left-hand
traffic. By definition, however, fiction writers are not obliged
to stick to the facts, which creates an avenue for the spread of
misinformation, even by stories that are explicitly identified as
fictional. A study by Marsh, Meade, and Roediger (2003)
showed that people relied on misinformation acquired from
clearly fictitious stories to respond to later quiz questions,
even when these pieces of misinformation contradicted com-
mon knowledge. In most cases, source attribution was intact,
so people were aware that their answers to the quiz questions
were based on information from the stories, but reading the
stories also increased people’s illusory belief of prior knowl-
edge. In other words, encountering misinformation in a fic-
tional context led people to assume they had known it all along
and to integrate this misinformation with their prior knowl-
edge (Marsh & Fazio, 2006; Marsh et al., 2003).
The effects of fictional misinformation have been shown to
be stable and difficult to eliminate. Marsh and Fazio (2006)
reported that prior warnings were ineffective in reducing the
acquisition of misinformation from fiction, and that acquisi-
tion was only reduced (not eliminated) under conditions of
active on-line monitoring—when participants were instructed
to actively monitor the contents of what they were reading and
to press a key every time they encountered a piece of misinfor-
mation (see also Eslick, Fazio, & Marsh, 2011). Few people
would be so alert and mindful when reading fiction for enjoy-
ment. These links between fiction and incorrect knowledge are
particularly concerning when popular fiction pretends to accu-
rately portray science but fails to do so, as was the case with
Michael Crichton’s novel State of Fear. The novel misrepre-
sented the science of global climate change but was neverthe-
less introduced as “scientific” evidence into a U.S. Senate
committee (Allen, 2005; Leggett, 2005).
Writers of fiction are expected to depart from reality, but in
other instances, misinformation is manufactured intentionally.
There is considerable peer-reviewed evidence pointing to the
fact that misinformation can be intentionally or carelessly dis-
seminated, often for political ends or in the service of vested
interests, but also through routine processes employed by the
media.
Governments and politicians
In the lead-up to the U.S.-led invasion of Iraq in 2003,
U.S. government officials proclaimed there was no doubt that
Saddam Hussein had weapons of mass destruction (WMDs)
and was ready to use them against his enemies. The Bush
administration also juxtaposed Iraq and the 9/11 terrorist
attacks, identifying Iraq as the frontline in the “War on Terror”
(Reese & Lewis, 2009) and implying that it had intelligence
linking Iraq to al-Qaida. Although no WMDs were ever found
in Iraq and its link to al-Qaida turned out to be unsubstanti-
ated, large segments of the U.S. public continued to believe
the administration’s earlier claims, with some 20% to 30% of
Americans believing that WMDs had actually been discovered
in Iraq years after the invasion (Kull, Ramsay, & Lewis, 2003;
Kull et al., 2006) and around half of the public endorsing links
between Iraq and al-Qaida (Kull et al., 2006). These mistaken
beliefs persisted even though all tentative media reports about
possible WMD sightings during the invasion were followed
by published corrections, and even though the nonexistence of
WMDs in Iraq and the absence of links between Iraq and al-
Qaida were eventually widely reported and became the official
bipartisan U.S. position through the Duelfer report.
Politicians were also a primary source of misinformation
during the U.S. health care debate in 2009. Misinformation
about the Obama health plan peaked when Sarah Palin posted
a comment about “death panels” on her Facebook page. Within
5 weeks, 86% of Americans had heard the death-panel claim.
Of those who heard the myth, fully half either believed it or
were not sure of its veracity. Time magazine reported that the
single phrase “death panels” nearly derailed Obama’s health
care plan (Nyhan, 2010).
Although Sarah Palin’s turn of phrase may have been
spontaneous and its consequences unplanned, analyses have
revealed seemingly systematic efforts to misinform the
public—for example, about climate change (McCright &
Dunlap, 2010). During the administration of President George
W. Bush, political appointees demonstrably interfered with
scientific assessments of climate change (e.g., Mooney, 2007),
and NASA’s inspector general found in 2008 that in previous
years, the agency’s “Office of Public Affairs managed the
topic of climate change in a manner that reduced, marginal-
ized, or mischaracterized climate change science made avail-
able to the general public” (Winters, 2008, p. 1).
The public seems to have some awareness of the presence
of politically motivated misinformation in society, especially
during election campaigns (Ramsay, Kull, Lewis, & Subias,
2010). However, when asked to identify specific instances of
such misinformation, people are often unable to differentiate
between information that is false and other information that is
correct (Ramsay et al., 2010). Thus, public awareness of the
problem is no barrier to widespread and lasting confusion.
Vested interests and nongovernmental
organizations (NGOs)
There is also evidence of concerted efforts by vested interests
to disseminate misinformation, especially when it comes to
issues of the environment (e.g., Jacques, Dunlap, & Freeman,
2008) and public health (e.g., Oreskes & Conway, 2010;
Proctor, 2008) that have the potential to motivate policies that
would impose a regulatory burden on certain industries (e.g.,
tobacco manufacturers or the fossil-fuel industry). This pro-
cess of willful manufacture of mistaken beliefs has been
described as “agnogenesis” (Bedford, 2010). There is consid-
erable legal and scientific evidence for this process in at least
two arenas—namely, industry-based responses to the health
consequences of smoking and to climate change.
In 2006, a U.S. federal court ruled that major domestic cig-
arette manufacturers were guilty of conspiring to deny, distort,
and minimize the hazards of cigarette smoking (Smith et al.,
2011). Similarly, starting in the early 1990s, the American
Petroleum Institute, the Western Fuels Association (a coal-
fired electrical industry consortium), and The Advancement of
Sound Science Coalition (TASSC; a group sponsored by
Philip Morris) drafted and promoted campaigns to cast doubt
on the science of climate change (Hoggan, Littlemore, &
Littlemore, 2009). These industry groups have also formed an
alliance with conservative think tanks, using a handful of sci-
entists (typically experts from a different domain) as spokes-
persons (Oreskes & Conway, 2010). Accordingly, more than
90% of books published between 1972 and 2005 that expressed
skepticism about environmental issues have been linked to
conservative think tanks (Jacques et al., 2008).
However, the spreading of misinformation is by no means
always based on concerted efforts by vested interests. On the
contrary, industry itself has been harmed by misinformation in
some instances. For example, the vaccination-autism myth has
led to decreased vaccination rates (Owens, 2002; Poland &
Jacobsen, 2011) and hence arguably decreased the revenue
and profits of pharmaceutical companies. A similar case can
be made for genetically modified (GM) foods, which are
strongly opposed by sizable segments of the public, particu-
larly in Europe (e.g., Gaskell et al., 2003; Mielby, Sandøe, &
Lassen, 2012). The magnitude of opposition to GM foods
seems disproportionate to their actual risks as portrayed by
expert bodies (e.g., World Health Organization, 2005), and it
appears that people often rely on NGOs, such as Greenpeace,
that are critical of peer-reviewed science on the issue to form
their opinions about GM foods (Einsele, 2007). These alterna-
tive sources have been roundly criticized for spreading misin-
formation (e.g., Parrott, 2010).
Media
Given that people largely obtain their information from the
media (broadly defined to include print newspapers and maga-
zines, radio, TV, and the Internet), the media’s role in the dis-
semination of misinformation deserves to be explored. We
have already mentioned that the media sometimes unavoid-
ably report incorrect information because of the need for
timely news coverage. There are, however, several other sys-
temic reasons why the media might get things wrong.
First, the media can inadvertently oversimplify, misrepre-
sent, or overdramatize scientific results. Science is complex,
and for the layperson, the details of many scientific studies are
difficult to understand or of marginal interest. Science com-
munication therefore requires simplification in order to be
effective. Any oversimplification, however, can lead to misun-
derstanding. For example, after a study forecasting future
global extinctions as a result of climate change was published
in Nature, it was widely misrepresented by news media
reports, which made the consequences seem more catastrophic
and the timescale shorter than actually projected (Ladle,
Jepson, & Whittaker, 2005). These mischaracterizations of
scientific results imply that scientists need to take care to com-
municate their results clearly and unambiguously, and that
press releases need to be meticulously constructed to avoid
misunderstandings by the media (e.g., Riesch & Spiegelhalter,
2011).
Second, in all areas of reporting, journalists often aim to
present a “balanced” story. In many instances, it is indeed
appropriate to listen to both sides of a story; however, if media
stick to journalistic principles of “balance” even when it is not
warranted, the outcome can be highly misleading (Clarke,
2008). For example, if the national meteorological service
issued a severe weather warning for tomorrow, no one would—
or should—be interested in their neighbor Jimmy’s opinion
that it will be a fine day. For good reasons, a newspaper’s
weather forecast relies on expert assessment and excludes lay
opinions.
On certain hotly contested issues, there is evidence that the
media have systematically overextended the “balance” frame.
For example, the overwhelming majority (more than 95%;
Anderegg, Prall, Harold, & Schneider, 2010; Doran &
Zimmerman, 2009) of actively publishing climate scientists
agree on the fundamental facts that the globe is warming and
that this warming is due to greenhouse-gas emissions caused
by humans; yet the contrarian opinions of nonexperts are fea-
tured prominently in the media (Boykoff & Boykoff, 2004). A
major Australian TV channel recently featured a self-styled
climate “expert” whose diverse qualifications included author-
ship of a book on cat palmistry (Readfearn, 2011). This asym-
metric choice of “experts” leads to the perception of a debate
about issues that were in fact resolved in the relevant scientific
literature long ago.
Although these systemic problems are shared to varying
extents by most media outlets, the problems vary considerably
both across time and among outlets. In the U.S., expert voices
have repeatedly expressed alarm at the decline in “hard” news
coverage since the 1990s and the growth of sensationalist
coverage devoid of critical analysis or in-depth investigation
(e.g., Bennett, 2003). After the invasion of Iraq in 2003,
the American media attracted much censure for their often
uncritical endorsement of prewar claims by the Bush adminis-
tration about Iraqi WMDs (e.g., Artz & Kamalipour, 2004;
Kamalipour & Snow, 2004; Rampton & Stauber, 2003; Tiffen,
2009), although there was considerable variation among outlets
in the accuracy of their coverage, as revealed by survey research
into the persistence of misinformation. Stephen Kull and his
colleagues (e.g., Kull et al., 2003) have repeatedly shown that
the level of belief in misinformation among segments of the
public varies dramatically according to preferred news outlets,
running along a continuum from Fox News (whose viewers are
the most misinformed on most issues) to National Public Radio
(whose listeners are the least misinformed overall).
The role of the Internet. The Internet has revolutionized the
availability of information; however, it has also facilitated the
spread of misinformation because it obviates the use of con-
ventional “gate-keeping” mechanisms, such as professional
editors. This is particularly the case with the development of
Web 2.0, whereby Internet users have moved from being pas-
sive consumers of information to actively creating content on
Web sites such as Twitter and YouTube or blogs.
People who use new media, such as blogs (McCracken,
2011), to source their news report that they find them fairer,
more credible, and more in-depth than traditional sources
(T. J. Johnson & Kaye, 2004). Blog users judged war blogs to
be more credible sources for news surrounding the conflicts in
Iraq and Afghanistan than traditional media (T. J. Johnson &
Kaye, 2010).
On the other hand, information on the Internet can be highly
misleading, and it is progressively replacing expert advice.
For example, people are increasingly sourcing health care
information from social networks. In 2009, 61% of American
adults looked online for health information (Fox & Jones,
2009). Relying on the Internet as a source of health informa-
tion is fraught with risk because its reliability is highly vari-
able. Among the worst performers in terms of accuracy are
dietary Web sites: A survey of the first 50 Web sites matching
the search term “weight loss diets” revealed that only 3 deliv-
ered sound dietary advice (Miles, Petrie, & Steel, 2000). Other
domains fare more favorably: A survey of English-language
Web sites revealed that 75% of sites on depression were com-
pletely accurate and that 86% of obesity-related Web sites
were at least partially accurate (Berland et al., 2001).
Online videos are an effective and popular means of
disseminating information (and misinformation)—1.2 billion
people viewed online videos in October 2011 (Radwanick,
2011). A survey of 153 YouTube videos matching the
search terms “vaccination” and “immunization” revealed that
approximately half of the videos were not explicitly support-
ive of immunization, and that the information in the anti-
immunization videos often contradicted official reference
material (Keelan, Pavri-Garcia, Tomlinson, & Wilson, 2007).
A survey of YouTube videos about the H1N1 influenza pan-
demic revealed that 61.3% of the videos contained useful
information about the disease, whereas 23% were misleading
(Pandey, Patni, Singh, Sood, & Singh, 2010).
Finally, there are hoax Web sites whose sole purpose is to
disseminate misinformation. Although these sites can have
many objectives, including parody, the more dangerous sites
pass themselves off as official sources of information. For
instance, the site martinlutherking.org (created by a White-
power organization) disseminates hateful information about
Dr. Martin Luther King while pretending to be an official King
Web site (Piper, 2000).
Consequences of increasing media fractionation. The
growth of cable TV, talk radio, and the Internet has made it
easier for people to find news sources that support their existing
views, a phenomenon known as selective exposure (Prior,
2003). When people have more media options to choose from,
they are more biased toward like-minded media sources. The
emergence of the Internet in particular has led to a fractionation
of the information landscape into “echo chambers”—that is,
(political) blogs that primarily link to other blogs of similar per-
suasion and not to those with opposing viewpoints. More than
half of blog readers seek out blogs that support their views,
whereas only 22% seek out blogs espousing opposing views,
a phenomenon that has led to the creation of “cyber-ghettos”
(T. J. Johnson, Bichard, & Zhang, 2009). These cyber-ghettos
have been identified as one reason for the increasing polariza-
tion of political discourse (McCright, 2011; Stroud, 2010).
One consequence of a fractionated information landscape
is the emergence of “strategic extremism” among politicians
(Glaeser, Ponzetto, & Shapiro, 2005). Although politicians
have traditionally vied for the attention of the political center,
extremism can be strategically effective if it garners more
votes at one extreme of the political spectrum than it loses in
the center or the opposite end of the spectrum. A precondition
for the success—defined as a net gain of votes—of strategic
extremism is a fractionated media landscape in which infor-
mation (or an opinion) can be selectively channeled to people
who are likely to support it, without alienating others. The
long-term effects of such strategic extremism, however, may
well include a pernicious and prolonged persistence of misin-
formation in large segments of society, especially when such
information leaks out of cyber-ghettos into the mainstream.
This fractionation of the information landscape is important in
that, as we show later in this article, worldview plays a major
role in people’s resistance to corrections of misinformation.
From Individual Cognition to Debiasing
Strategies
We now turn to the individual-level cognitive processes that
are involved in the acquisition and persistence of misinforma-
tion. In the remainder of the article, we address the following
points:
We begin by considering how people assess the truth of a
statement: What makes people believe certain things, but not
others?
Once people have acquired information and believe in it,
why do corrections and retractions so often fail? Worse yet,
why can attempts at retraction backfire, entrenching belief in
misinformation rather than reducing it?
After addressing these questions, we survey the successful
techniques by which the impact of misinformation can be
reduced.
We then discuss how, in matters of public and political
import, people’s personal worldviews, or ideology, can play a
crucial role in preventing debiasing, and we examine how
these difficulties arise and whether they can be overcome.
Finally, we condense our discussion into specific recom-
mendations for practitioners and consider some ethical impli-
cations and practical limitations of debiasing efforts in
general.
Assessing the Truth of a Statement:
Recipients’ Strategies
Misleading information rarely comes with a warning label.
People usually cannot recognize that a piece of information is
incorrect until they receive a correction or retraction. For bet-
ter or worse, the acceptance of information as true is favored
by tacit norms of everyday conversational conduct: Informa-
tion relayed in conversation comes with a “guarantee of rele-
vance” (Sperber & Wilson, 1986), and listeners proceed on the
assumption that speakers try to be truthful, relevant, and clear,
unless evidence to the contrary calls this default into question
(Grice, 1975; Schwarz, 1994, 1996). Some research has even
suggested that to comprehend a statement, people must at least
temporarily accept it as true (Gilbert, 1991). On this view,
belief is an inevitable consequence of—or, indeed, precursor
to—comprehension.
Although suspension of belief is possible (Hasson, Sim-
mons, & Todorov, 2005; Schul, Mayo, & Burnstein, 2008), it
seems to require a high degree of attention, considerable
implausibility of the message, or high levels of distrust at the
time the message is received. So, in most situations, the deck
is stacked in favor of accepting information rather than reject-
ing it, provided there are no salient markers that call the speak-
er’s intention of cooperative conversation into question. Going
beyond this default of acceptance requires additional motiva-
tion and cognitive resources: If the topic is not very important
to you, or you have other things on your mind, misinformation
will likely slip in.
When people do thoughtfully evaluate the truth value of
information, they are likely to attend to a limited set of fea-
tures. First, is this information compatible with other things I
believe to be true? Second, is this information internally
coherent—do the pieces form a plausible story? Third, does it
come from a credible source? Fourth, do other people believe
it? These questions can be answered on the basis of declarative
or experiential information—that is, by drawing on one’s
knowledge or by relying on feelings of familiarity and fluency
(Schwarz, 2004; Schwarz, Sanna, Skurnik, & Yoon, 2007). In
the following section, we examine those issues.
Is the information compatible with
what I believe?
As numerous studies in the literature on social judgment and
persuasion have shown, information is more likely to be
accepted by people when it is consistent with other things
they assume to be true (for reviews, see McGuire, 1972;
Wyer, 1974). People assess the logical compatibility of the
information with other facts and beliefs. Once a new piece of
knowledge-consistent information has been accepted, it is
highly resistant to change, all the more so the larger the com-
patible knowledge base is. From a judgment perspective, this
resistance derives from the large amount of supporting evi-
dence (Wyer, 1974); from a cognitive-consistency perspective
(Festinger, 1957), it derives from the numerous downstream
inconsistencies that would arise from rejecting the prior infor-
mation as false. Accordingly, compatibility with other knowl-
edge increases the likelihood that misleading information will
be accepted, and decreases the likelihood that it will be suc-
cessfully corrected.
When people encounter a piece of information, they can
check it against other knowledge to assess its compatibility.
This process is effortful, and it requires motivation and cogni-
tive resources. A less demanding indicator of compatibility is
provided by one’s meta-cognitive experience and affective
response to new information. Many theories of cognitive con-
sistency converge on the assumption that information that is
inconsistent with one’s beliefs elicits negative feelings
(Festinger, 1957). Messages that are inconsistent with one’s
beliefs are also processed less fluently than messages that are
consistent with one’s beliefs (Winkielman, Huber, Kavanagh,
& Schwarz, 2012). In general, fluently processed information
feels more familiar and is more likely to be accepted as true;
conversely, disfluency elicits the impression that something
doesn’t quite “feel right” and prompts closer scrutiny of the
message (Schwarz et al., 2007; Song & Schwarz, 2008). This
phenomenon is observed even when the fluent processing of a
message merely results from superficial characteristics of its
presentation. For example, the same statement is more likely
to be judged as true when it is printed in high rather than low
color contrast (Reber & Schwarz, 1999), presented in a rhym-
ing rather than nonrhyming form (McGlone & Tofighbakhsh,
2000), or delivered in a familiar rather than unfamiliar accent
(Levy-Ari & Keysar, 2010). Moreover, misleading questions
are less likely to be recognized as such when printed in an
easy-to-read font (Song & Schwarz, 2008).
As a result, analytic as well as intuitive processing favors
the acceptance of messages that are compatible with a recipi-
ent’s preexisting beliefs: The message contains no elements
that contradict current knowledge, is easy to process, and
“feels right.”
Is the story coherent?
Whether a given piece of information will be accepted as true
also depends on how well it fits a broader story that lends
sense and coherence to its individual elements. People are par-
ticularly likely to use an assessment strategy based on this
principle when the meaning of one piece of information can-
not be assessed in isolation because it depends on other, related
pieces; use of this strategy has been observed in basic research
on mental models (for a review, see Johnson-Laird, 2012),
as well as extensive analyses of juries’ decision making
(Pennington & Hastie, 1992, 1993).
A story is compelling to the extent that it organizes infor-
mation without internal contradictions in a way that is compat-
ible with common assumptions about human motivation and
behavior. Good stories are easily remembered, and gaps are
filled with story-consistent intrusions. Once a coherent story
has been formed, it is highly resistant to change: Within the
story, each element is supported by the fit of other elements,
and any alteration of an element may be made implausible
by the downstream inconsistencies it would cause. Coherent
stories are easier to process than incoherent stories are
(Johnson-Laird, 2012), and people draw on their processing
experience when they judge a story’s coherence (Topolinski,
2012), again giving an advantage to material that is easy to
process.
Is the information from a credible source?
When people lack the motivation, opportunity, or expertise to
process a message in sufficient detail, they can resort to an
assessment of the communicator’s credibility. Not surprisingly,
the persuasiveness of a message increases with the communica-
tor’s perceived credibility and expertise (for reviews, see Eagly
& Chaiken, 1993; Petty & Cacioppo, 1986). However, even
untrustworthy sources are often influential. Several factors con-
tribute to this observation. People are often insensitive to con-
textual cues that bear on the credibility of a source. For example,
expert testimony has been found to be similarly persuasive
whether it is provided under oath or in another context (Nyhan,
2011). Similarly, Cho, Martens, Kim, and Rodrigue (2011)
found that messages denying climate change were similarly
influential whether recipients were told they came from a study
“funded by Exxon” or from a study “funded from donations by
people like you.” Such findings suggest that situational indica-
tors of credibility may often go unnoticed, consistent with peo-
ple’s tendency to focus on features of the actor rather than the
situation (Ross, 1977). In addition, the gist of a message is often
more memorable than its source, and an engaging story from an
untrustworthy source may be remembered and accepted long
after the source has been forgotten (for a review of such “sleeper
effects,” see Eagly & Chaiken, 1993).
People’s evaluation of a source’s credibility can be based
on declarative information, as in the above examples, as well
as experiential information. The mere repetition of an unknown
name can cause it to seem familiar, making its bearer “famous
overnight” (Jacoby, Kelley, Brown, & Jasechko, 1989)—and
hence more credible. Even when a message is rejected at the
time of initial exposure, that initial exposure may lend it some
familiarity-based credibility if the recipient hears it again.
Do others believe this information?
Repeated exposure to a statement is known to increase its
acceptance as true (e.g., Begg, Anas, & Farinacci, 1992;
Hasher, Goldstein, & Toppino, 1977). In a classic study of
rumor transmission, Allport and Lepkin (1945) observed that
the strongest predictor of belief in wartime rumors was simple
repetition. Repetition effects may create a perceived social
consensus even when no consensus exists. Festinger (1954)
referred to social consensus as a “secondary reality test”: If
many people believe a piece of information, there’s probably
something to it. Because people are more frequently exposed
to widely shared beliefs than to highly idiosyncratic ones, the
familiarity of a belief is often a valid indicator of social con-
sensus. But, unfortunately, information can seem familiar for
the wrong reason, leading to erroneous perceptions of high
consensus. For example, Weaver, Garcia, Schwarz, and Miller
(2007) exposed participants to multiple iterations of the same
statement, provided by the same communicator. When later
asked to estimate how widely the conveyed belief is shared,
participants estimated consensus to be greater the more often
they had read the identical statement from the same, single
source. In a very real sense, a single repetitive voice can sound
like a chorus.
Social-consensus information is particularly powerful
when it pertains to one’s reference group (for a review, see
Krech, Crutchfield, & Ballachey, 1962). As already noted, this
renders repetition in the echo chambers of social-media net-
works particularly influential. One possible consequence of
such repetition is pluralistic ignorance, or a divergence
between the actual prevalence of a belief in a society and what
people in that society think others believe. For example, in the
lead-up to the invasion of Iraq in 2003, voices that advocated
unilateral military action were given prominence in the Ameri-
can media, which caused the large majority of citizens who
actually wanted the U.S. to engage multilaterally, in concert
with other nations, to feel that they were in the minority
(Leviston & Walker, 2011; Todorov & Mandisodza, 2004).
Conversely, the minority of citizens who advocated unilateral
action incorrectly felt that they were in the majority (this false-
consensus effect is the flip side of pluralistic ignorance).
The extent of pluralistic ignorance (or of the false-consensus
effect) can be quite striking: In Australia, people with particu-
larly negative attitudes toward Aboriginal Australians or asy-
lum seekers have been found to overestimate public support
for their attitudes by 67% and 80%, respectively (Pedersen,
Griffiths, & Watt, 2008). Specifically, although only 1.8% of
people in a sample of Australians were found to hold strongly
negative attitudes toward Aboriginals, those few individuals
thought that 69% of all Australians (and 79% of their friends)
shared their fringe beliefs. This represents an extreme case of
the false-consensus effect.
Perceived social consensus can serve to solidify and main-
tain belief in misinformation. But how do the processes we
have reviewed affect people’s ability to correct misinforma-
tion? From the perspective of truth assessment, corrections
involve a competition between the perceived truth value of
misinformation and correct information. In the ideal case, cor-
rections undermine the perceived truth of misinformation and
enhance the acceptance of correct information. But as we dis-
cuss in the next section, corrections often fail to work as
expected. It is this failure of corrections, known as the contin-
ued influence effect (H. M. Johnson & Seifert, 1994), that con-
stitutes the central conundrum in research on misinformation.
The Continued Influence Effect: Retractions
Fail to Eliminate the Influence of
Misinformation
We first consider the cognitive parameters of credible retrac-
tions in neutral scenarios, in which people have no inherent
reason or motivation to believe one version of events over
another. Research on this topic was stimulated by a paradigm
pioneered by Wilkes and Leatherbarrow (1988) and H. M.
Johnson and Seifert (1994). In it, people are presented with a
fictitious report about an event unfolding over time. The report
contains a target piece of information: For some readers, this
target information is subsequently retracted, whereas for read-
ers in a control condition, no correction occurs. Participants’
understanding of the event is then assessed with a question-
naire, and the number of clear and uncontroverted references
to the target (mis-)information in their responses is tallied.
A stimulus narrative commonly used in this paradigm
involves a warehouse fire that is initially thought to have been
caused by gas cylinders and oil paints that were negligently
stored in a closet (e.g., Ecker, Lewandowsky, Swire, & Chang,
2011; H. M. Johnson & Seifert, 1994; Wilkes & Leatherbarrow,
1988). Some participants are then presented with a retraction,
such as “the closet was actually empty.” A comprehension test
follows, and participants’ number of references to the gas and
paint in response to indirect inference questions about the
event (e.g., “What caused the black smoke?”) is counted. In
addition, participants are asked to recall some basic facts about
the event and to indicate whether they noticed any retraction.
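To make the dependent measure concrete, the following is a minimal, purely illustrative sketch in Python of the kind of tally described above. The answers, the target terms, and the keyword heuristic are hypothetical stand-ins; in the studies themselves, open-ended responses are scored by trained coders for clear and uncontroverted references to the retracted information.

# Minimal illustrative sketch (hypothetical data and scoring heuristic, not the
# coding scheme used in the cited studies): tally uncontroverted references to
# the retracted target information in open-ended answers from a no-retraction
# control group and a retraction group.

TARGET_TERMS = ("gas", "cylinder", "paint")        # stand-in for the target misinformation
QUALIFIERS = ("although", "not", "empty", "never") # crude stand-in for human judgment of
                                                   # whether a reference is controverted

def count_uncontroverted_references(answers):
    """Count answers that mention the target information without qualifying it."""
    count = 0
    for answer in answers:
        text = answer.lower()
        mentions_target = any(term in text for term in TARGET_TERMS)
        is_controverted = any(q in text for q in QUALIFIERS)
        if mentions_target and not is_controverted:
            count += 1
    return count

# Hypothetical answers to indirect inference questions such as
# "What caused the black smoke?"
control_group = [
    "The burning oil paint produced the black smoke.",
    "The gas cylinders that were stored in the closet.",
]
retraction_group = [
    "The gas cylinders in the closet.",
    "Maybe an electrical fault; the closet was actually empty.",
]

# In line with the continued influence effect, a retraction typically reduces
# but does not eliminate such references (here: 2 versus 1).
print("No retraction:", count_uncontroverted_references(control_group))    # 2
print("Retraction:   ", count_uncontroverted_references(retraction_group)) # 1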
Research using this paradigm has consistently found that
retractions rarely, if ever, have the intended effect of eliminat-
ing reliance on misinformation, even when people believe,
understand, and later remember the retraction (e.g., Ecker,
Lewandowsky, & Apai, 2011; Ecker, Lewandowsky, Swire, &
Chang, 2011; Ecker, Lewandowsky, & Tang, 2010; Fein,
McCloskey, & Tomlinson, 1997; Gilbert, Krull, & Malone,
1990; Gilbert, Tafarodi, & Malone, 1993; H. M. Johnson
& Seifert, 1994, 1998, 1999; Schul & Mazursky, 1990;
van Oostendorp, 1996; van Oostendorp & Bonebakker, 1999;
Wilkes & Leatherbarrow, 1988; Wilkes & Reynolds, 1999). In
fact, a retraction will at most halve the number of references to
misinformation, even when people acknowledge and demon-
strably remember the retraction (Ecker, Lewandowsky, &
Apai, 2011; Ecker, Lewandowsky, Swire, & Chang, 2011); in
some studies, a retraction did not reduce reliance on misinfor-
mation at all (e.g., H. M. Johnson & Seifert, 1994).
When misinformation is presented through media sources,
the remedy is the presentation of a correction, often in a tem-
porally disjointed format (e.g., if an error appears in a newspa-
per, the correction will be printed in a subsequent edition). In
laboratory studies, misinformation is often retracted immedi-
ately and within the same narrative (H. M. Johnson & Seifert,
1994). Despite this temporal and contextual proximity to the
misinformation, retractions are ineffective. More recent stud-
ies (Seifert, 2002) have examined whether clarifying the cor-
rection (minimizing misunderstanding) might reduce the
continued influence effect. In these studies, the correction was
thus strengthened to include the phrase “paint and gas were
never on the premises.” Results showed that this enhanced
negation of the presence of flammable materials backfired,
making people even more likely to rely on the misinformation
in their responses. Other additions to the correction were
found to mitigate to a degree, but not eliminate, the continued
influence effect: For example, when participants were given a
rationale for how the misinformation originated, such as, “a
truckers’ strike prevented the expected delivery of the items,”
they were somewhat less likely to make references to it. Even
so, the influence of the misinformation could still be detected.
The wealth of studies on this phenomenon has documented its pervasive effects, showing that it is extremely difficult to return the beliefs of people who have been exposed to misinformation to a baseline similar to that of people who were never exposed to it.
Multiple explanations have been proposed for the contin-
ued influence effect. We summarize their key assumptions
next.
Mental models
One explanation for the continued influence effect assumes
that people build mental models of unfolding events (H. M.
Johnson & Seifert, 1994; van Oostendorp & Bonebakker,
1999; Wilkes & Leatherbarrow, 1988). In this view, factor A
(e.g., negligence) led to factor B (e.g., the improper storage of
flammable materials), and factor B in conjunction with factor
C (e.g., an electrical fault) caused outcome X (e.g., the fire) to
happen. If a retraction invalidates a central piece of informa-
tion (e.g., factor B, the presence of gas and paint), people will
be left with a gap in their model of the event and an event
representation that just “doesn’t make sense” unless they
maintain the false assertion. Therefore, when questioned about
the event, a person may still rely on the retracted misinforma-
tion to respond (e.g., answering “The gas cylinders” when
asked “What caused the explosions?”), despite demonstrating
awareness of the correction when asked about it directly. Con-
sistent with the mental-model notion, misinformation becomes
particularly resilient to correction when people are asked to
generate an explanation for why the misinformation might be
true (Anderson, Lepper, & Ross, 1980). Moreover, the litera-
ture on false memory has shown that people tend to fill gaps in
episodic memory with inaccurate but congruent information if
such information is readily available from event schemata
(Gerrie, Belcher, & Garry, 2006).
Nevertheless, the continued use of discredited mental mod-
els despite explicit correction remains poorly understood. On
the one hand, people may be uncomfortable with gaps in their
knowledge of an event and hence prefer an incorrect model
over an incomplete model (Ecker, Lewandowsky, & Apai,
2011; Ecker et al., 2010; H. M. Johnson & Seifert, 1994; van
Oostendorp & Bonebakker, 1999). The conflict created by
having a plausible answer to a question readily available, but
at the same time knowing that it is wrong, may be most easily
resolved by sticking to the original idea and ignoring the
retraction.
Retrieval failure
Another explanation for the continued influence of misinfor-
mation is the failure of controlled memory processes. First,
misinformation effects could be based on source confusion or
misattribution (M. K. Johnson, Hashtroudi, & Lindsay, 1993).
People may correctly recollect a specific detail—in the case of
the story of the fire discussed earlier, they may remember that
it was assumed the fire was caused by oil and paints—but
incorrectly attribute this information to the wrong source. For
example, people could falsely recollect that this information
was contained in the final police report rather than an initial
report that was subsequently retracted.
Second, misinformation effects could be due to a failure of
strategic monitoring processes (Moscovitch & Melo, 1997).
Ayers and Reder (1998) have argued that both valid and invalid
memory entries compete for automatic activation, but that
contextual integration requires strategic processing. In other
words, it is reasonable to assume that a piece of misinforma-
tion that supplies a plausible account of an event will be acti-
vated when a person is questioned about the event. A strategic
monitoring process is then required to determine the validity
of this automatically retrieved piece of information. This may
be the same monitoring process involved in source attribution,
whereby people decide whether a memory is valid and placed in the correct encoding context, or whether it was received from a reliable source (Henkel & Mattson, 2011).
Third, there is some evidence that processing retractions
can be likened to attaching a “negation tag” to a memory entry
(e.g., “there were oil paints and gas cylinders—NOT”; Gilbert
et al., 1990; H. M. Johnson & Seifert, 1998). H. M. Johnson
and Seifert (1998) showed that the automatic activation of
misinformation in memory continues whenever it is referred
to, even after a clear correction. For example, after reading,
“John played hockey for New York. Actually, he played for
Boston,” reading “the team” results in the activation of both
cities in memory. The negation tag on the information can be
lost, especially when strategic memory processing is impaired,
as it can be in old age (E. A. Wilson & Park, 2008) or under
high cognitive load (Gilbert et al., 1990). From this perspec-
tive, negations should be more successful when they can be
encoded as an affirmation of an alternative attribute (Mayo,
Schul, & Burnstein, 2004). Mayo and her colleagues (2004)
found support for this possibility in the domain of person per-
ception. For example, the information that Jim is “not messy”
allows an affirmative encoding, “Jim is tidy,” incorporating
the polar opposite of “messy”; in contrast, learning that Jim is
“not charismatic” does not offer an alternative encoding
because of the unipolar nature of the trait “charismatic.”
Accordingly, Mayo et al. found that people were more likely
to misremember unipolar traits (e.g., remembering “not char-
ismatic” as “charismatic”) than bipolar traits (e.g., “not messy”
was rarely misremembered as “messy,” presumably because
“not messy” was recoded as “tidy” during encoding).
Fluency and familiarity
Whereas the preceding accounts focus on whether people are
more likely to recall a piece of misinformation or its correction,
a fluency approach focuses on the experience of processing the
two types of information upon later reexposure (Schwarz et al.,
2007). Without direct questions about truth values, people may
rely on their metacognitive experience of fluency during think-
ing about an event to assess plausibility of their thoughts, a pro-
cess that would give well-formed, coherent models an
advantage—as long as thoughts flow smoothly, people may see
little reason to question their veracity (Schwarz et al., 2007).
From this perspective, misinformation can exert an influence by
increasing the perceived familiarity and coherence of related
material encountered later in time. As a result, retractions may
fail, or even backfire (i.e., by entrenching the initial misinfor-
mation), if they directly or indirectly repeat false information in
order to correct it, thus further enhancing its familiarity.
For example, correcting an earlier account by explaining
that there were no oil paints and gas cylinders present requires
the repetition of the idea that “paints and gas were present.”
Generally, repetition of information strengthens that informa-
tion in memory and thus strengthens belief in it, simply
because the repeated information seems more familiar or is
associated with different contexts that can serve as later
retrieval cues (Allport & Lepkin, 1945; Eakin, Schreiber, &
Sergent-Marshall, 2003; Ecker, Lewandowsky, Swire, &
Chang, 2011; Henkel & Mattson, 2011; Mitchell & Zaragoza,
1996; Schul & Mazursky, 1990; Verkoeijen, Rikers, &
Schmidt, 2004; Zaragoza & Mitchell, 1996). It follows that
when people later reencounter the misinformation (e.g., “oil
paints and gas cylinders were present”), it may be more famil-
iar to them than it would have been without the retraction,
leading them to think, “I’ve heard that before, so there’s prob-
ably something to it.” This impairs the effectiveness of public-
information campaigns intended to correct misinformation
(Schwarz et al., 2007).
A common format for such campaigns is a “myth versus
fact” approach that juxtaposes a given piece of false informa-
tion with a pertinent fact. For example, the U.S. Centers for
Disease Control and Prevention offer patient handouts that
counter an erroneous health-related belief (e.g., “The side
effects of flu vaccination are worse than the flu”) with relevant
facts (e.g., “Side effects of flu vaccination are rare and mild”).
When recipients are tested immediately after reading such
hand-outs, they correctly distinguish between myths and facts,
and report behavioral intentions that are consistent with the
information provided (e.g., an intention to get vaccinated).
However, a short delay is sufficient to reverse this effect: After
a mere 30 minutes, readers of the handouts identify more
“myths” as “facts” than do people who never received a hand-
out to begin with (Schwarz et al., 2007). Moreover, people’s
behavioral intentions are consistent with this confusion: They
report fewer vaccination intentions than people who were not
exposed to the handout.
Because recollective memory shows more age-related
impairment than familiarity-based memory does (Jacoby,
1999), older adults (and potentially children) are particularly
vulnerable to these backfire effects because they are more
likely to forget the details of a retraction and retain only a
sense of familiarity about it (Bastin & Van Der Linden, 2005;
Holliday, 2003; Jacoby, 1999). Hence, they are more likely to
accept a statement as true after exposure to explicit messages
that it is false (Skurnik, Yoon, Park, & Schwarz, 2005; E. A.
Wilson & Park, 2008).
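The interplay of fast-fading recollection and long-lasting familiarity that underlies this backfire effect can be illustrated with a deliberately crude sketch (our own illustration with made-up parameters, not a model from the studies cited above):

    def endorses_myth(minutes_since_handout, myth_repetitions=1,
                      recollection_half_life=20.0, familiarity_half_life=2000.0):
        """Toy dual-process sketch: a "myth vs. fact" handout leaves (a) a
        fast-decaying recollection that the myth was tagged as false and
        (b) slow-decaying familiarity that grows with each repetition of the
        myth. The myth is endorsed once residual familiarity outweighs
        residual recollection. All parameter values are arbitrary."""
        def decay(half_life):
            return 0.5 ** (minutes_since_handout / half_life)
        recollection_of_falsity = 1.0 * decay(recollection_half_life)
        familiarity_with_myth = 0.4 * myth_repetitions * decay(familiarity_half_life)
        return familiarity_with_myth > recollection_of_falsity

    print(endorses_myth(minutes_since_handout=1))   # False: myth rejected right away
    print(endorses_myth(minutes_since_handout=30))  # True: after a delay, familiarity wins

Nothing hinges on the particular numbers; the point is simply that two memory traces with different decay rates can produce correct rejection immediately after reading and mistaken endorsement after a delay.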
A similar effect has recently been reported in the very dif-
ferent field of corporate-event sponsorship. Whereas some
companies spend large amounts of money to be officially
associated with a certain event, such as the Olympic Games,
other companies try to create the impression of official affilia-
tion without any sponsorship (and hence without expenditure
on their part), a strategy known as “ambushing.” Not only is
this strategy successful in associating a brand with an event,
but attempts to publicly expose a company’s ambushing
attempt (i.e., “counter-ambushing”) may lead people to
remember the feigned brand-to-event association even better
(Humphreys et al., 2010).
Reactance
Finally, retractions can be ineffective because of social reac-
tance (Brehm & Brehm, 1981). People generally do not like to
be told what to think and how to act, so they may reject par-
ticularly authoritative retractions. For this reason, misinforma-
tion effects have received considerable research attention in a
courtroom setting where mock jurors are presented with a
piece of evidence that is later ruled inadmissible. When the
jurors are asked to disregard the tainted evidence, their convic-
tion rates are higher when an “inadmissible” ruling was
accompanied by a judge’s extensive legal explanations than
when the inadmissibility was left unexplained (Pickel, 1995;
Wolf & Montgomery, 1977). (For a review of the literature on
how jurors process inadmissible evidence, see Lieberman &
Arndt, 2000.)
Reducing the Impact of Misinformation
So far, we have shown that simply retracting a piece of infor-
mation will not stop its influence. A number of other tech-
niques for enhancing the effectiveness of retractions have been
explored, but many have proven unsuccessful. Examples
include enhancing the clarity of the retraction (Seifert, 2002;
van Oostendorp, 1996) and presenting the retraction immedi-
ately after the misinformation to prevent inferences based on
it before correction occurs (H. M. Johnson & Seifert, 1994;
Wilkes & Reynolds, 1999).
To date, only three factors have been identified that can
increase the effectiveness of retractions: (a) warnings at the
time of the initial exposure to misinformation, (b) repetition of
the retraction, and (c) corrections that tell an alternative story
that fills the coherence gap otherwise left by the retraction.
Preexposure warnings
Misinformation effects can be reduced if people are explicitly
warned up front that information they are about to be given
may be misleading (Chambers & Zaragoza, 2001; Ecker et al.,
2010; Jou & Foreman, 2007; Schul, 1993). Ecker et al. (2010)
found, however, that to be effective, such warnings need to
specifically explain the ongoing effects of misinformation
rather than just generally mention that misinformation may be
present (as in Marsh & Fazio, 2006). This result has obvious
application: In any situation in which people are likely to
encounter misinformation—for example, in advertising, in
fiction that incorporates historical or pseudoscientific infor-
mation, or in court settings, where jurors often hear informa-
tion they are later asked to disregard—warnings could be
given routinely to help reduce reliance on misinformation.
Warnings seem to be more effective when they are admin-
istered before the misinformation is encoded rather than after
(Chambers & Zaragoza, 2001; Ecker et al., 2010; Schul,
1993). This can be understood in terms of Gricean maxims
about communication (Grice, 1975): People by default expect
the information presented to be valid, but an a priori warning
can change that expectation. Such a warning would allow
recipients to monitor the encoded input and “tag” it as suspect.
Consistent with this notion, Schul (1993) found that people
took longer to process misinformation when they had been
warned about it, which suggests that, rather than quickly dis-
missing false information, people took care to consider the
misinformation within an alternative mental model. Warnings
may induce a temporary state of skepticism, which may maxi-
mize people’s ability to discriminate between true and false
information. Later in this article, we return to the issue of
skepticism and show how it can facilitate the detection of
misinformation.
The fact that warnings are still somewhat effective after mis-
information is encoded supports a dual-process view of misin-
formation retrieval, which assumes that a strategic monitoring
process can be used to assess the validity of automatically
retrieved pieces of misinformation (Ecker et al., 2010). Because
this monitoring requires effort and cognitive resources, warn-
ings may be effective in prompting recipients of information to
be vigilant.
Repeated retractions
The success of retractions can also be enhanced if they are
repeated or otherwise strengthened. Ecker, Lewandowsky,
Swire, and Chang (2011) found that if misinformation was
encoded repeatedly, repeating the retraction helped alleviate
(but did not eliminate) misinformation effects. However, mis-
information that was encoded only once persisted to the same
extent whether one retraction or three retractions were given.
This means that even after only weak encoding, misinforma-
tion effects are extremely hard to eliminate or drive below a
certain level of irreducible persistence, irrespective of the
strength of subsequent retractions.
There are a number of reasons why this could be the case.
First, some misinformation effects may arise from automatic
processing, which can be counteracted by strategic control
processes only to the extent that people are aware of the auto-
matic influence of misinformation on their reasoning (cf. T. D.
Wilson & Brekke, 1994). Second, inferences based on misin-
formation may rely on a sample of the memory representations
of that misinformation, and each of these representations may
be offset (thereby having its impact reduced, but not elimi-
nated) by only one retraction. Once a memory token has been
associated with a “retracted” marker, further retractions do not
appear to strengthen that marker; therefore, repeated retrac-
tions do not further reduce reliance on weakly encoded misin-
formation because weak encoding means only a single
representation is created, whereas the multiple representations
that arise with strong encoding can benefit from strong (i.e.,
multiple) retractions. (For a computational implementation of
this sampling model, see Ecker, Lewandowsky, Swire,
& Chang, 2011.) Finally, the repetition of corrections may
ironically decrease their effectiveness. On the one hand, some
evidence suggests a “protest-too-much” effect, whereby over-
exerting a correction may reduce confidence in its veracity
(Bush, Johnson, & Seifert, 1994). On the other hand, as noted
above, corrections may paradoxically enhance the impact of
misinformation by repeating it in retractions (e.g., Schwarz
et al., 2007).
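The token-sampling account sketched above can be illustrated with a minimal simulation (a toy sketch of our own, not the computational implementation reported by Ecker, Lewandowsky, Swire, and Chang, 2011; the residual-impact value and token counts are illustrative assumptions):

    import random

    def simulated_reliance(n_encodings, n_retractions, residual=0.4, n_samples=10000):
        """Each encoding of the misinformation creates one memory token; each
        retraction attaches a 'retracted' marker to at most one unmarked token,
        and further retractions of an already-marked token add nothing. An
        inference samples tokens at random; a marked token has its impact
        reduced (to `residual`) but not eliminated."""
        marked = [i < n_retractions for i in range(n_encodings)]
        impacts = [residual if m else 1.0 for m in marked]
        return sum(random.choice(impacts) for _ in range(n_samples)) / n_samples

    print(simulated_reliance(n_encodings=1, n_retractions=1))  # ~0.4
    print(simulated_reliance(n_encodings=1, n_retractions=3))  # ~0.4 (extra retractions add nothing)
    print(simulated_reliance(n_encodings=3, n_retractions=1))  # ~0.8 (two tokens remain unmarked)
    print(simulated_reliance(n_encodings=3, n_retractions=3))  # ~0.4 (repeated retraction helps)

This reproduces the qualitative pattern that repeated retractions reduce reliance only when the misinformation was itself encoded repeatedly.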
Whatever the underlying cognitive mechanism, the findings
of Ecker, Lewandowsky, Swire, and Chang (2011) suggest that
the repetition of initial misinformation has a stronger and more
reliable (negative) effect on subsequent inferences than the rep-
etition of its retraction does. This asymmetry in repetition effects
is particularly unfortunate in the domain of social networking
media, which allow information to be disseminated quickly,
widely, and without much fact-checking, and to be taken only
from sources consonant with particular worldviews.
Filling the gap: Providing an
alternative narrative
We noted earlier that retractions can cause a coherence gap in
the recipient’s understanding of an event. Given that internal
coherence plays a key role in truth assessments (Johnson-
Laird, 2012; Pennington & Hastie, 1993), the resulting gap
may motivate reliance on misinformation in spite of a retrac-
tion (e.g., “It wasn’t the oil and gas, but what else could it
be?”). Providing an alternative causal explanation of the event
can fill the gap left behind by retracting misinformation. Stud-
ies have shown that the continued influence of misinformation
can be eliminated through the provision of an alternative
account that explains why the information was incorrect (e.g.,
“There were no gas cylinders and oil paints, but arson materi-
als have been found”; “The initial suspect may not be guilty, as
there is an alternative suspect”; H. M. Johnson & Seifert,
1994; Tenney, Cleary, & Spellman, 2009).
To successfully replace the misinformation, the alternative
explanation provided by the correction must be plausible,
account for the important causal qualities in the initial report,
and, ideally, explain why the misinformation was thought to
be correct in the first place (e.g., Rapp & Kendeou, 2007;
Schul & Mazursky, 1990; Seifert, 2002). For example, noting
that the suspected WMD sites in Iraq were actually grain silos
would not explain why the initial report that they housed
WMDs occurred, so this alternative might be ineffective. An
alternative will be more compelling if it covers the causal
bases of the initial report. For example, an account might state
that a suspected WMD site was actually a chemical factory,
which would be more plausible because a chemical factory—
unlike a grain silo—may contain components that also occur
in WMDs (cf. H. M. Johnson & Seifert, 1994). A correction
may also be more likely to be accepted if it accounts for why
the initial incorrect information was offered—for example, by
stating that WMDs had been present in Iraq, but were destroyed
before 2003.
Corrections can be particularly successful if they explain
the motivation behind an incorrect report. For example, one
might argue that the initial reports of WMDs facilitated the
U.S. government’s intention to invade Iraq, so the misinforma-
tion was offered without sufficient evidence (i.e., government
officials were “trigger-happy”; cf. Lewandowsky, Stritzke,
Oberauer, & Morales, 2005, 2009). Drawing attention to a
source’s motivation can undermine the impact of misinforma-
tion. For example, Governor Ronald Reagan defused Presi-
dent Jimmy Carter’s attack on his Medicare policies in a 1980
U.S. presidential debate by stating, “There you go again!”; by
framing information as what would be “expected” from its
source, Reagan discredited it (Cialdini, 2001).
Some boundary conditions apply to the alternative-account
technique. The mere mention, or self-generation, of alternative
ideas is insufficient to reduce reliance on misinformation
(H. M. Johnson & Seifert, 1994, 1999; Seifert, 2002). That is,
the alternative must be integrated into the existing information
from the same source.
Also, people generally prefer simple explanations over
complex explanations (Chater & Vitanyi, 2003; Lombrozo,
2006, 2007). When misinformation is corrected with an alter-
native, but much more complex, explanation, people may
reject it in favor of a simpler account that maintains the misin-
formation. Hence, providing too many counterarguments, or
asking people to generate many counterarguments, can poten-
tially backfire (Sanna, Schwarz, & Stocker, 2002; Schwarz
et al., 2007). This “overkill” backfire effect can be avoided by
asking people to generate only a few arguments regarding why
their belief may be wrong; in this case, the self-generation of
the counterarguments can assist debiasing (Sanna & Schwarz,
2006). Moreover, suspicion about the rationale behind the cor-
rection, as well as for the rationale behind the initial presenta-
tion of the misinformation, may be particularly important in
the case of corrections of political misinformation. Specific
motivations likely underlie politicians’ explanations for
events, so people may place more suspicion on alternative
explanations from these sources.
In summary, the continued influence of misinformation can
be reduced with three established techniques: (a) People can
be warned about the potentially misleading nature of forth-
coming information before it is presented; (b) corrections can
be repeated to strengthen their efficacy; and (c) corrections
can be accompanied by alternative explanations for the event
in question, thus preventing causal gaps in the account. The
last technique is particularly effective; however, it is not
always possible, because an alternative explanation may not
be available when an initial report is found to be in error. In
addition, further complications arise when corrections of mis-
information challenge the recipients’ worldview more broadly,
as we discuss in the following section.
Corrections in the Face of Existing Belief
Systems: Worldview and Skepticism
Recipients’ individual characteristics play an important role in
determining whether misinformation continues to exert an
influence. Here, we address two such characteristics—namely,
worldview and level of skepticism—that exert opposing
effects on the efficacy of corrections.
Worldview
Given that people more readily accept statements that are con-
sistent with their beliefs, it is not surprising that people’s
worldview, or personal ideology, plays a key role in the persis-
tence of misinformation. For example, Republicans are more
likely than Democrats to continue to believe the “birthers” and
to accept claims about the presence of WMDs in Iraq despite
retractions (Kull et al., 2003; Travis, 2010). At the opposite
end of the political spectrum, liberals are less accurate than
conservatives when it comes to judging the consequences of
higher oil prices. In particular, whereas experts foresee consid-
erable future risks to human health and society arising from
“peak oil” (Schwartz, Parker, Hess, & Frumkin, 2011), sur-
veys have shown that liberals are less likely than conservatives
to recognize the magnitude of these risks (Nisbet, Maibach, &
Leiserowitz, 2011).2
From this real-world survey research, we know that peo-
ple’s preexisting attitudes often determine their level of belief
in misinformation after it has been retracted. What is less well
understood is whether retractions (a) fail to reduce reliance on
misinformation specifically among people for whom the
retraction violates personal belief or (b) are equally effective
for all people, with observed post-retraction differences in
belief only mirroring pre-retraction differences. Both possi-
bilities are consistent with the literature on truth assessments
discussed earlier. Compared with worldview-congruent retrac-
tions, retractions that contradict one’s worldview are inconsis-
tent with other beliefs, less familiar, more difficult to process,
less coherent, less supported in one’s social network, and more
likely to be viewed as coming from an untrustworthy source.
All of these factors may undermine the apparent truth value of
a retraction that challenges one’s belief system. Conversely,
misinformation consistent with one’s worldview fits with
other beliefs, and is therefore more familiar, easier to process,
more coherent, more supported in one’s network, and more
likely to be viewed as coming from a trusted source. Accord-
ingly, worldview-based differences in the effectiveness of
retractions may reflect the differential appeal of the misinfor-
mation, the retraction, or both. The evidence concerning these
distinctions is sparse and mixed.
In one study, people with high and low levels of racial prej-
udice were presented with a narrative about a robbery involv-
ing an indigenous Australian who was either the suspect of
a crime (in one experiment) or a hero who prevented the
crime (in another experiment; Ecker, Lewandowsky, Fenton,
& Martin, 2012). People’s references to the racial information
covaried with their racial attitudes; that is, people who were
prejudiced mentioned the indigenous suspect more often and
the indigenous hero less often. However, this effect was found
irrespective of whether a retraction had been offered, indicat-
ing that the retraction was equally effective for low- and high-
prejudice participants. Similarly, in a study in which a fictitious
plane crash was initially attributed to a terrorist bomb before
participants received a correction clarifying that a later inves-
tigation revealed a faulty fuel tank as the cause, participants
with high levels of Islamophobia mentioned terrorism-related
material more often on a subsequent inference test than their
counterparts who scored lower on Islamophobia did, although
a retraction was equally effective for both groups (unpublished
analysis of Ecker, Lewandowsky, & Apai, 2011).
In contrast to these findings, reports from other studies
have indicated that worldviews affect how people process cor-
rective messages. In one study, retractions of nonfictitious
misperceptions (e.g., the mistaken belief that President Bush’s
tax cuts in the early 2000s had increased revenues; the idea
that there were WMDs in Iraq) were effective only among
people whose political orientation was supported by the retrac-
tion (Nyhan & Reifler, 2010). When the corrections were
worldview-dissonant (in this case, for Republican partici-
pants), a “backfire” effect was observed, such that participants
became more committed to the misinformation. Hart and
Nisbet (2011) reported a similar backfire effect using stimuli
related to climate change. In their study, people were presented
with messages highlighting the adverse effects on health
caused by climate change. Compared with a control group,
Democrats who received these messages were found to
increase their support for climate mitigation policies, whereas
support declined among Republicans.
The sway that people’s worldview holds over their percep-
tions and cognitions can be illustrated through a consideration
of some other instances of polarization. Gollust, Lantz, and
Ubel (2009) showed that even public-health messages can
have a polarizing effect along party lines: When people were
presented with evidence that Type 2 diabetes can be caused by
social circumstances (e.g., a scarcity of healthy food combined
with an abundance of junk food in poor neighborhoods), sub-
sequent endorsement of potential policy options (e.g., banning
fast-food concessions in public schools) was found to decline
among Republicans but to increase among Democrats in com-
parison with a control group that did not receive any informa-
tion about the causes of diabetes. Berinsky (2012) reported
similar polarizing effects in experiments in which the death-
panel myth surrounding President Obama’s health plan was
rebutted.
The role of personal worldview may not be limited to the
effects of misinformation regarding political issues: When
people who felt a high degree of connection with their favorite
brand were provided with negative information about the
brand, they reported reduced self-esteem but retained their
positive brand image, whereas the self-esteem of those with a
low degree of personal connection to brands remained
unchanged (Cheng, White, & Chaplin, 2011).
What boundary conditions limit the influence of one’s
worldview on one’s acceptance of corrections? The study by
Ecker, Lewandowsky, Fenton, and Martin (2012) involved fic-
titious events that contained attitude-relevant information,
whereas the studies just discussed involved real-world events
and politicians about which people likely had preexisting
opinions (Nyhan & Reifler, 2010). We therefore suggest that
worldview affects the effectiveness of a retraction when the
misinformation concerns a real-world event that relates to pre-
existing beliefs (e.g., it is harder to accept that the report of
WMDs in Iraq was false if one supported the 2003 invasion).
In confirmation of this idea, the political-science literature
contains reports of people being sensitive to factual or correc-
tive information on issues that arguably lack salience and
emotiveness (Barabas & Jerit, 2009; Blais et al., 2010; Gaines,
Kuklinski, Quirk, Peyton, & Verkuilen, 2007; for a review of
that literature, see Nyhan & Reifler, 2012). These findings
suggest that not all political issues necessarily lead to
polarization.
Making things worse: Backfire effects
From a societal view, misinformation is particularly damaging
if it concerns complex real-world issues, such as climate
change, tax policies, or the decision to go to war. The preced-
ing discussion suggests that in such real-world scenarios, peo-
ple will refer more to misinformation that is in line with their
attitudes and will be relatively immune to corrections, such
that retractions may even backfire and strengthen the initially
held beliefs (Nyhan & Reifler, 2010). This backfire effect has
been attributed to a process by which people implicitly coun-
terargue against any information that challenges their world-
view. Prasad et al. (2009) illuminated this counterarguing
process particularly strikingly by using a “challenge inter-
view” technique, asking participants to respond aloud to infor-
mation that debunked their preexisting beliefs. Participants
either came up with counterarguments or simply remained
unmovable (e.g., as illustrated by responses like “I guess we
still can have our opinions and feel that way even though they
say that”). These findings mesh well with the work on “moti-
vated skepticism” by Taber and Lodge (2006), which has
shown similar responses to challenges to political opinions (as
opposed to facts). In their study, people uncritically accepted
arguments for their own position but were highly skeptical of
opposing arguments, and they actively used counterarguments
to deride or invalidate worldview-incongruent information (as
revealed through protocol analysis).
Such backfire effects, also known as “boomerang” effects,
are not limited to the correction of misinformation but also
affect other types of communication. For example, messages
intended to promote positive health behaviors can backfire,
such that campaigns to reduce smoking may ironically lead to
an increase in smoking rates (for a review, see Byrne & Hart,
2009). In other areas of research, backfire effects have been
linked to people not only rejecting the message at hand but
also becoming predisposed to reject any future messages from
its source (Brehm & Brehm, 1981). If such generalized source distrust also arises in the context of corrections of misinformation, it would be cause for concern.
A phenomenon that is closely related to the backfire effects
arising with worldview-dissonant corrections involves belief
polarization. Belief polarization is said to occur if presentation
of the same information elicits further attitudinal divergence
between people with opposing views on an issue (Lord, Ross,
& Lepper, 1979). For example, when both religious believers
and nonbelievers were exposed to a fictitious report disprov-
ing the Biblical account of the resurrection, belief increased
among believers, whereas nonbelievers became more skepti-
cal (Batson, 1975). This increased belief among believers is
isomorphic to the worldview backfire effect in response to
corrective information.
In another example, supporters and opponents of nuclear
power reacted in opposite fashion to identical descriptions of
technological breakdowns at a nuclear plant: Whereas sup-
porters focused on the fact that the safeguards worked to pre-
vent the accident from being worse, opponents focused on the
fact that the breakdown occurred in the first place (Plous,
1991). Not unexpectedly, techniques for reducing belief polar-
ization are highly similar to techniques for overcoming world-
view-related resistance to corrections of misinformation.
Feelings of affiliation with a source also influence whether
or not one accepts a piece of information at face value. For
example, Berinsky (2012) found that among Republicans, cor-
rections of the death-panel myth were effective primarily when
they were issued by a Republican politician. However, judg-
ments of a source’s credibility are themselves a function of
beliefs: If you believe a statement, you judge its source to be
more credible (Fragale & Heath, 2004). This interaction between
belief and credibility judgments can lead to an epistemic circu-
larity, whereby no opposing information is ever judged suffi-
ciently credible to overturn dearly held prior knowledge. For
example, Munro (2010) has shown that exposure to belief-
threatening scientific evidence can lead people to discount the
scientific method itself: People would rather believe that an
issue cannot be resolved scientifically, thus discounting the evi-
dence, than accept scientific evidence in opposition to their
beliefs. Indeed, even high levels of education do not protect
against the worldview-based rejection of information; for exam-
ple, Hamilton (2011) showed that a higher level of education
made Democrats more likely to view global warming as a threat,
whereas the reverse was true for Republicans. This constitutes
an extreme case of belief polarization (see also Malka,
Krosnick, & Langer, 2009; McCright & Dunlap, 2011). Simi-
larly, among Republicans, greater education was associated
with a greater increase in the belief that President Obama was a
Muslim (he is not) between 2009 and 2010 (Sides, 2010).
Among Democrats, few held this mistaken belief, and education
did not moderate the effect.
In summary, personal beliefs can facilitate the acquisition
of attitude-consonant misinformation, increase reliance on
misinformation, and inoculate against the correction of false
beliefs (Ecker et al., 2012; Kull et al., 2003; Lewandowsky
et al., 2005, 2009; Nyhan & Reifler, 2010; Pedersen, Clarke,
Dudgeon, & Griffiths, 2005; Pedersen, Attwell, & Heveli,
2007). Interestingly, the extent to which material is emotive
does not appear to affect its persistence in memory after cor-
rection (Ecker, Lewandowsky, & Apai, 2011). For example,
after a retraction of a report about the cause of a plane crash,
people will mistakenly continue to refer to a “terrorist attack”
as the cause just as often as “bad weather” or a “technical
fault,” even when they are demonstrably more emotionally
affected by the first. Thus, people do not simply cling to the
most emotional version of an event. Although information that
challenges people’s worldview is likely to elicit an emotive
response, emotion by itself is not sufficient to alter people’s
resistance to corrections.
One limitation of this conclusion is that worldview does not
by itself serve as a process explanation. Although it is indubi-
tably useful to be able to predict a person’s response to correc-
tions on the basis of party affiliation or other indicators
of worldview, it would be helpful if the cognitive processes
underlying that link could be characterized in greater detail.
Recent advances in illuminating those links have been promis-
ing (e.g., Castelli & Carraro, 2011; Carraro, Castelli, &
Macchiella, 2011; Jost, Glaser, Kruglanski, & Sulloway,
2003b). It is possible that one’s worldview forms a frame of
reference for determining, in Piaget’s (1928) terms, whether to
assimilate information or to accommodate it. If one’s invest-
ment in a consistent worldview is strong, changing that world-
view to accommodate inconsistencies may be too costly or
effortful. In a sense, the worldview may serve as a schema for
processing related information (Bartlett, 1977/1932), such that
relevant factual information may be discarded or misinforma-
tion preserved.
Taming worldview by affirming it
The research on preexisting attitudes and worldviews implies
that debiasing messages and retractions must be tailored to
their specific audience, preferably by ensuring that the correc-
tion is consonant with the audience’s worldview. For example,
the work on “cultural cognition” by Kahan and colleagues
(e.g., Kahan, 2010) has repeatedly shown that framing solu-
tions to a problem in worldview-consonant terms can enhance
acceptance of information that would be rejected if it were
differently framed. Thus, people who might oppose nanotech-
nology because they have an “eco-centric” outlook may be
less likely to dismiss evidence of its safety if the use of
nanotechnology is presented as part of an effort to protect the
environment. Similarly, people who oppose climate science
because it challenges their worldview may do so less if the
response to climate change is presented as a business opportu-
nity for the nuclear industry (cf. Feygina, Jost, & Goldsmith,
2010). Even simple changes in wording can make information
more acceptable by rendering it less threatening to a person’s
worldview. For example, Republicans are far more likely to
accept an otherwise identical charge as a “carbon offset” than
as a “tax,” whereas the wording has little effect on Democrats
or Independents (whose values are not challenged by the word
“tax”; Hardisty, Johnson, & Weber, 2010).
Another way in which worldview-threatening messages
can be made more palatable involves coupling them with self-
affirmation—that is, by giving recipients an opportunity to
affirm their basic values as part of the correction process
(Cohen et al., 2007; Nyhan & Reifler, 2011). Self-affirmation
can be achieved by asking people to write a few sentences
about a time they felt especially good about themselves
because they acted on a value that was important to them.
Compared with people who received no affirmation, those
who self-affirmed became more receptive to messages that
otherwise might have threatened their worldviews. Self-
affirmation may give the facts a fighting chance (Cohen et al.,
2007; Nyhan & Reifler, 2011) by helping people handle chal-
lenges to their worldviews. Intriguingly, self-affirmation also
enables people who have a high personal connection to a
favorite brand to process negative information about it appro-
priately (by lowering their evaluations of the brand rather than
their own self-esteem; Cheng et al., 2011).
Factors that assist people in handling inconsistencies in
their personal perspectives may also help to promote accep-
tance of corrections. For example, distancing oneself from a
self-focused perspective has been shown to promote wise rea-
soning (Kross & Grossmann, 2012) and may be helpful in pro-
cessing corrections.
Skepticism: A key to accuracy
We have reviewed how worldview and prior beliefs can exert
a distorting influence on information processing. However,
some attitudes can also safeguard against misinformation
effects. In particular, skepticism can reduce susceptibility to
misinformation effects if it prompts people to question the ori-
gins of information that may later turn out to be false. For
example, people who questioned the official casus belli for
the invasion of Iraq (destroying WMDs) have been shown to
be more accurate in processing war-related information in
general (Lewandowsky et al., 2005). Suspicion or skepticism
about the overall context (i.e., the reasons for the war) thus led
to more accurate processing of specific information about the
event in question. Importantly, in this instance, skepticism also
ensured that correct information was recognized more accu-
rately, and thus did not translate into cynicism or a blanket
denial of all war-related information. In a courtroom setting,
Fein et al. (1997) showed that mock jurors who were asked to
disregard a piece of inadmissible evidence were still influ-
enced by the retracted evidence despite claiming they were
not—unless they were made suspicious of the motives of the
prosecutor who had introduced the evidence.
These findings mesh well with related research on trust.
Although trust plays a fundamental role in most human relation-
ships, and the presence of distrust is often corrosive (e.g., Whyte
& Crease, 2010), there are situations in which distrust can have
a positive function. For example, Schul et al. (2008) showed
that when they elicited distrust in participants by showing them
a face that had been rated as “untrustworthy” by others, the par-
ticipants were more likely to be able to solve nonroutine prob-
lems on a subsequent, completely unrelated task. By contrast,
participants in whom trust was elicited performed much better
on routine problems (but not nonroutine problems), a result sug-
gesting that distrust causes people to explore their environment
more carefully, which sensitizes them to the existence of non-
routine contingencies. Similarly, Mayer and Mussweiler (2011)
showed that priming people to be distrustful enhances their cre-
ativity in certain circumstances.
Taken together, these results suggest that a healthy sense of
skepticism or induced distrust can go a long way in avoiding
the traps of misinformation. These benefits seem to arise from
the nonroutine, “lateral” information processing that is primed
when people are skeptical or distrustful (Mayer & Mussweiler,
2011; Schul et al., 2008). However, distrust and skepticism are
most likely to exert an influence when they are experienced at
the time of message exposure, and they do not always protect
people from unreliable or intentionally misleading sources,
particularly when a source’s motivation becomes apparent
only after message encoding. Even when misinformation is
identified as intentionally deceptive (as opposed to acciden-
tally wrong) or as stemming from an unreliable source,
its effects can prevail (Green & Donahue, 2011; Henkel &
Mattson, 2011). For example, Green and Donahue (2011) first
presented people with a report that was found to change peo-
ple’s attitudes about an issue (e.g., a report about a heroin-
addicted child changed people’s attitudes toward the
effectiveness of social youth-assistance programs). Partici-
pants then received a retraction stating that the report was
inaccurate, either because of a mix-up (error condition) or
because the author had made up most of the “facts” in order
to sensationalize the report (deception condition). The results
showed that participants were motivated to undo their attitudi-
nal changes, especially in the deception condition, but that
the effects of misinformation could not be undone in either
condition. The misinformation had a continuing effect on par-
ticipants’ attitudes even after a retraction established the author
had made it up.
Using misinformation to inform
In contrast to brief interventions using the “myth-versus-fact” approach (Schwarz et al., 2007), whose adverse implications we discussed earlier, a careful and prolonged dissection of incorrect arguments appears to facilitate the acquisition of correct information. To illustrate this point, Kowalski and Taylor
(2009) conducted a naturalistic experiment in which they com-
pared a standard teaching format with an alternative approach in
which lectures explicitly refuted 17 common misconceptions
about psychology but left others unchallenged. The results
showed that direct refutation was more successful in reducing
misconceptions than was the nonrefutational provision of the
same information. On the basis of a more extensive review of
the literature, Osborne (2010) likewise argued for the centrality
of argumentation and rebuttal in science education, suggesting
that classroom studies “show improvements in conceptual
learning when students engage in argumentation” (p. 464).
Recent work has indicated that argumentation and engage-
ment with an opponent can even work in the political arena
(Jerit, 2008). Jerit’s analysis of more than 40 opinion polls ran
contrary to the conventional wisdom that to win a policy
debate, political actors should selectively highlight issues that
mobilize public opinion in favor of their position and not
engage an opponent in dialogue. Taking the argumentation and
refutation approach to an extreme, some have suggested that
even explicit misinformation can be used as an effective teach-
ing tool. Bedford (2010) reported a case study in which stu-
dents learned about climate science by studying “denialist”
literature—that is, they acquired actual knowledge by analyz-
ing material that contained misinformation in depth and by
developing the skills required to detect the flaws in the mate-
rial. In line with Osborne’s (2010) review, an in-depth discus-
sion of misinformation and its correction may assist people in
working through inconsistencies in their understanding and
promote the acceptance of corrections.
Debiasing in an Open Society
Knowledge about the processes underlying the persistence of
misinformation and about how misinformation effects can be
avoided or reduced is of obvious public interest. Today, infor-
mation is circulated at a faster pace and in greater amounts
than ever before in society, and demonstrably false beliefs
continue to find traction in sizable segments of the populace.
The development of workable debiasing and retraction tech-
niques, such as those reviewed here, is thus of considerable
practical importance.
Encouraging precedents for the effectiveness of using such
techniques on a large scale have been reported in Rwanda (e.g.,
Paluck, 2009), where a controlled, yearlong field experiment
revealed that a radio soap opera built around messages of reduc-
ing intergroup prejudice, violence, and survivors’ trauma altered
listeners’ perceptions of social norms and their behavior—albeit
not their beliefs—in comparison with a control group exposed
to a health-focused soap opera. This field study confirmed that
large-scale change can be achieved using conventional media.
(Paluck’s experiment involved delivery of the program via tape
recorders, but this was for reasons of experimental control and
convenience, and it closely mimicked the way in which radio
programs are traditionally consumed by Rwandans.)
Concise recommendations for practitioners
The literature we have reviewed thus far may appear kaleido-
scopic in its complexity. Indeed, a full assessment of the debi-
asing literature must consider numerous nuances and subtleties,
which we aimed to cover in the preceding sections. However,
it is nonetheless possible to condense the core existing knowl-
edge about debiasing into a limited set of recommendations
that can be of use to practitioners.3
We summarize the main points from the literature in Figure
1 and in the following list of recommendations:
[Figure 1 appears here as a graphic. It pairs each problem (left column) with solutions and good practice (right column):
Continued Influence Effect (despite a retraction, people continue to rely on misinformation): Alternative Account (alternative explanation fills the gap left by retracting misinformation); Preexposure Warning (warn upfront that misleading information is coming); Repeated Retraction (strengthen the retraction through repetition, without reinforcing the myth).
Familiarity Backfire Effect (repeating the myth increases familiarity, reinforcing it): Emphasis on Facts (avoid repetition of the myth; reinforce the correct facts instead).
Overkill Backfire Effect (simple myths are more cognitively attractive than complicated refutations): Simple, Brief Rebuttal (use fewer arguments in refuting the myth; less is more); Foster Healthy Skepticism (skepticism about the information source reduces the influence of misinformation).
Worldview Backfire Effect (evidence that threatens worldview can strengthen initially held beliefs): Affirm Worldview (frame evidence in a worldview-affirming manner by endorsing the values of the audience); Affirm Identity (self-affirmation of personal values increases receptivity to evidence).]
Fig. 1. A graphical summary of findings from the misinformation literature relevant to communication practitioners. The left-hand column summarizes
the cognitive problems associated with misinformation, and the right-hand column summarizes the solutions reviewed in this article.
• Consider what gaps in people’s mental event models are created by debunking and fill them using an alternative explanation.
• Use repeated retractions to reduce the influence of misinformation, but note that the risk of a backfire effect increases when the original misinformation is repeated in retractions and thereby rendered more familiar.
• To avoid making people more familiar with misinformation (and thus risking a familiarity backfire effect), emphasize the facts you wish to communicate rather than the myth.
• Provide an explicit warning before mentioning a myth, to ensure that people are cognitively on guard and less likely to be influenced by the misinformation.
• Ensure that your material is simple and brief. Use clear language and graphs where appropriate. If the myth is simpler and more compelling than your debunking, it will be cognitively more attractive, and you will risk an overkill backfire effect.
• Consider whether your content may be threatening to the worldview and values of your audience. If so, you risk a worldview backfire effect, which is strongest among those with firmly held beliefs. The most receptive people will be those who are not strongly fixed in their views.
• If you must present evidence that is threatening to the audience’s worldview, you may be able to reduce the worldview backfire effect by presenting your content in a worldview-affirming manner (e.g., by focusing on opportunities and potential benefits rather than risks and threats) and/or by encouraging self-affirmation.
• You can also circumvent the role of the audience’s worldview by focusing on behavioral techniques, such as the design of choice architectures, rather than overt debiasing.
Future Directions
Our survey of the literature has enabled us to provide a range
of recommendations and draw some reasonably strong conclu-
sions. However, our survey has also identified a range of
issues about which relatively little is known, and which
deserve future research attention. We wish to highlight three
such issues in particular—namely, the roles played by emo-
tion, individual differences (e.g., race or culture), and social
networks in misinformation effects.
Concerning emotion, we have discussed how misinforma-
tion effects arise independently of the emotiveness of the
information (Ecker, Lewandowsky, & Apai, 2011). But we
have also noted that the likelihood that people will pass on
information is based strongly on the likelihood of its eliciting
an emotional response in the recipient, rather than its truth
value (e.g., K. Peters et al., 2009), which means that the emo-
tiveness of misinformation may have an indirect effect on the
degree to which it spreads (and persists). Moreover, the effects
of worldview that we reviewed earlier in this article provide an
obvious departure point for future work on the link between
emotion and misinformation effects, because challenges to
people’s worldviews tend to elicit highly emotional defense
mechanisms (cf. E. M. Peters, Burraston, & Mertz, 2004).
Concerning individual differences, research has already
touched on how responses to the same information differ
depending on people’s personal worldviews or ideology
(Ecker et al., 2012; Kahan, 2010), but remarkably little is
known about the effects of other individual-difference vari-
ables. Intelligence, memory capacity, memory-updating abili-
ties, and tolerance for ambiguity are just a few factors that
could potentially mediate misinformation effects.
Finally, concerning social networks, we have already
pointed to the literature on the creation of cyber-ghettos (e.g.,
T. J. Johnson et al., 2009), but considerable research remains
to be done to develop a full understanding of the processes of
(mis-)information dissemination through complex social net-
works (cf. Eirinaki, Monga, & Sundaram, 2012; Scanfeld,
Scanfeld, & Larson, 2010; Young, 2011) and of the ways in
which these social networks facilitate the persistence of misin-
formation in selected segments of society.
Concluding Remarks: Psychosocial, Ethical,
and Practical Implications
We conclude by discussing how misinformation effects can be
reconciled with the notion of human rationality, before
addressing some limitations and ethical considerations sur-
rounding debiasing and point to an alternative behavioral
approach for counteracting the effects of misinformation.
Thus far, we have reviewed copious evidence about people’s
inability to update their memories in light of corrective infor-
mation and have shown how worldview can override fact and
corrections can backfire. One might be tempted to conclude
from those findings that people are somehow characteristically
irrational, or cognitively “insufficient.” We caution against that
conclusion. Jern, Chang, and Kemp (2009) presented a model
of belief polarization (which, as we noted earlier, is related to
the continued influence of misinformation) that was instanti-
ated within a Bayesian network. A Bayesian network captures
causal relations among a set of variables: In a psychological
context, it can capture the role of hidden psychological vari-
ables—for example, during belief updating. Instead of assum-
ing that people consider the likelihood that a hypothesis is
true only in light of the information presented, a Bayesian net-
work accounts for the fact that people may rely on other “hid-
den” variables, such as the degree to which they trust an
information source (e.g., peer-reviewed literature). Jern et al.
(2009) showed that when these hidden variables are taken into
account, Bayesian networks can capture behavior that at first
glance might appear irrational—such as behavior in line with
the backfire effects reviewed earlier. Although this research can
only be considered suggestive at present, people’s rejection of
corrective information may arguably represent a normatively
rational integration of prior biases with new information.
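The following sketch conveys the flavor of this argument (a simplified two-variable model of our own, not the network used by Jern et al., 2009; all prior and likelihood values are illustrative assumptions). Conditioning on a hidden trust variable allows one and the same corrective message to move two recipients’ beliefs in opposite directions:

    def belief_after_correction(prior_belief, prior_trust):
        """Joint model over H (the misinformation is true) and T (the source is
        reliable); the observed message is a correction asserting that H is false.
        A reliable source is assumed to mostly report accurately; an unreliable
        source is assumed to mostly deny claims that are actually true."""
        # P(correction | H, T); values are illustrative, not estimated
        likelihood = {
            (True, True): 0.10,    # reliable source denies a true claim: rare
            (False, True): 0.90,   # reliable source denies a false claim: common
            (True, False): 0.80,   # unreliable source denies a true claim: common
            (False, False): 0.20,  # unreliable source denies a false claim: rare
        }
        p_t = {True: prior_trust, False: 1.0 - prior_trust}
        joint_true = sum(likelihood[(True, t)] * prior_belief * p_t[t] for t in (True, False))
        joint_false = sum(likelihood[(False, t)] * (1.0 - prior_belief) * p_t[t] for t in (True, False))
        return joint_true / (joint_true + joint_false)

    # Same prior belief in the misinformation (.60), same correction, different trust:
    print(belief_after_correction(prior_belief=0.6, prior_trust=0.9))  # ~.24, belief drops
    print(belief_after_correction(prior_belief=0.6, prior_trust=0.1))  # ~.80, belief rises

In this toy model, the posterior over the trust variable shifts as well, so a recipient who strongly believes the misinformation also comes to regard the correcting source as less reliable, echoing the belief-credibility circularity discussed earlier (Fragale & Heath, 2004).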
Concerning the limitations of debiasing, there are several
ethical and practical issues to consider. First, the application of
any debiasing technique raises important ethical questions:
While it is in the public interest to ensure that the population is
well-informed, debiasing techniques can similarly be used to
further misinform people. Correcting misinformation is cogni-
tively indistinguishable from misinforming people to replace
their preexisting correct beliefs. It follows that it is important
for the general public to have a basic understanding of misin-
formation effects: Widespread awareness of the fact that peo-
ple may “throw mud” because they know it will “stick” is an
important aspect of developing a healthy sense of public skep-
ticism that will contribute to a well-informed populace.
Second, there are situations in which applying debiasing
strategies is not advisable for reasons of efficiency. In our dis-
cussion of the worldview backfire effect, we argued that debi-
asing will be more effective for people who do not hold strong
beliefs concerning the misinformation: In people who strongly
believe in a piece of misinformation for ideological reasons, a
retraction can in fact do more harm than good by ironically
strengthening the misbelief. In such cases, particularly when
the debiasing cannot be framed in a worldview-congruent
manner, debiasing may not be a good strategy.
An alternative approach for dealing with pervasive misin-
formation is thus to ignore the misinformation altogether and
seek more direct behavioral interventions. Behavioral econo-
mists have developed “nudging” techniques that can encour-
age people to make certain decisions over others, without
preventing them from making a free choice (e.g., Thaler &
Sunstein, 2008). For example, it no longer matters whether
people are misinformed about climate science if they adopt
ecologically friendly behaviors, such as by driving low-
emission vehicles, in response to “nudges,” such as tax credits.
Despite suggestions that even these nudges can be rendered
ineffective by people’s worldviews (Costa & Kahn, 2010;
Lapinski, Rimal, DeVries, & Lee, 2007), this approach has
considerable promise.
Unlike debiasing techniques, behavioral interventions
involve the explicit design of choice architectures to facilitate
a desired outcome. For example, it has been shown that organ-
donation rates in countries in which people have to “opt in” by
explicitly stating their willingness to donate hover around
15–20%, compared to over 90% in countries in which people
must “opt out” (E. J. Johnson & Goldstein, 2003). The fact that
the design process for such choice architectures can be entirely
transparent and subject to public and legislative scrutiny less-
ens any potential ethical implications.
A further advantage of the nudging approach is that its effects
are not tied to a specific delivery vehicle, which may fail to
reach target audiences. Thus, whereas debiasing requires that
the target audience receive the corrective information—a poten-
tially daunting obstacle—the design of choice architectures
automatically reaches any person who is making a relevant
choice.
We therefore see three situations in which nudging seems
particularly applicable. First, when behavior changes need to
occur quickly and across entire populations in order to prevent
negative consequences, nudging may be the strategy of choice
(cf. the Montreal Protocol to rapidly phase out CFCs to protect
the ozone layer; e.g., Gareau, 2010). Second, as discussed in
the previous section, nudging may offer an alternative to debi-
asing when ideology is likely to prevent the success of debias-
ing strategies. Finally, nudging may be the only viable option
in situations that involve organized efforts to deliberately mis-
inform people—that is, when the dissemination of misinfor-
mation is programmatic (a case we reviewed at the outset of
this article, using the examples of misinformation about
tobacco smoke and climate change).
In this context, the persistence with which vested interests
can pursue misinformation is notable: After decades of deny-
ing the link between smoking and lung cancer, the tobacco
industry’s hired experts have opened a new line of testimony
by arguing in court that even after the U.S. Surgeon General’s 1964 conclusion that tobacco was a major cause of death and injury, there was still “room for responsible disagreement”
(Proctor, 2004). Arguably, this position is intended to replace
one set of well-orchestrated misinformation—that tobacco
does not kill—with another convenient myth—that the tobacco
industry did not know it. Spreading doubts by referring to the
uncertainty of scientific conclusions—whether about smok-
ing, climate change, or GM foods—is a very popular strategy
for misinforming the populace (Oreskes & Conway, 2010).
For laypeople, the magnitude of uncertainty does not matter
much as long as it is believed to be meaningful. In addition to
investigating the cognitive mechanisms of misinformation
effects, researchers interested in misinformation would be
well advised to monitor such sociopolitical developments in
order to better understand why certain misinformation can
gain traction and persist in society.
Acknowledgments
The first two authors contributed equally to the paper.
Declaration of Conflicting Interests
The authors declared that they had no conflicts of interest with
respect to their authorship or the publication of this article.
Funding
Preparation of this paper was facilitated by Discovery Grants
DP0770666 and DP110101266 from the Australian Research Council
and by an Australian Professorial Fellowship and an Australian Post-
doctoral Fellowship to the first and second author, respectively.
Notes
1. We use the term “misinformation” here to refer to any piece of
information that is initially processed as valid but that is subsequently
retracted or corrected. This is in contrast to so-called post-event mis-
information, the literature on which has been reviewed extensively
elsewhere (e.g., Ayers & Reder, 1998; Loftus, 2005) and has focused
on the effects of suggestive and misleading information presented to
witnesses after an event.
2. There is ongoing debate about whether the effects of worldview
during information processing are more prevalent among conser-
vatives than liberals (e.g., Greenberg & Jonas, 2003; Jost, Glaser,
Kruglanski, & Sulloway, 2003a; Jost, Glaser, Kruglanski, &
Sulloway, 2003b). This debate is informative and important but not
directly relevant in this context. We are concerned with the existence
of worldview-based effects on information processing irrespective of
their partisan origin, given that misinformation effects are generic.
3. Two of the authors of this article (Cook & Lewandowsky, 2011)
have prepared a practitioner’s guide to debiasing that, in 7 pages,
summarizes the facets of the literature that are particularly relevant
to practitioners (e.g., scientists and journalists). The booklet is
available for free download in several languages (English, Dutch,
German, and French as of July 2012) at http://sks.to/debunk, and can
be considered an “executive summary” of the material in this article
for practitioners.
References
Allen, M. (2005). A novel view of global warming. Nature, 433, 198.
Allport, F. H., & Lepkin, M. (1945). Wartime rumors of waste and
special privilege: Why some people believe them. Journal of
Abnormal and Social Psychology, 40, 3–36.
Anderegg, W. R. L., Prall, J. W., Harold, J., & Schneider, S. H. (2010).
Expert credibility in climate change. Proceedings of the National
Academy of Sciences, USA, 107, 12107–12109.
Anderson, C. A., Lepper, M. R., & Ross, L. (1980). Perseverance
of social theories: The role of explanation in the persistence of
discredited information. Journal of Personality and Social Psy-
chology, 39, 1037–1049.
Armstrong, G. M., Gurol, M. N., & Russ, F. A. (1983). A longitudinal
evaluation of the Listerine corrective advertising campaign. Jour-
nal of Public Policy & Marketing, 2, 16–28.
Artz, L., & Kamalipour, Y. R. (2004). Bring ’em on: Media and poli-
tics in the Iraq war. Lanham, MD: Rowman & Littlefield.
Ayers, M. S., & Reder, L. M. (1998). A theoretical review of the mis-
information effect: Predictions from an activation-based memory
model. Psychonomic Bulletin & Review, 5, 1–21.
Barabas, J., & Jerit, J. (2009). Estimating the causal effects of media
coverage on policy-specific knowledge. American Journal of
Political Science, 53, 73–89.
Barr, A. (2011). Poll: 51 percent of GOP primary voters think Obama
born abroad. Politico. Retrieved from http://www.politico.com/
news/stories/0211/49554.html
Bartlett, F. C. (1977). Remembering: A study in experimental and
social psychology. Cambridge, England: Cambridge University
Press. (Original work published 1932)
Bastin, C., & Van Der Linden, M. (2005). Memory for temporal con-
text: Effects of ageing, encoding instructions, and retrieval strate-
gies. Memory, 13, 95–109.
Batson, C. D. (1975). Rational processing or rationalization? Effect
of disconfirming information on a stated religious belief. Journal
of Personality and Social Psychology, 32, 176–184.
Bedford, D. (2010). Agnotology as a teaching tool: Learning climate
science by studying misinformation. Journal of Geography, 109,
159–165.
Begg, I. M., Anas, A., & Farinacci, S. (1992). Dissociation of pro-
cesses in belief: Source recollection, statement familiarity, and
the illusion of truth. Journal of Experimental Psychology: Gen-
eral, 121, 446–458.
Bennett, W. L. (2003). The burglar alarm that just keeps ringing: A
response to Zaller. Political Communication, 20, 131–138.
Berger, J. (2011). Arousal increases social transmission of informa-
tion. Psychological Science, 22, 891–893.
Berinsky, A. (2012). Rumors, truths, and reality: A study of political
misinformation. Unpublished manuscript, Massachusetts Insti-
tute of Technology, Cambridge, MA.
Berland, G., Elliott, M., Morales, L., Algazy, J., Kravitz, R., Broder,
M., . . . McGlynn, E. A. (2001). Health information on the internet.
Journal of the American Medical Association, 285, 2612–2621.
Blais, A., Gidengil, E., Fournier, P., Nevitte, N., Everitt, J., &
Kim, J. (2010). Political judgments, perceptions of facts, and par-
tisan effects. Electoral Studies, 29, 1–12.
Boykoff, M. T., & Boykoff, J. M. (2004). Balance as bias: Global
warming and the US prestige press. Global Environmental
Change, 14, 125–136.
Brehm, S. S., & Brehm, J. W. (1981). Psychological reactance: A
theory of freedom and control. New York, NY: Academic Press.
Bush, J. G., Johnson, H. M., & Seifert, C. M. (1994). The implica-
tions of corrections: Then why did you mention it? In A. Ram &
K. Eiselt (Eds.), Proceedings of the 16th annual conference of the
cognitive science society (pp. 112–117). Hillsdale, NJ: Erlbaum.
Byrne, S., & Hart, P. S. (2009). The boomerang effect: A synthesis of
findings and a preliminary theoretical framework. In C. S. Beck
(Ed.), Communication yearbook (Vol. 33, pp. 3–37). Hoboken, NJ: Routledge.
Carraro, L., Castelli, L., & Macchiella, C. (2011). The automatic
conservative: Ideology-based attentional asymmetries in the
processing of valenced information. PLoS ONE, 6(11), e26456.
Retrieved from http://www.plosone.org/article/info:doi/10.1371/
journal.pone.0026456
Castelli, L., & Carraro, L. (2011). Ideology is related to basic cogni-
tive processes involved in attitude formation. Journal of Experi-
mental Social Psychology, 47, 1013–1016.
Chambers, K. L., & Zaragoza, M. S. (2001). Intended and unintended
effects of explicit warnings on eyewitness suggestibility: Evi-
dence from source identification tests. Memory & Cognition, 29,
1120–1129.
Chater, N., & Vitanyi, P. (2003). Simplicity: A unifying principle in
cognitive science. Trends in Cognitive Sciences, 7, 19–22.
Cheng, S. Y. Y., White, T. B., & Chaplin, L. N. (2011). The effects
of self-brand connections on responses to brand failure: A new
look at the consumer–brand relationship. Journal of Consumer
Psychology, 22, 280–288.
Cho, C. H., Martens, M. L., Kim, H., & Rodrigue, M. (2011).
Astroturfing global warming: It isn’t always greener on the other
side of the fence. Journal of Business Ethics, 104, 571–587.
Cialdini, R. B. (2001). Influence: Science and practice (4th ed.).
Boston, MA: Allyn & Bacon.
Clarke, C. (2008). A question of balance: The autism-vaccine contro-
versy in the British and American elite press. Science Communi-
cation, 30, 77–107.
Cohen, G. L., Bastardi, A., Sherman, D. K., Hsu, L., McGoey, M.,
& Ross, L. (2007). Bridging the partisan divide: Self-affirmation
reduces ideological closed-mindedness and inflexibility in
negotiation. Journal of Personality and Social Psychology, 93,
415–430.
Colgrove, J., & Bayer, R. (2005). Could it happen here? Vaccine risk
controversies and the specter of derailment. Health Affairs, 24,
729–739.
Cook, J., & Lewandowsky, S. (2011). The debunking handbook.
Retrieved from http://www.skepticalscience.com/docs/Debunking_Handbook.pdf
Costa, D. L., & Kahn, M. E. (2010). Energy conservation “nudges”
and environmentalist ideology: Evidence from a randomized
residential electricity field experiment (NBER Working Paper
No. 15939). Cambridge, MA: National Bureau of Economic
Research.
Cotter, E. M. (2008). Influence of emotional content and perceived
relevance on spread of urban legends: A pilot study. Psychologi-
cal Reports, 102, 623–629.
De Neys, W., Cromheeke, S., & Osman, M. (2011). Biased but in
doubt: Conflict and decision confidence. PLoS ONE, 6, e15954.
Retrieved from http://www.plosone.org/article/info:doi/10.1371/
journal.pone.0015954
Doran, P. T., & Zimmerman, M. K. (2009). Examining the scientific
consensus on climate change. Eos, 90, 21–22.
Eagly, A. H., & Chaiken, S. (1993). The psychology of attitudes. Fort
Worth, TX: Harcourt Brace Jovanovich.
Eakin, D. K., Schreiber, T. A., & Sergent-Marshall, S. (2003). Mis-
information effects in eyewitness memory: The presence and
absence of memory impairment as a function of warning and mis-
information accessibility. Journal of Experimental Psychology:
Learning, Memory, and Cognition, 29, 813–825.
Ecker, U. K. H., Lewandowsky, S., & Apai, J. (2011). Terrorists
brought down the plane! —No, actually it was a technical fault:
Processing corrections of emotive information. Quarterly Jour-
nal of Experimental Psychology, 64, 283–310.
Ecker, U. K. H., Lewandowsky, S., Fenton, O., & Martin, K. (2012).
Pre-existing attitudes and the continued influence of misinforma-
tion. Unpublished manuscript, University of Western Australia,
Perth.
Ecker, U. K. H., Lewandowsky, S., Swire, B., & Chang, D. (2011).
Correcting false information in memory: Manipulating the
strength of misinformation encoding and its retraction. Psycho-
nomic Bulletin & Review, 18, 570–578.
Ecker, U. K. H., Lewandowsky, S., & Tang, D. T. W. (2010). Explicit
warnings reduce but do not eliminate the continued influence of
misinformation. Memory & Cognition, 38, 1087–1100.
Einsele, A. (2007). The gap between science and perception: The
case of plant biotechnology in Europe. Advances in Biochemical
Engineering/Biotechnology, 107, 1–11.
Eirinaki, M., Monga, S. P. S., & Sundaram, S. (2012). Identification
of influential social networkers. International Journal of Web
Based Communities, 8, 136–158.
Eslick, A. N., Fazio, L. K., & Marsh, E. J. (2011). Ironic effects of
drawing attention to story errors. Memory, 19, 184–191.
Fein, S., McCloskey, A. L., & Tomlinson, T. M. (1997). Can the
jury disregard that information? The use of suspicion to reduce
the prejudicial effects of pretrial publicity and inadmissible tes-
timony. Personality and Social Psychology Bulletin, 23, 1215–
1226.
Festinger, L. (1954). A theory of social comparison processes. Human
Relations, 7, 123–146.
Festinger, L. (1957). A theory of cognitive dissonance. Evanston, IL:
Row, Peterson.
Feygina, I., Jost, J. T., & Goldsmith, R. E. (2010). System justifica-
tion, the denial of global warming, and the possibility of “system-
sanctioned change.” Personality and Social Psychology Bulletin,
36, 326–338.
Fox, S., & Jones, S. (2009). The social life of health information.
Retrieved from http://www.pewinternet.org/reports/2009/8-the-
social-life-of-health-information.aspx
Fragale, A. R., & Heath, C. (2004). Evolving informational creden-
tials: The (mis)attribution of believable facts to credible sources.
Personality and Social Psychology Bulletin, 30, 225–236.
Gaines, B. J., Kuklinski, J. H., Quirk, P. J., Peyton, B., & Verkuilen, J.
(2007). Same facts, different interpretations: Partisan motivation
and opinion on Iraq. Journal of Politics, 69, 957–974.
Gareau, B. J. (2010). A critical review of the successful CFC phase-
out versus the delayed methyl bromide phase-out in the Montreal
Protocol. International Environmental Agreements: Politics, Law
and Economics, 10, 209–231.
Gaskell, G., Allum, N., Bauer, M., Jackson, J., Howard, S., &
Lindsey, N. (2003). Climate change for biotechnology? UK pub-
lic opinion 1991-2002. AgBioForum, 6, 55–67.
Gerrie, M. P., Belcher, L. E., & Garry, M. (2006). “Mind the gap”:
False memories for missing aspects of an event. Applied Cogni-
tive Psychology, 20, 689–696.
Gilbert, D. T. (1991). How mental systems believe. American Psy-
chologist, 46, 107–119.
Gilbert, D. T., Krull, D., & Malone, P. (1990). Unbelieving the unbe-
lievable: Some problems in the rejection of false information.
Journal of Personality and Social Psychology, 59, 601–613.
Gilbert, D. T., Tafarodi, R. W., & Malone, P. S. (1993). You can’t not
believe everything you read. Journal of Personality and Social
Psychology, 65, 221–233.
Glaeser, E. L., Ponzetto, G. A. M., & Shapiro, J. M. (2005). Strategic
extremism: Why Republicans and Democrats divide on religious
values. The Quarterly Journal of Economics, 120, 1283–1330.
Glöckner, A., & Bröder, A. (2011). Processing of recognition infor-
mation and additional cues: A model-based analysis of choice,
confidence, and response time. Judgment and Decision Making,
6, 23–42.
Goldstein, D. G., & Gigerenzer, G. (2002). Models of ecological
rationality: The recognition heuristic. Psychological Review, 109,
75–90.
Gollust, S. E., Lantz, P. M., & Ubel, P. A. (2009). The polarizing
effect of news media messages about the social determinants of
health. American Journal of Public Health, 99, 2160–2167.
Green, M. C., & Donahue, J. K. (2011). Persistence of belief change
in the face of deception: The effect of factual stories revealed to
be false. Media Psychology, 14, 312–331.
Greenberg, J., & Jonas, E. (2003). Psychological motives and politi-
cal orientation—The left, the right, and the rigid: Comment on
Jost et al. (2003). Psychological Bulletin, 129, 376–382.
Grice, H. P. (1975). Logic and conversation. In P. Cole & J. L.
Morgan (Eds.), Syntax and semantics, Vol. 3: Speech acts
(pp. 41–58). New York, NY: Academic Press.
Hamilton, L. C. (2011). Education, politics and opinions about cli-
mate change evidence for interaction effects. Climatic Change,
104, 231–242.
Hardisty, D. J., Johnson, E. J., & Weber, E. U. (2010). A dirty word
or a dirty world? Attribute framing, political affiliation, and query
theory. Psychological Science, 21, 86–92.
Hargreaves, I., Lewis, J., & Speers, T. (2003). Towards a better map:
Science, the public and the media. London, England: Economic
and Social Research Council.
Hart, P. S., & Nisbet, E. C. (2011). Boomerang effects in science
communication: How motivated reasoning and identity cues
amplify opinion polarization about climate mitigation poli-
cies. Communication Research. Advance online publication.
doi:10.1177/0093650211416646
Hasher, L., Goldstein, D., & Toppino, T. (1977). Frequency and the
conference of referential validity. Journal of Verbal Learning and
Verbal Behavior, 16, 107–112.
Hasson, U., Simmons, J. P., & Todorov, A. (2005). Believe it or not:
On the possibility of suspending belief. Psychological Science,
16, 566–571.
Heath, C., Bell, C., & Sternberg, E. (2001). Emotional selection in
memes: The case of urban legends. Journal of Personality and
Social Psychology, 81, 1028–1041.
Henkel, L. A., & Mattson, M. E. (2011). Reading is believing: The
truth effect and source credibility. Consciousness and Cognition,
20, 1705–1721.
Hoggan, J., & Littlemore, R. (2009). Climate
cover-up: The crusade to deny global warming. Vancouver, BC:
Greystone Books.
Holliday, R. E. (2003). Reducing misinformation effects in children
with cognitive interviews: Dissociating recollection and familiar-
ity. Child Development, 74, 728–751.
Humphreys, M. S., Cornwell, T. B., McAlister, A. R., Kelly, S. J.,
Quinn, E. A., & Murray, K. L. (2010). Sponsorship, ambush-
ing, and counter-strategy: Effects upon memory for sponsor and
event. Journal of Experimental Psychology: Applied, 16, 96–108.
Jacoby, L. L. (1999). Ironic effects of repetition: Measuring age-
related differences in memory. Journal of Experimental Psychol-
ogy: Learning, Memory, and Cognition, 25, 3–22.
Jacoby, L. L., Kelley, C. M., Brown, J., & Jasechko, J. (1989).
Becoming famous overnight: Limits on the ability to avoid
unconscious influences of the past. Journal of Personality and
Social Psychology, 56, 326–338.
Jacques, P. J., Dunlap, R. E., & Freeman, M. (2008). The organisa-
tion of denial: Conservative think tanks and environmental scep-
ticism. Environmental Politics, 17, 349–385.
Jerit, J. (2008). Issue framing and engagement: Rhetorical strategy in
public policy debates. Political Behavior, 30, 1–24.
Jern, A., Chang, K.-m. K., & Kemp, C. (2009). Bayesian belief
polarization. In Y. Bengio, D. Schuurmans, J. Lafferty, C. K. I.
Williams, & A. Culotta (Eds.), Advances in neural information
processing systems (Vol. 22, pp. 853–861). La Jolla, CA: Neural
Information Processing Foundation.
Johnson, E. J., & Goldstein, D. (2003). Do defaults save lives? Sci-
ence, 302, 1338–1339.
Johnson, H. M., & Seifert, C. M. (1994). Sources of the continued
influence effect: When misinformation in memory affects later
inferences. Journal of Experimental Psychology: Learning,
Memory, and Cognition, 20, 1420–1436.
Johnson, H. M., & Seifert, C. M. (1998). Updating accounts follow-
ing a correction of misinformation. Journal of Experimental Psy-
chology: Learning, Memory, and Cognition, 24, 1483–1494.
Johnson, H. M., & Seifert, C. M. (1999). Modifying mental repre-
sentations: Comprehending corrections. In H. van Oostendorp &
S. R. Goldman (Eds.), The construction of mental representations
during reading (pp. 303–318). Mahwah, NJ: Erlbaum.
Johnson, M. K., Hashtroudi, S., & Lindsay, D. S. (1993). Source
monitoring. Psychological Bulletin, 114, 3–28.
Johnson, T. J., & Kaye, B. (2004). Wag the blog: How reliance on tra-
ditional media and the internet influence credibility perceptions
of weblogs among blog users. Journalism & Mass Communica-
tion Quarterly, 81, 622–642.
Johnson, T. J., & Kaye, B. (2010). Believing the blogs of war? How
blog users compare on credibility and characteristics in 2003 and
2007. Media, War & Conflict, 3, 315–333.
Johnson, T. J., Bichard, S. L., & Zhang, W. (2009). Communication
communities or ‘‘cyberghettos?’’: A path analysis model examin-
ing factors that explain selective exposure to blogs. Journal of
Computer-Mediated Communication, 15, 60–82.
Johnson-Laird, P. N. (2012). Mental models and consistency. In B.
Gawronski & F. Strack (Eds.), Cognitive consistency: A funda-
mental principle in social cognition (pp. 225–243). New York,
NY: Guilford Press.
Jost, J. T., Glaser, J., Kruglanski, A. W., & Sulloway, F. J. (2003a).
Exceptions that prove the rule—Using a theory of motivated
social cognition to account for ideological incongruities and
political anomalies: Reply to Greenberg and Jonas (2003). Psy-
chological Bulletin, 129, 383–393.
Jost, J. T., Glaser, J., Kruglanski, A. W., & Sulloway, F. J. (2003b).
Political conservatism as motivated social cognition. Psychologi-
cal Bulletin, 129, 339–375.
Jou, J., & Foreman, J. (2007). Transfer of learning in avoiding
false memory: The roles of warning, immediate feedback, and
incentive. Quarterly Journal of Experimental Psychology, 60,
877–896.
Kahan, D. M. (2010). Fixing the communications failure. Nature,
463, 296–297.
Kamalipour, Y. R., & Snow, N. E. (2004). War, media, and propa-
ganda. Oxford, England: Rowman & Littlefield.
Keelan, J., Pavri-Garcia, V., Tomlinson, G., & Wilson, K. (2007).
YouTube as a source of information on immunization: A con-
tent analysis. Journal of the American Medical Association, 298,
2482–2484.
Kowalski, P., & Taylor, A. K. (2009). The effect of refuting miscon-
ceptions in the introductory psychology class. Teaching of Psy-
chology, 36, 153–159.
Krech, D., Crutchfield, R. S., & Ballachey, E. L. (1962). Individual in
society. New York, NY: McGraw-Hill.
Kross, E., & Grossmann, I. (2012). Boosting wisdom: Distance from
the self enhances wise reasoning, attitudes, and behavior. Journal
of Experimental Psychology: General, 141, 43–48.
Kuklinski, J. H., Quirk, P. J., Jerit, J., Schwieder, D., & Rich, R. F.
(2000). Misinformation and the currency of democratic citizen-
ship. Journal of Politics, 62, 790–816.
Kull, S., Ramsay, C., & Lewis, E. (2003). Misperceptions, the media,
and the Iraq war. Political Science Quarterly, 118, 569–598.
Kull, S., Ramsay, C., Stephens, A., Weber, S., Lewis, E., & Hadfield,
J. (2006). Americans on Iraq: Three years on. Retrieved from
http://www.worldpublicopinion.org/pipa/pdf/mar06/usiraq_
mar06_rpt.pdf
Ladle, R., Jepson, P., & Whittaker, R. (2005). Scientists and the
media: The struggle for legitimacy in climate change and conser-
vation science. Interdisciplinary Science Reviews, 30, 231–240.
Lapinski, M. K., Rimal, R. N., DeVries, R., & Lee, E. L. (2007). The
role of group orientation and descriptive norms on water con-
servation attitudes and behaviors. Health Communication, 22,
133–142.
Larson, H. J., Cooper, L. Z., Eskola, J., Katz, S. L., & Ratzan, S. C.
(2011). Addressing the vaccine confidence gap. The Lancet, 378,
526–535.
Leggett, J. (2005). Dangerous fiction. New Scientist, 185(2489),
50–53.
Leiserowitz, A., Maibach, E., Roser-Renouf, C., & Hmielowski, J. D.
(2011). Politics and global warming: Democrats, Republicans,
Independents, and the Tea Party. Retrieved from http://environment.yale.edu/climate/files/politicsglobalwarming2011.pdf
Leviston, Z., & Walker, I. (2011). Second annual survey of Austra-
lian attitudes to climate change: Interim report. Retrieved from
http://www.csiro.au/outcomes/climate/adapting/annual-climate-
change-attitudes-survey-2011.aspx
Lev-Ari, S., & Keysar, B. (2010). Why don’t we believe non-native
speakers? The influence of accent on credibility. Journal of
Experimental Social Psychology, 46, 1093–1096.
Lewandowsky, S., Stritzke, W. G. K., Oberauer, K., & Morales, M.
(2005). Memory for fact, fiction, and misinformation: The Iraq
War 2003. Psychological Science, 16, 190–195.
Lewandowsky, S., Stritzke, W. G. K., Oberauer, K., & Morales, M.
(2009). Misinformation and the “War on Terror”: When memory
turns fiction into fact. In W. G. K. Stritzke, S. Lewandowsky, D.
Denemark, J. Clare, & F. Morgan (Eds.), Terrorism and torture:
An interdisciplinary perspective (pp. 179–203). Cambridge,
England: Cambridge University Press.
Lieberman, J. D., & Arndt, J. (2000). Understanding the limits of
limiting instruction: Social psychology explanations for the
failure of instructions to disregard pretrial publicity and other
inadmissible evidence. Psychology, Public Policy, and Law, 6,
677–711.
Loftus, E. F. (2005). Planting misinformation in the human mind: A
30-year investigation of the malleability of memory. Learning &
Memory, 12, 361–366.
Lombrozo, T. (2006). The structure and function of explanations.
Trends in Cognitive Sciences, 10, 464–470.
Lombrozo, T. (2007). Simplicity and probability in causal explana-
tion. Cognitive Psychology, 55, 232–257.
Lord, C. G., Ross, L., & Lepper, M. R. (1979). Biased assimilation
and attitude polarization: The effects of prior theories on subse-
quently considered evidence. Journal of Personality and Social
Psychology, 37, 2098–2109.
Malka, A., Krosnick, J. A., & Langer, G. (2009). The association of
knowledge with concern about global warming: Trusted informa-
tion sources shape public thinking. Risk Analysis, 29, 633–647.
Marsh, E. J., & Fazio, L. K. (2006). Learning errors from fiction:
Difficulties in reducing reliance on fictional stories. Memory &
Cognition, 34, 1140–1149.
Marsh, E. J., Meade, M. L., & Roediger, H. L., III. (2003). Learning
fact from fiction. Journal of Memory and Language, 49, 519–536.
Mayer, J., & Mussweiler, T. (2011). Suspicious spirits, flexible
minds: When distrust enhances creativity. Journal of Personality
and Social Psychology, 101, 1262–1277.
Mayo, R., Schul, Y., & Burnstein, E. (2004). “I am not guilty” vs. “I
am innocent”: Successful negation may depend on the schema
used for its encoding. Journal of Experimental Social Psychol-
ogy, 40, 433–449.
McCracken, B. (2011). Are new media credible? A multidimensional
approach to measuring news consumers’ credibility and bias
perceptions and the frequency of news consumption. Unpublished
doctoral dissertation, Rochester Institute of Technology, Roches-
ter, NY.
McCright, A. M. (2011). Political orientation moderates Americans’
beliefs and concern about climate change. Climatic Change, 104,
243–253.
McCright, A. M., & Dunlap, R. E. (2010). Anti-reflexivity: The
American conservative movement’s success in undermining cli-
mate science and policy. Theory, Culture & Society, 27, 100–133.
McCright, A. M., & Dunlap, R. E. (2011). The politicization of cli-
mate change and polarization in the American public’s views of
global warming, 2001–2010. The Sociological Quarterly, 52,
155–194.
McGlone, M. S., & Tofighbakhsh, J. (2000). Birds of a feather flock
conjointly (?): Rhyme as reason in aphorisms. Psychological Sci-
ence, 11, 424–428.
McGuire, W. J. (1972). Attitude change: The information process-
ing paradigm. In C. G. McClintock (Ed.), Experimental social
psychology (pp. 108–141). New York, NY: Holt, Rinehart, &
Winston.
Mielby, H., Sandøe, P., & Lassen, J. (2012). The role of scientific
knowledge in shaping public attitudes to GM technologies.
Public Understanding of Science. Advance online publication.
doi:10.1177/0963662511430577
Miles, J., Petrie, C., & Steel, M. (2000). Slimming on the internet.
Journal of the Royal Society of Medicine, 93, 254.
Mitchell, K. J., & Zaragoza, M. S. (1996). Repeated exposure to
suggestion and false memory: The role of contextual variability.
Journal of Memory and Language, 35, 246–260.
Mooney, C. (2007). An inconvenient assessment. Bulletin of the
Atomic Scientists, 63, 40–47.
Moscovitch, M., & Melo, B. (1997). Strategic retrieval and the fron-
tal lobes: Evidence from confabulation and amnesia. Neuropsy-
chologia, 35, 1017–1034.
Munro, G. D. (2010). The scientific impotence excuse: Discounting
belief-threatening scientific abstracts. Journal of Applied Social
Psychology, 40, 579–600.
Myers, M., & Pineda, D. (2009). Misinformation about vaccines. In
A. D. T. Barrett & L. Stanberry (Eds.), Vaccines for biodefense
and emerging and neglected diseases (pp. 255–270). London,
England: Academic Press.
Newell, B. R., & Fernandez, D. (2006). On the binary quality of rec-
ognition and the inconsequentiality of further knowledge: Two
critical tests of the recognition heuristic. Journal of Behavioral
Decision Making, 19, 333–346.
Nisbet, M. C., Maibach, E., & Leiserowitz, A. (2011). Framing peak
petroleum as a public health problem: Audience research and par-
ticipatory engagement in the United States. American Journal of
Public Health, 101, 1620–1626.
Nyhan, B. (2010). Why the “death panel” myth wouldn’t die: Mis-
information in the health care reform debate. The Forum, 8(1),
Article 5. doi:10.2202/1540-8884.1354
Nyhan, B. (2011). The limited effects of testimony on political per-
suasion. Public Choice, 148, 283–312.
Nyhan, B., & Reifler, J. (2010). When corrections fail: The persistence
of political misperceptions. Political Behavior, 32, 303–330.
Nyhan, B., & Reifler, J. (2011). Opening the political mind? The
effects of self-affirmation and graphical information on factual
misperceptions. Unpublished manuscript, Dartmouth College,
Hanover, NH.
Nyhan, B., & Reifler, J. (2012). Misinformation and corrections:
Research findings from social science. Unpublished manuscript,
Dartmouth College, Hanover, NH.
Oreskes, N., & Conway, E. M. (2010). Merchants of doubt. London,
England: Bloomsbury.
Osborne, J. (2010). Arguing to learn in science: The role of collabora-
tive, critical discourse. Science, 328, 463–466.
Owens, S. R. (2002). Injection of confidence: The recent controversy
in the UK has led to falling MMR vaccination rates. European
Molecular Biology Organization Reports, 3, 406–409.
Paluck, E. L. (2009). Reducing intergroup prejudice and conflict
using the media: A field experiment in Rwanda. Journal of Per-
sonality and Social Psychology, 96, 574–587.
Pandey, A., Patni, N., Singh, M., Sood, A., & Singh, G. (2010). You-
Tube as a source of information on the H1N1 influenza pandemic.
American Journal of Preventive Medicine, 38, e1–e3.
Parrott, W. (2010). Genetically modified myths and realities. New
Biotechnology, 27, 545–551.
Pedersen, A., Attwell, J., & Heveli, D. (2007). Prediction of nega-
tive attitudes toward Australian asylum seekers: False beliefs,
nationalism, and self-esteem. Australian Journal of Psychology,
57, 148–160.
Pedersen, A., Clarke, S., Dudgeon, P., & Griffiths, B. (2005). Atti-
tudes toward indigenous Australians and asylum seekers: The
role of false beliefs and other social-psychological variables.
Australian Psychologist, 40, 170–178.
Pedersen, A., Griffiths, B., & Watt, S. E. (2008). Attitudes toward
out-groups and the perception of consensus: All feet do not wear
one shoe. Journal of Community & Applied Social Psychology,
18, 543–557.
Pennington, N., & Hastie, R. (1992). Explaining the evidence: Tests
of the story model for juror decision making. Journal of Person-
ality and Social Psychology, 62, 189–206.
Pennington, N., & Hastie, R. (1993). The story model for juror deci-
sion making. In R. Hastie (Ed.), Inside the juror (pp. 192–223).
New York, NY: Cambridge University Press.
Peters, E. M., Burraston, B., & Mertz, C. K. (2004). An emotion-
based model of risk perception and stigma susceptibility: Cogni-
tive appraisals of emotion, affective reactivity, worldviews and
risk perceptions in the generation of technological stigma. Risk
Analysis, 24, 1349–1367.
Peters, K., Kashima, Y., & Clark, A. (2009). Talking about others:
Emotionality and the dissemination of social information. Euro-
pean Journal of Social Psychology, 39, 207–222.
Petrovic, M., Roberts, R., & Ramsay, M. (2001). Second dose of
measles, mumps, and rubella vaccine: Questionnaire survey of
health professionals. British Medical Journal, 322, 82–85.
Petty, R. E., & Cacioppo, J. T. (1986). The elaboration likelihood
model of persuasion. Advances in Experimental Social Psychol-
ogy, 19, 123–205.
Piaget, J. (1928). The child’s conception of the world. London, Eng-
land: Routledge and Kegan Paul.
Pickel, K. L. (1995). Inducing jurors to disregard inadmissible evi-
dence: A legal explanation does not help. Law and Human Behav-
ior, 19, 407–424.
Piper, P. (2000). Better read that again: Web hoaxes and misinforma-
tion. Searcher, 8, 40–49.
Plous, S. (1991). Biases in the assimilation of technological break-
downs—Do accidents make us safer? Journal of Applied Social
Psychology, 21, 1058–1082.
Poland, G. A., & Jacobson, R. M. (2011). The age-old struggle against
the antivaccinationists. The New England Journal of Medicine,
364, 97–99.
Poland, G. A., & Spier, R. (2010). Fear, misinformation, and innu-
merates: How the Wakefield paper, the press, and advocacy
groups damaged the public health. Vaccine, 28, 2361–2362.
Prasad, M., Perrin, A. J., Bezila, K., Hoffman, S. G., Kindleberger,
K., Manturuk, K., . . . Powers, A. S. (2009). “There must be a
reason”: Osama, Saddam, and inferred justification. Sociological
Inquiry, 79, 142–162.
Prior, M. (2003). Liberated viewers, polarized voters: The implica-
tions of increased media choice for democratic politics. The Good
Society, 11, 10–16.
Proctor, R. N. (2004). Should medical historians be working for the
tobacco industry? The Lancet, 363, 1174–1175.
Proctor, R. N. (2008). On playing the Nazi card. Tobacco Control,
17, 289–290.
Radwanick, S. (2011, December). More than 200 billion online
videos viewed globally in October. Retrieved from http://www
.comscore.com/press_events/press_releases/2011/12/more_
than_200_billion_online_videos_viewed_globally_in_october
Rampton, S., & Stauber, J. (2003). Weapons of mass deception: The uses of propaganda in Bush’s war on Iraq. New York, NY: Tarcher/Penguin.
Ramsay, C., Kull, S., Lewis, E., & Subias, S. (2010). Misinformation
and the 2010 election: A study of the US electorate. Retrieved
from http://drum.lib.umd.edu/bitstream/1903/11375/3/misinformation_dec10_quaire.pdf
Rapp, D. N., & Kendeou, P. (2007). Revisiting what readers know:
Updating text representations during narrative comprehension.
Memory & Cognition, 35, 2019–2032.
Ratzan, S. C. (2010). Editorial: Setting the record straight: Vaccines,
autism, and The Lancet. Journal of Health Communication, 15,
237–239.
Readfearn, G. (2011). A Sunrise climate cock-up and reading cat’s
paws. Retrieved from http://www.readfearn.com/2011/01/a-
sunrise-climate-cock-up-and-reading-cats-paws/
Reber, R., & Schwarz, N. (1999). Effects of perceptual fluency on
judgments of truth. Consciousness and Cognition, 8, 338–342.
Reese, S., & Lewis, S. (2009). Framing the war on terror. Journalism,
10, 777–797.
Riesch, H., & Spiegelhalter, D. J. (2011). ‘Careless pork costs lives’:
Risk stories from science to press release to media. Health, Risk
& Society, 13, 47–64.
Ross, L. (1977). The intuitive psychologist and his shortcomings:
Distortion in the attribution process. Advances in Experimental
Social Psychology, 10, 174–221.
Sanna, L. J., & Schwarz, N. (2006). Metacognitive experiences and
human judgment: The case of hindsight bias and its debiasing.
Current Directions in Psychological Science, 15, 172–176.
Sanna, L. J., Schwarz, N., & Stocker, S. L. (2002). When debias-
ing backfires: Accessible content and accessibility experiences
in debiasing hindsight. Journal of Experimental Psychology:
Learning, Memory, and Cognition, 28, 497–502.
Scanfeld, D., Scanfeld, V., & Larson, E. L. (2010). Dissemination of
health information through social networks: Twitter and antibiot-
ics. American Journal of Infection Control, 38, 182–188.
Schul, Y. (1993). When warning succeeds: The effect of warning on
success in ignoring invalid information. Journal of Experimental
Social Psychology, 29, 42–62.
Schul, Y., Mayo, R., & Burnstein, E. (2008). The value of distrust.
Journal of Experimental Social Psychology, 44, 1293–1302.
Schul, Y., & Mazursky, D. (1990). Conditions facilitating successful
discounting in consumer decision making. Journal of Consumer
Research, 16, 442–451.
Schwartz, B. S., Parker, C. L., Hess, J., & Frumkin, H. (2011). Public
health and medicine in an age of energy scarcity: The case of
petroleum. American Journal of Public Health, 101, 1560–1567.
Schwarz, N. (1994). Judgment in a social context: Biases, shortcom-
ings, and the logic of conversation. Advances in Experimental
Social Psychology, 26, 123–162.
Schwarz, N. (1996). Cognition and communication: Judgmental
biases, research methods, and the logic of conversation. Hills-
dale, NJ: Erlbaum.
Schwarz, N. (2004). Meta-cognitive experiences in consumer judg-
ment and decision making. Journal of Consumer Psychology, 14,
332–348.
Schwarz, N., Sanna, L. J., Skurnik, I., & Yoon, C. (2007). Metacog-
nitive experiences and the intricacies of setting people straight:
Implications for debiasing and public information campaigns.
Advances in Experimental Social Psychology, 39, 127–161.
Seifert, C. M. (2002). The continued influence of misinformation
in memory: What makes a correction effective? Psychology of
Learning and Motivation, 41, 265–292.
Sides, J. (2010). Why do more people think Obama is a Muslim?
Retrieved from http://voices.washingtonpost.com/ezra-klein/
2010/08/why_do_more_people_think_obama.html
Skurnik, I., Yoon, C., Park, D. C., & Schwarz, N. (2005). How warn-
ings about false claims become recommendations. Journal of
Consumer Research, 31, 713–724.
Smith, P., Bansal-Travers, M., O’Connor, R., Brown, A., Banthin,
C., & Guardino-Colket, S. (2011). Correcting over 50 years of
tobacco industry misinformation. American Journal of Preven-
tive Medicine, 40, 690–698.
Song, H., & Schwarz, N. (2008). Fluency and the detection of dis-
tortions: Low processing fluency attenuates the Moses illusion.
Social Cognition, 26, 791–799.
Sperber, D., & Wilson, D. (1986). Relevance: Communication and
cognition. Cambridge, MA: Harvard University Press.
Stroud, N. J. (2010). Polarization and partisan selective exposure.
Journal of Communication, 60, 556–576.
Taber, C. S., & Lodge, M. (2006). Motivated skepticism in the evalu-
ation of political beliefs. American Journal of Political Science,
50, 755–769.
Tenney, E. R., Cleary, H. M. D., & Spellman, B. A. (2009). Unpack-
ing the doubt in “beyond a reasonable doubt”: Plausible alterna-
tive stories increase not guilty verdicts. Basic and Applied Social
Psychology, 31, 1–8.
Thaler, R. H., & Sunstein, C. R. (2008). Nudge: Improving decisions
about health, wealth, and happiness. New Haven, CT: Yale Uni-
versity Press.
Tiffen, R. (2009). Reversed negatives: How the news media respond
to “our” atrocities. In W. G. K. Stritzke, S. Lewandowsky, D.
Denemark, J. Clare, & F. Morgan (Eds.), Terrorism and torture
(pp. 246–264). Cambridge, England: Cambridge University Press.
Todorov, A., & Mandisodza, A. N. (2004). Public opinion on foreign
policy: The multilateral public that perceives itself as unilateral.
Public Opinion Quarterly, 68, 323–348.
Topolinski, S. (2012). Nonpropositional consistency. In B. Gawronski
& F. Strack (Eds.), Cognitive consistency: A fundamental principle
in social cognition (pp. 112–131). New York, NY: Guilford Press.
Travis, S. (2010). CNN poll: Quarter doubt Obama was born in U.S.
Retrieved from http://politicalticker.blogs.cnn.com/2010/08/04/
cnn-poll-quarter-doubt-president-was-born-in-u-s/
van Oostendorp, H. (1996). Updating situation models derived from
newspaper articles. Medienpsychologie, 8, 21–33.
van Oostendorp, H., & Bonebakker, C. (1999). Difficulties in updat-
ing mental representations during reading news reports. In H. van
Oostendorp & S. R. Goldman (Eds.), The construction of men-
tal representations during reading (pp. 319–339). Mahwah, NJ:
Erlbaum.
Verkoeijen, P. P. J. L., Rikers, R. M. J. P., & Schmidt, H. G. (2004).
Detrimental influence of contextual change on spacing effects in
free recall. Journal of Experimental Psychology: Learning, Mem-
ory, and Cognition, 30, 796–800.
Weaver, K., Garcia, S. M., Schwarz, N., & Miller, D. T. (2007). Infer-
ring the popularity of an opinion from its familiarity: A repetitive
voice can sound like a chorus. Journal of Personality and Social
Psychology, 92, 821–833.
Whyte, K. P., & Crease, R. P. (2010). Trust, expertise, and the phi-
losophy of science. Synthese, 177, 411–425.
Wilkes, A. L., & Leatherbarrow, M. (1988). Editing episodic mem-
ory following the identification of error. Quarterly Journal of
Experimental Psychology: Human Experimental Psychology, 40,
361–387.
Wilkes, A. L., & Reynolds, D. J. (1999). On certain limitations
accompanying readers’ interpretations of corrections in episodic
text. The Quarterly Journal of Experimental Psychology, 52A,
165–183.
Wilkie, W. L., McNeill, D. L., & Mazis, M. B. (1984). Marketing’s
“scarlet letter”: The theory and practice of corrective advertising.
The Journal of Marketing, 48, 11–31.
Wilson, E. A., & Park, D. C. (2008). A case for clarity in the writ-
ing of health statements. Patient Education and Counseling, 72,
330–335.
Wilson, T. D., & Brekke, N. (1994). Mental contamination and men-
tal correction: Unwanted influences on judgments and evalua-
tions. Psychological Bulletin, 116, 117–142.
Winkielman, P., Huber, D. E., Kavanagh, L., & Schwarz, N. (2012).
Fluency of consistency: When thoughts fit nicely and flow
smoothly. In B. Gawronski & F. Strack (Eds.), Cognitive consis-
tency: A fundamental principle in social cognition (pp. 89–111).
New York, NY: Guilford Press.
Winters, K. H. (2008). Investigative summary regarding allegations that NASA suppressed climate change science and denied media access to Dr. James E. Hansen, a NASA scientist. Retrieved
from http://oig.nasa.gov/investigations/oi_sti_summary.pdf
Wolf, S., & Montgomery, D. A. (1977). Effects of inadmissible evi-
dence and level of judicial admonishment to disregard on the
judgments of mock jurors. Journal of Applied Social Psychology,
7, 205–219.
World Health Organization. (2005). Modern food biotechnol-
ogy, human health and development: An evidence-based study. Retrieved from http://www.who.int/foodsafety/publications/biotech/biotech_en.pdf
Wyer, R. S. (1974). Cognitive organization and change: An informa-
tion processing approach. Hillsdale, NJ: Erlbaum.
Young, S. D. (2011). Recommendations for using online social net-
working technologies to reduce inaccurate online health informa-
tion. Online Journal of Health and Allied Sciences, 10, 2.
Zaragoza, M. S., & Mitchell, K. J. (1996). Repeated exposure to sug-
gestion and the creation of false memories. Psychological Sci-
ence, 7, 294–300.