Misinformation and its Correction: Cognitive Mechanisms and Recommendations
for Mass Communication
In B. Southwell, E. A. Thorson, & L. Sheble (Eds.), Misinformation and Mass
Audiences. Austin, TX: University of Texas Press.
Briony Swire and Ullrich Ecker
School of Psychology
University of Western Australia
Contact: briony.swire-thompson@research.uwa.edu.au
Website: https://utpress.utexas.edu/books/southwell-thorson-sheble-misinformation-and-mass-audiences
Misinformation and its Correction: Cognitive Mechanisms and Recommendations for Mass
Communication
In 2007, a man in the United Kingdom posted a photograph on his website of a
“mummified fairy” that he had created as an April Fools’ prank. After the site received 20,000
visitors in one day, he explicitly revealed that he had fabricated the scenario, yet many accused
him of covering up the truth and vehemently insisted that the fairy was real (“Fairy fool”, 2007).
This anecdote highlights a valid concern for mass communicators: regardless of how ridiculous
information seems, once it is in the public sphere, it can take on a life of its own and may never
be fully retractable.
It has become a societal norm that the media and the internet provide vast quantities of
information, placing the onus on the individual to sort fact from fiction. However, individuals
often have limited time, cognitive resources, and motivation to understand complex topics such as
scientific findings or political developments, and misconceptions are commonplace.
Unfortunately, once inaccurate beliefs are formed, they are remarkably difficult to eradicate
(Ecker, Lewandowsky, Swire, & Chang, 2011a). Even after people receive clear and credible
corrections, misinformation continues to influence their reasoning: in cognitive psychology, this
is known as the continued influence effect of misinformation (H. Johnson & Seifert, 1994;
Lewandowsky, Ecker, Seifert, Schwarz, & Cook, 2012). The mummified fairy is a benign
example, but the ramifications can be serious. Belief in misinformation can adversely impact
decision making, and the continued influence effect has real-world implications in areas as
disparate as education, health, and the economy.
One prominent example is the misconception that the measles, mumps, and rubella (MMR)
vaccine causes autism. This falsehood has been repeatedly—and convincingly—retracted by the
media and scientific community over a number of years since the original myth was
disseminated in a fraudulent article. Despite these debunking efforts, the myth has led to a drop
in vaccination rates and an increase in vaccine-preventable disease (Poland & Spier, 2010). The
economic burden of 16 measles outbreaks in the US in 2011 alone has been estimated at
between $2.7 million and $5.3 million (Ortega-Sanchez, Vijayaraghavan, Barskey, &
Wallace, 2014). Thus, developing evidence-based recommendations on how to adequately
communicate corrections and minimize reliance upon inaccurate information is not only
important for individual decision making but also has ramifications for society as a whole.
The most important recommendation for both traditional mass media, such as
newspapers and television, and more recent technologies such as Twitter—which have
essentially transformed ordinary citizens into would-be journalists—is to take greater care to
ensure that information is correct to begin with. However, this is not always realistic due to the
fast pace of modern information consumption and dissemination, and the fact that ordinary
citizens are not bound by rules of journalistic integrity. Social media is thus an ideal breeding
ground for the propagation and transmission of misinformation, as exemplified by its role in the
rumors surrounding the Boston Marathon bombing in 2013. For example, a well-intentioned
Reddit thread was created to help find the perpetrators, yet an accusation against an innocent
and deceased Brown University student subsequently went viral (Guzman, 2013).
Information shared through social media is usually disseminated without fact-checking, based
merely on its potential to elicit emotional responses or support a personally motivated argument
(Peters, Kashima, & Clark, 2009).
This chapter focuses on cognitive mechanisms and theories accounting for the
continued influence of misinformation. In particular, we will discuss what drives belief in
4
inaccurate information, why certain individuals are predisposed to refrain from belief change
even in the face of good corrective evidence, and how corrections can be designed to maximize
impact. We therefore provide six practical recommendations based upon our current knowledge
of cognitive processes. We first discuss theoretical accounts for the continued influence effect
such as mental models, dual processing theory, the necessity of co-activation of misinformation
and new information, and the impact of the information’s source. We then discuss individual
predispositions to the continued influence effect, in particular a person’s worldview and
skepticism.
Mental Models
When people initially encounter information, a situation model of integrated memory
representations is built, and this model is continuously updated as new information becomes
available and relevant (Bower & Morrow, 1990). If the required changes are small, they can be
integrated into the situation model incrementally (Bailey & Zacks, 2015), yet if a larger change
is required, a “global” update is necessary, which involves discarding the old mental model and
creating a new one (Kurby & Zacks, 2012). However, even if there are sufficient cognitive
resources to notice a discrepancy between one’s mental model and the current environment,
people are often quite poor at assimilating new information or mapping it onto existing memory
representations (van Oostendorp, 2014). It is possible that the continued influence effect occurs
when people update incrementally when in fact a global update is called for. Reliance on
inaccurate information is less likely when there is an alternative to replace the inaccurate
information in a person’s mental model, as a readily available alternative explanation
facilitates global updating (Ecker, Lewandowsky, Cheung, & Maybery, 2015).
A classic paradigm for studying the continued influence effect involves presenting
participants with fictitious scenarios in which the initially stated cause of an event is later
retracted. One common
example is a narrative where negligent storage of gas cylinders is initially held responsible for
starting a warehouse fire, yet their presence is retracted shortly thereafter (H. Johnson & Seifert,
1994; Wilkes & Leatherbarrow, 1988). If participants are explicitly queried about the gas
cylinders, they typically acknowledge the gap in their understanding (i.e., a gap in their mental
event model) created by the earlier retraction, and correctly state that there were none. However,
when answering inferential reasoning questions regarding the event—such as “what was the
cause of the explosions?”—participants often still rely upon the outdated information. This
indicates that people prefer an inaccurate event model over an incomplete one, which can
lead to reliance upon discredited information even after an explicit correction (Ecker,
Lewandowsky, & Apai, 2011b).
Recommendation 1: Providing Factual Alternatives
One of the most effective methods of correcting misinformation is to provide an
alternative factual cause or explanation to facilitate “switching out” the inaccurate information in
an individual’s initial situation model. For example, if people are told that it was not gas
cylinders that caused the warehouse fire, but that there was evidence of arson, they are
dramatically less likely to rely upon the original inaccurate information (H. Johnson & Seifert,
1994; Ecker, Lewandowsky, & Tang, 2010). The alternative explanation effectively plugs the
model gap left by the retraction. The alternative should ideally have the same explanatory
relevance as the misinformation it replaces, and it is important that it is plausible—in fact, if the
new information is more plausible and easier to understand than the original, updating is even
more efficient (Baadte & Dutke, 2012).
In the real world, providing an alternative explanation to ameliorate reliance upon
inaccurate information can be problematic, as often there is no available substitute—sometimes
all that can be said about a piece of misinformation is that it is not true. For example, if a person
is accused of a crime, they might simply turn out to be “not guilty” without an alternative suspect
being readily available. The lack of adequate alternatives can have profound ramifications. For
instance, the ongoing rumors regarding missing Malaysia Airlines flight MH370, which
disappeared over the Indian Ocean in 2014, have proven difficult to retract: In the absence of
unequivocal evidence regarding what happened to the plane, traditional and social media were rife
with speculation that the plane was hijacked by terrorists or a suicidal pilot (e.g., Quest, 2016).
Arguably, belief in the hijacking speculation has been difficult to shift because a convincing
factual alternative has not been available.
Dual-Process Theory: Strategic and Automatic Memory Processes
The notion that retractions create gaps in mental models is useful for understanding the
continued influence effect. Invalidated information is not simply deleted
from memory—memory does not work like a whiteboard and retractions do not simply erase
misinformation. To explain why corrected misinformation is used during reasoning, some
theorists have focused on the memory processes governing information retrieval, where a
common assumption is that there are two separate types of memory retrieval: strategic and
automatic (Yonelinas, 2002).
Strategic memory processes are effortful and allow for the controlled recollection of the
information’s contextual details. Similar to the metadata of a computer file, contextual details
include information about the information itself, such as its spatiotemporal context of encoding,
its source, and its veracity (Frithsen & Miller, 2014). A person’s
ability to use strategic memory processes efficiently will depend upon factors such as effort,
motivation, the period of time since encoding, and age (e.g., Herron & Rugg, 2003). In contrast,
automatic processes are fast and relatively acontextual, and serve to quickly provide an
indication of memory strength or familiarity with an item or notion (Zimmer & Ecker, 2010).
Automatic retrieval processes can contribute to misinformation effects in two ways.
Firstly, the evaluation of a statement’s veracity is influenced by its familiarity; this is
problematic as information can be accepted as true just because it seems familiar. When
increased familiarity gives the illusion that information is valid, this is known as the illusory
truth effect (e.g., Begg, Anas, & Farinacci, 1992). Secondly, when questioned about an event or
otherwise cued, retracted misinformation can be automatically retrieved from memory without
any accompanying contextual details, and potentially without recalling that the information has
been retracted (cf. Ayers & Reder, 1998; Ecker et al., 2010). To illustrate, it has been argued that
once misinformation has been encoded and then retracted, a “negation tag” is linked to the
original memory representation (e.g., “Flight MH370 was hijacked–NOT TRUE”; cf. Gilbert,
Krull, & Malone, 1990). When queried about the topic, fast automatic memory processes might
simply retrieve the familiar claim, while strategic memory processes are required to retrieve the
negation tag and dismiss the familiar statement as untrue. If strategic memory processes are not
engaged, familiar claims are thus likely to be judged as true even after plausible retractions
(Dechêne, Stahl, Hansen, & Wänke, 2010).
Recommendation 2: Boosting Retrieval of the Retraction, Not Familiarity of the Myth
People can be actively encouraged to engage their strategic memory processes, and
this can reduce misinformation effects. Ecker et al. (2010) found that presenting
participants with a pre-exposure warning detailing the continued influence effect greatly reduced
reliance on misinformation, and was as effective as providing a factual alternative. The authors
argued that warnings not only allowed individuals to more effectively tag misinformation as
false when encoding its retraction, but also boosted later recall of the retraction (or the “negation
tag”). The effect of warnings was investigated mainly for theoretical reasons, and providing a
pre-exposure misinformation warning will not be a viable option in most real-world settings.
However, any incentive to engage in strategic memory processes should be useful, such as
boosting source-monitoring (Lindsay & M. Johnson, 1989; Poole & Lindsay, 2002).
Enhancing recollection is one way of reducing reliance on misinformation, but
circumventing the inflation of a misconception’s familiarity is potentially another way. This
involves minimizing unnecessary explicit repetition of misinformation. For example, an
educational pamphlet using a “myth-busting” format that repeats the myth before indicating that
it is false (e.g., “Flight MH370 was hijacked—FALSE”) can boost the familiarity of the
misconception, potentially increasing the risk that misconceptions are later mistakenly
remembered as being true. This misremembering of myths as facts was demonstrated by
Skurnik, Yoon, Park, and Schwarz (2005), as well as Peter and Koch (2016). In both these
studies, participants misremembered originally false statements as true more often than they
misremembered originally true statements as false. Additionally, Swire, Ecker, and
Lewandowsky (2017) found that retracting myths and affirming facts led to comparable belief
change initially (i.e., belief reduction for myths, belief increase for facts), but that belief change
was less sustained for myths over the course of a one-week period. In other words,
misinformation began to be “re-believed” while fact belief remained stable. Thus, where
possible, communicators should focus on the facts, and explicit repetition of a myth should be
minimized if the retraction does not provide adequate information to allow people to revise their
understanding.
Co-activation of Misconception and Corrective Facts
Despite the theoretically motivated suggestion to avoid myth repetition, in practice
corrections usually do require repetition of the myth—the question then becomes how best to
execute this. As discussed previously, presentation of factual alternative information is
conducive to successful mental-model revision. Beyond that, several theoretical accounts have
proposed that the co-activation of inaccurate knowledge and newly encoded factual information
facilitates knowledge revision. Co-activation is believed to increase the likelihood that the
individual notices discrepancies between originally-held misconceptions and factual evidence,
and that they update their knowledge accordingly (Kendeou & van den Broek, 2007).
After a correction, both the outdated and new information may co-exist in memory, and
can both be activated by relevant cues (cf. Ayers & Reder, 1998). Thus, it is crucial for efficient
updating and knowledge revision that a sufficient amount and quality of factual information is
provided, and ideally, that the correction also explains the reasons as to why the misconception is
wrong (Seifert, 2002). Adding adequate detail to the new accurate information can systematically
strengthen the correction by slowly decreasing interference from the outdated information
(Kendeou, Smith, & O’Brien, 2013). This illustrates how, when ample factual information is
available, misinformation can be used as an educational tool (Bedford, 2010).
Recommendation 3: Refutations of Misinformation as an Educational Tool
A refutation involves not only a statement that the misconception is false, but also a
comprehensive explanation as to why it is incorrect (Hynd, 2001). The efficacy of refutations has
primarily been investigated in the field of education, where research has often focused on the updating of
scientific misconceptions held by students in a classroom. A meta-analysis of 70 studies by
Guzzetti, Snyder, Glass, and Gamas (1993) indicated that corrections are most successful when
they include sufficient explanation as to why a misconception is false (and why the facts are
true). Other educational strategies aimed at reducing reliance on misinformation, such as class
discussions, demonstrations, and non-refutational texts (which simply present the correct
information without a description of the misconception itself), are often successful in the short
term, but not after a delay (Guzzetti, 2000).
It has been argued that the relative success of the refutation at promoting belief change
stems from the fact that, by design, it increases the likelihood of the old and new information
being co-activated in
memory (Kowalski & Taylor, 2009). It follows that when debunking a myth, its repetition seems
acceptable (despite the potential myth-familiarity boost) as long as (1) the repetition serves to
highlight a discrepancy between a misconception and factual evidence, thus promoting co-
activation, (2) the focus of the intervention can be shifted promptly from the myth to the factual
evidence, and (3) the target audience has the necessary resources—in particular with regard to
time and motivation—to engage with the provided materials and sees the information source as
credible, as would hopefully be the case in a classroom setting.
Retraction Source Credibility
People often do not have the time or inclination to be an expert in all fields, so most
knowledge, to a degree, relies upon accepting what someone else (or Google) claims to be
true. Thus, people hold many opinions and beliefs about events and causal relationships without
having relevant involvement or expertise. For example, trust in climate scientists is a predictor of
whether or not an individual acknowledges that climate change is anthropogenic (Mase, Cho, &
Prokopy, 2015). In general, high-credibility sources are more persuasive than low-credibility
sources (Eagly & Chaiken, 1993), and the lower one’s prior knowledge regarding a topic, the
more influential source credibility becomes (Jung, Walsh-Childers, & Kim, 2016). The two core
factors of source credibility discussed in the literature are (1) expertise—the extent to which the
source is capable of providing accurate information, and (2) trustworthiness—the perception that
the source is willing to provide information that the source itself believes to be accurate
(Pornpitakpan, 2004). A source can have varying degrees of these two qualities independently;
for example, a doctor may have a high degree of (perceived) expertise, but if found to be paid by
pharmaceutical companies may have relatively low (perceived) trustworthiness.
When it comes to retracting inaccurate information and changing beliefs, trustworthiness
intriguingly seems to play a much larger role than expertise (McGinnies & Ward, 1980). For
example, Guillory and Geraci (2013) investigated the credibility of the retraction source by
presenting participants with a story about a politician who was witnessed taking a bribe. This
was later retracted by people with varying degrees of trustworthiness and expertise. The authors
found that although trustworthiness was integral to the success of the retraction, expertise was
not. It should be noted that the way expertise was operationalized in this study was more akin to
“involvement in an event” rather than expertise in its perhaps more common meaning (i.e.,
“possessing relevant knowledge”). However, Ecker and Antonio (2016) replicated Guillory and
Geraci’s main finding with a more traditional interpretation of expertise and also found an effect
of trustworthiness but not expertise on the effectiveness of retractions.
Recommendation 4: Building Credibility
The ability to successfully correct misinformation appears to rely more upon the source’s
perceived honesty and integrity than its expertise. This means that Leonardo DiCaprio’s 2016
Oscar speech correcting climate-change misconceptions (Goldenberg, 2016) could be more
effective than an expert communication. Additionally, Paek, Hove, Jeong, and Kim (2011)
found that YouTube videos created by peers had more impact in terms of attitude change than
videos created by a non-profit organization. This suggests that social media can be an effective
vehicle for influencing others, and Facebook or Twitter posts may have more influence on
friends’ opinions than expert advice.
Ideally, and ethically, science communicators should aim to combine high
trustworthiness with high expertise. The quality and accuracy of the presented information will
influence how the source itself is perceived—this includes factors such as the information’s
presentation, plausibility, and whether or not it is supported by good examples (Jung et al., 2016;
Metzger, 2007). In general, perception of a source seems to be an iterative process: the more
quality information a source releases, the greater its perceived credibility. In mass
communications in particular, basing claims on evidence, adequately referencing the evidence,
and presenting data in an easily accessible way to minimize misinterpretations—and doing this
consistently—will build credibility and thus contribute to a greater efficacy of corrections
(Gigerenzer, Gaissmaier, Kurz-Milcke, Schwartz, & Woloshin, 2007).
Worldview
If an individual holds a strong belief that is fundamental to their identity, even the most
credible source may not be able to shift it. A person’s ideology often influences how information
is sought out and evaluated, and if the information runs counter to prior beliefs, it is likely to be
ignored or appraised more critically (Wells, Reedy, Gastil, & Lee, 2009). This is known as
motivated reasoning (Kunda, 1990). Motivated reasoning can be compounded by the
formation of ideological “echo-chambers,” where information is exchanged primarily amongst
people with similar viewpoints, such that corrections are less likely to reach the “target”
audience (Barbera, Jost, Nagler, Tucker, & Bonneau, 2015). This is fostered by social media,
where misinformation tends to circulate more quickly than associated corrections (Shin, Jian, Driscoll,
& Bar, 2016).
Even if a correction reaches the misinformed target audience, simply providing the
correct information is often insufficient, as continued reliance on misinformation is likely when
the misinformation conforms to a person’s pre-existing belief system, yet the correction does not
(Lewandowsky, Stritzke, Oberauer, & Morales, 2005). Retracting misinformation that runs
counter to a person’s worldview can ironically even strengthen the to-be-corrected
misinformation, a phenomenon known as the worldview backfire effect; this has been
demonstrated when correcting misinformation surrounding contentious issues such as climate
change (Hart & Nisbet, 2012), or vaccine safety (Nyhan & Reifler, 2015). Worldview biases are
particularly difficult to overcome, as even neutral coverage of an issue can lead to polarization
(Jerit & Barabas, 2012).
Recommendation 5: Providing Worldview- or Self-Affirming Corrections
If a correction concerns a contentious topic or politically sensitive subject matter, it is
beneficial to frame the correction in such a way that it is congruent with the person’s values in
order to reduce perceived threat (Kahan, 2010). For example, conservatives are more likely to
accept anthropogenic climate science if it is presented as a business opportunity for the nuclear
industry (Feygina, Jost, & Goldsmith, 2010). Additionally, in line with the above-mentioned
effects of source credibility, worldview congruence can potentially be conveyed through the
appropriate choice of messenger. Callaghan and Schnell (2009) found that attitudes towards gun
control were affected not only by the way the information was framed, but also the source of the
message. Participants who were presented an argument regarding the impacts of crime and
violence were 19% more likely to support gun control measures if the message came from a New
York Times journalist than if it was presented without a source. People also seem less defensive
regarding counter-attitudinal information when their self-worth is strengthened. For example,
Cohen, Aronson, and Steele (2000) demonstrated this effect of self-affirmation: participants who
had been instructed to write about a personal quality that made them feel good about themselves
were subsequently more likely to respond positively to evidence that challenged their beliefs
regarding the death penalty.
Skepticism
In contrast to evidence denial driven by motivated reasoning, skepticism is the awareness of
potential hidden agendas and a desire to accurately understand the evidence at hand (Mayo,
2015). Skepticism can reduce misinformation effects, as it leads to more cognitive resources
being allocated to the task of weighing up the veracity of both the misinformation and the
correction. For example, people rely less upon misinformation when given the task of fact
checking, looking for inconsistencies and correcting inaccuracies as they read a text (Rapp,
Hinze, Kohlhepp, & Ryskin, 2014). The increased deliberation over the accuracy of information
is often instigated when the information counters an individual’s worldview (Taber & Lodge,
2006). To illustrate, Lewandowsky et al. (2005) found that a greater degree of skepticism led to
better discounting of retracted real-world news reports, and DiFonzo, Beckstead, Stupak, and
Walders (2016) found that individuals with greater dispositional skepticism tended to believe
inaccurate rumors to a lesser extent. The ability to maintain doubt, question evidence and
scrutinize the original data—even when it aligns with one’s worldview—is conducive to
avoiding reliance on misinformation, but it is a difficult task. Thus, honing the skill of knowing
when to trust evidence, and when not to, can potentially have great benefits.
Recommendation 6: Fostering Skepticism
Skepticism is a quality that can be encouraged and even temporarily induced—for
example, negative mood increases skepticism and improves accuracy in detecting deceitful
communications (Forgas & East, 2008). There is also a growing movement suggesting that
evidence-based evaluation and critical thinking should formally be taught in schools. Schmaltz
and Lilienfeld (2014) suggested that activities such as asking students to identify pseudoscience
on campus and in the media could highlight the plethora of unfalsifiable claims in the public
sphere. Alternatively, the authors recommended activities where students create their own
pseudoscience to demonstrate and experience the ease with which anecdotal evidence or
“psychobabble” can be fabricated. Even examining real-world false-advertising cases can be
educational—for example, investigating the Federal Trade Commission cases in which Lumosity
paid $2 million for claiming its brain training could protect against cognitive impairment, and
Dannon paid $21 million for claiming its yoghurt could prevent the flu (Lordan, 2010; Rusk, 2016).
Lastly, the ability to question causal illusions—the perception that one event caused another,
when in fact they are unrelated—can also be taught, and a better understanding of the
probability of an outcome, the probability of a cause, and cause-outcome coincidences can help
promote skepticism (Matute et al., 2015).
Conclusion
Assessing the accuracy of information can be a difficult task. In today’s fast-paced
society, mass communication and social media play a key role in how news of current events is
shared and received. In reality, the public do not have time to investigate each claim they encounter in
depth; therefore, providing quality information is essential. In the aftermath of Brexit and the
2016 US election, where the political landscape was rife with misinformation and fake news
(Barthel, Mitchell, & Holcomb, 2016; McCann & Morgan, 2016), the ability to correct
inaccuracies has never seemed more pertinent. The six recommendations provided here can serve as
guidelines for mass communicators as to how best to retract the plethora of misinformation in
the public sphere. However, it is important to note that no corrective technique can reduce belief
to baseline levels, as if the misinformation had never been encountered. In addition, even if
people do shift their opinion and acknowledge that information they previously believed to be
true is incorrect, they are unlikely to change their voting preferences or feelings towards political
candidates (Swire, Berinsky, Lewandowsky, & Ecker, 2017). Given what we know about
misinformation and its correction, communicators thus hold a great deal of responsibility to
ensure that the information initially released is as accurate as possible.
References
Ayers, M.S., & Reder, L.M. (1998). A theoretical review of the misinformation effect:
Predictions from an activation-based memory model. Psychonomic Bulletin & Review, 5,
1–21.
Baadte, C., & Dutke, S. (2012). Learning about persons: the effects of text structure and
executive capacity on conceptual change. European Journal of Psychology of Education,
28, 1045–1064. http://doi.org/10.1007/s10212-012-0153-2
Bailey, H.R., & Zacks, J.M. (2015). Situation model updating in young and older adults: Global
versus incremental mechanisms. Psychology and Aging, 30, 232–244.
Barbera, P., Jost, J., Nagler, J., Tucker, J., & Bonneau, R. (2015). Tweeting from left to right: Is
online political communication more than an echo chamber? Psychological Science, 26,
1531-1542.
Barthel, M., Mitchell, A., & Holcomb, J. (2016). Many Americans believe fake news is sowing
confusion. Pew Research Center. Retrieved from
http://www.journalism.org/2016/12/15/many-americans-believe-fake-news-is-sowing-confusion/
Bedford, D. (2010). Agnotology as a teaching tool: Learning climate science by studying
misinformation. Journal of Geography, 109, 159–165.
Begg, I.M., Anas, A., & Farinacci, S. (1992). Dissociation of processes in belief: Source
recollection, statement familiarity, and the illusion of truth. Journal of Experimental
Psychology: General, 121, 446–458.
Bower, G.H., & Morrow, D.G. (1990). Mental models in narrative comprehension. Science,
247, 44–48.
Callaghan, K.C., & Schnell, F. (2009). Who says what to whom: Why messengers and citizen
beliefs matter in social policy framing. The Social Science Journal, 46, 12–28.
Cohen, G.L., Aronson, J., & Steele, C.M. (2000). When beliefs yield to evidence: Reducing
biased evaluation by affirming the self. Personality and Social Psychology Bulletin,
26, 1151–1164.
Dechêne, A., Stahl, C., Hansen, J., & Wänke, M. (2010). The truth about the truth: A meta-
analytic review of the truth effect. Personality and Social Psychology Review, 14, 238–257.
DiFonzo, N., Beckstead, J.W., Stupak, N., & Walders, K. (2016). Validity judgments of rumors
heard multiple times: The shape of the truth effect. Social Influence, 11, 22–39.
Eagly, A.H., & Chaiken, S. (1993). The psychology of attitudes. Fort Worth, TX: Harcourt Brace
Jovanovich College Publishers.
Ecker, U.K.H., & Antonio, L. (2016). Source credibility and the continued influence effect.
Unpublished manuscript.
Ecker, U.K.H., Lewandowsky, S., & Apai, J. (2011b). Terrorists brought down the plane!—No,
actually it was a technical fault: Processing corrections of emotive information. The
Quarterly Journal of Experimental Psychology, 64, 283–310.
Ecker, U.K.H., Lewandowsky, S., Cheung, C.S.C., & Maybery, M.T. (2015). He did it! She did
it! No, she did not! Multiple causal explanations and the continued influence of
misinformation. Journal of Memory and Language, 85, 101–115.
Ecker, U.K.H., Lewandowsky, S., Swire, B., & Chang, D. (2011a). Correcting false information
in memory: Manipulating the strength of misinformation encoding and its retraction.
Psychonomic Bulletin & Review, 18, 570–578.
Ecker, U.K.H., Lewandowsky, S., & Tang, D.T.W. (2010). Explicit warnings reduce but do not
eliminate the continued influence of misinformation. Memory & Cognition, 38, 1087–1100.
Fairy fool sparks huge response. (2007, April 1). BBC. Retrieved from
http://news.bbc.co.uk/2/hi/uk_news/england/derbyshire/6514283.stm
Feygina, I., Jost, J.T., & Goldsmith, R.E. (2010). System justification, the denial of global
warming, and the possibility of “system-sanctioned change.” Personality and Social
Psychology Bulletin, 36, 326–338.
Forgas, J.P., & East, R. (2008). On being happy and gullible: Mood effects on skepticism and the
detection of deception. Journal of Experimental Social Psychology, 44, 1362–1367.
Frithsen, A., & Miller, M.B. (2014). The posterior parietal cortex: Comparing remember/know
and source memory tests of recollection and familiarity. Neuropsychologia, 61, 31–44.
Gigerenzer, G., Gaissmaier, W., Kurz-Milcke, E., Schwartz, L.M., & Woloshin, S. (2007).
Helping doctors and patients make sense of health statistics. Psychological Science in
the Public Interest, 8, 53–96.
Gilbert, D.T., Krull, D.S., & Malone, P.S. (1990). Unbelieving the unbelievable: Some problems
in the rejection of false information. Journal of Personality and Social Psychology, 59,
601–613.
Goldenberg, S. (2016, February 29). How Leonardo DiCaprio became one of the world's top
climate change champions. The Guardian. Retrieved from
http://www.theguardian.com/environment/2016/feb/29/how-leonardo-dicaprio-oscar-
climate-change-campaigner
Guillory, J.J., & Geraci, L. (2013). Correcting erroneous inferences in memory: The role of
source credibility. Journal of Applied Research in Memory and Cognition, 2, 201–209.
Guzman, M. (2013). After Boston, still learning. Quill, 101, 22–25.
Guzzetti, B.J. (2000). Learning counter-intuitive science concepts: what have we learned from
over a decade of research? Reading & Writing Quarterly, 16, 89–98.
Guzzetti, B.J., Snyder, T.E., Glass, G.V., & Gamas, W.S. (1993). Promoting conceptual change
in science: A comparative meta-analysis of instructional interventions from reading
education and science education. Reading Research Quarterly, 28, 117–159.
Hart, P.S., & Nisbet, E.C. (2012). Boomerang effects in science communication: How motivated
reasoning and identity cues amplify opinion polarization about climate mitigation
policies. Communication Research, 39, 701–723.
Herron, J.E., & Rugg, M.D. (2003). Strategic influences on recollection in the exclusion task:
Electrophysiological evidence. Psychonomic Bulletin & Review, 10, 703–710.
Hynd, C.R. (2001). Refutational texts and the change process. International Journal of
Educational Research, 35, 699–714.
Jerit, J., & Barabas, J. (2012). Partisan perceptual bias and the information environment. Journal
of Politics, 74, 672–684.
Johnson, H.M., & Seifert, C.M. (1994). Sources of the continued influence effect: When
misinformation in memory affects later inferences. Journal of Experimental Psychology:
Learning, Memory, and Cognition, 20, 1420–1436.
Jung, E.H., Walsh-Childers, K., & Kim, H.S. (2016). Factors influencing the perceived
credibility of diet-nutrition information web sites. Computers in Human Behavior, 58, 37–47.
Kahan, D. (2010). Fixing the communications failure. Nature, 463, 296–297.
Kendeou, P., & van den Broek, P. (2007). The effects of prior knowledge and text structure on
comprehension processes during reading of scientific texts. Memory & Cognition, 35,
1567–1577.
Kendeou, P., Smith, E. R., & O’Brien, E.J. (2013). Updating during reading comprehension:
Why causality matters. Journal of Experimental Psychology: Learning, Memory, and
Cognition, 39, 854–865.
Kowalski, P., & Taylor, A.K. (2009). The effect of refuting misconceptions in the introductory
psychology class. Teaching of Psychology, 36, 153–159.
Kunda, Z. (1990). The case for motivated reasoning. Psychological Bulletin, 108, 480-498.
Kurby, C.A., & Zacks, J.M. (2012). Starting from scratch and building brick by brick in
comprehension. Memory & Cognition, 40, 812–826.
Lewandowsky, S., Ecker, U.K.H., Seifert, C.M., Schwarz, N., & Cook, J. (2012).
Misinformation and its correction: Continued influence and successful debiasing.
Psychological Science in the Public Interest, 13, 106–131.
Lewandowsky, S., Stritzke, W.G.K., Oberauer, K., & Morales, M. (2005). Memory for fact,
fiction, and misinformation: The Iraq War 2003. Psychological Science, 16, 190–195.
Lindsay, D. S., & Johnson, M. K. (1989). The eyewitness suggestibility effect and memory for
source. Memory & Cognition, 17, 349–358.
Lombrozo, T. (2007). Simplicity and probability in causal explanation. Cognitive Psychology,
55, 232–257.
Lordan, B. (2010). Dannon agrees to drop exaggerated health claims for Activia yogurt and
DanActive dairy drink. Federal Trade Commission. Retrieved from
https://www.ftc.gov/news-events/press-releases/2010/12/dannon-agrees-drop-
exaggerated-health-claims-activia-yogurt
Mase, A.S., Cho, H., & Prokopy, L.S. (2015). Enhancing the Social Amplification of Risk
Framework (SARF) by exploring trust, the availability heuristic, and agricultural
advisors’ belief in climate change. Journal of Environmental Psychology, 41, 166–176.
Matute, H., Blanco, F., Yarritu, I., Diaz-Lago, M., Vadillo, M., & Berberia, I. (2015). Illusions of
causality: how they bias our everyday thinking and how they could be reduced. Frontiers
in Psychology, 1, 1-13.
Mayo, R. (2015). Cognition is a matter of trust: Distrust tunes cognitive processes. European
Review of Social Psychology, 26, 283–327.
McCann, K., & Morgan, T. (2016). Nigel Farage: £350 million pledge to fund the NHS was 'a
mistake'. The Telegraph. Retrieved from
http://www.telegraph.co.uk/news/2016/06/24/nigel-farage-350-million-pledge-to-fund-
the-nhs-was-a-mistake/
McGinnies, E., & Ward, C.D. (1980). "Better liked than right": Trustworthiness and expertise as
factors in credibility. Personality and Social Psychology Bulletin, 6, 467–472.
Metzger, M.J. (2007). Making sense of credibility on the Web: Models for evaluating online
information and recommendations for future research. Journal of the American Society
for Information Science and Technology, 58, 2078–2091.
Nyhan, B., & Reifler, J. (2015). Does correcting myths about the flu vaccine work? An
experimental evaluation of the effects of corrective information. Vaccine, 33, 459–464.
Ortega-Sanchez, I.R., Vijayaraghavan, M., Barskey, A.E., & Wallace, G.S. (2014). The
economic burden of sixteen measles outbreaks on United States public health
departments in 2011. Vaccine, 32, 1311–1317.
Paek, H.J., Hove, T., Jeong, H., & Kim, M. (2011). Peer or expert? The persuasive impact of
YouTube public service announcement producers. International Journal of Advertising,
30, 161–188.
Peter, C., & Koch, T. (2016). When debunking scientific myths fails (and when it does not): The
backfire effect in the context of journalistic coverage and immediate judgments as
prevention strategy. Science Communication, 38, 3–25.
Peters, K., Kashima, Y., & Clark, A. (2009). Talking about others: Emotionality and the
dissemination of social information. European Journal of Social Psychology, 39, 207–222.
Poland, G. A., & Spier, R. (2010). Fear, misinformation, and innumerates: How the Wakefield
paper, the press, and advocacy groups damaged the public health. Vaccine, 28, 2361–
2362.
Poole, D.A., & Lindsay, D.S. (2002). Reducing child witnesses' false reports of misinformation
from parents. Journal of Experimental Child Psychology, 81, 117–140.
Pornpitakpan, C. (2004). The persuasiveness of source credibility: A critical review of five
decades’ evidence. Journal of Applied Social Psychology, 34, 243–281.
Quest, R. (2016). MH370: Did the pilots do it? CNN. Retrieved from
http://www.cnn.com/2016/03/07/asia/mh370-quest-pilots/.
Rapp, D.N., Hinze, S.R., Kohlhepp, K., & Ryskin, R.A. (2014). Reducing reliance on inaccurate
information. Memory & Cognition, 42, 11–26.
Rusk, M. (2016). Lumosity to pay $2 million to settle FTC deceptive advertising charges for its
“Brain Training” program. Federal Trade Commission. Retrieved from
https://www.ftc.gov/news-events/press-releases/2016/01/lumosity-pay-2-million-settle-
ftc-deceptive-advertising-charges
Schmaltz, R., & Lilienfeld, S.O. (2014). Hauntings, homeopathy, and the Hopkinsville Goblins:
Using pseudoscience to teach scientific thinking. Frontiers in Psychology, 5, 1–5.
Seifert, C.M. (2002). The continued influence of misinformation in memory: What makes a
correction effective? Psychology of Learning and Motivation: Advances in Research and
Theory, 41, 265–292.
Shin, J., Jian, L., Driscoll, K., & Bar, F. (2016). Political rumoring on Twitter during the 2012
US presidential election: Rumor diffusion and correction. New Media & Society, 1–22.
Skurnik, I., Yoon, C., Park, D.C., & Schwarz, N. (2005). How warnings about false claims
become recommendations. Journal of Consumer Research, 31, 713–724.
Swire, B., Berinsky, A., Lewandowsky, S., & Ecker, U.K.H. (2017). Processing political
misinformation—Comprehending the Trump phenomenon. Royal Society Open Science,
4, 160802. https://doi.org/10.1098/rsos.160802
Swire, B., Ecker, U. K. H., & Lewandowsky, S. (2017). The role of familiarity in correcting
inaccurate information. Journal of Experimental Psychology: Learning, Memory, and
Cognition.
Taber, C. S., & Lodge, M. (2006). Motivated skepticism in the evaluation of political beliefs.
American Journal of Political Science, 50, 755–769. http://doi.org/10.1111/j.1540-
5907.2006.00214.x
Van Oostendorp, H. (2014). The ambivalent effect of focus on updating mental representations.
In D.N. Rapp & J.L. Braasch (Eds.), Processing inaccurate information: Theoretical and
applied perspectives from cognitive science and the educational sciences (pp. 223–244).
Cambridge, MA: MIT Press.
Wells, C., Reedy, J., Gastil, J., & Lee, C. (2009). Information distortion and voting choices: The
origins and effects of factual beliefs in initiative elections. Political Psychology, 30, 953–969.
Wilkes, A.L., & Leatherbarrow, M. (1988). Editing episodic memory following the identification
of error. Quarterly Journal of Experimental Psychology: Human Experimental
Psychology, 52, 165–183.
Yonelinas, A.P. (2002). The nature of recollection and familiarity: A review of 30 years of
research. Journal of Memory and Language, 46, 441–517.
Zimmer, H.D., & Ecker, U.K.H. (2010). Remembering perceptual features unequally bound in
object and episodic tokens: Neural mechanisms and their electrophysiological correlates.
Neuroscience & Biobehavioral Reviews, 34, 1066–1079.
... Research has focused on investigating how misinformation spreads, ways to counteract this rapid spread, and ways to correct belief in health-related misinformation. Regardless of how these misinformation-based beliefs originated, they are relatively stable in the recipient's cognitive/mental model, quite resistant to corrective debunking messages, and therefore difficult to eliminate (Cook et al., 2017;Ecker et al., 2011;Lewandowsky et al., 2012;Swire & Ecker, 2018). ...
... Cognitive and motivational reasons for believing misinformation can be distinguished (van der Linden, 2022). However, misconceptions can also occur because people merely have limited time, cognitive resources, and/or motivation to understand complex scientific topics in everyday life (Cook et al., 2017;Swire & Ecker, 2018). Researchers have identified several important motivational factors for belief in health-related misinformation that may vary by country, such as political ideology, particularly conservatism and populism (Eberl et al., 2021;Pennycook et al., 2020a;Roozenbeek et al., 2020). ...
... Unfortunately, this positive effect is usually not long term (Carey et al., 2022), and once confronted with misinformation, people often continue to recall false details from memory despite taking note of factual correctionsknown as the continued influence effect of misinformation (Johnson & Seifert, 1994;Lewandowsky et al., 2012;Walter & Tukachinsky, 2019) and belief perseverance (Anderson, 2007). Researchers have compiled a collection of recommended best practice debunking strategies (e.g., Cook & Lewandowsky, 2011;Swire & Ecker, 2018). Debunking, at its best, provides detailed and clear rebuttals after people have been exposed to a falsehood. ...
Chapter
This chapter explores the pervasive issue of health-related misinformation and fake news, particularly during the COVID-19 pandemic. It defines key terms, examines how misinformation spreads, and discusses its prevalence and impact on public health. The chapter highlights the role of social media in the rapid dissemination of false information and investigates the reasons behind the public’s belief in such misinformation. Consequences of believing in health misinformation are addressed, including negative impacts on health behaviors and public trust. The chapter also reviews various strategies to combat misinformation, such as debunking, nudging, andprebunking, and emphasizes the need for comprehensive approaches to strengthen individual and societal resilience. Future research directions are suggested, focusing on underexplored areas and the generalizability of findings beyond the COVID-19 context. This comprehensive analysis underscores the importance of combating health misinformation to protect public health.
... The CIE has typically been explained as resulting from erroneous memory processes H. M. Johnson & Seifert, 1994;Lewandowsky et al., 2012;Seifert, 2002;Swire & Ecker, 2018). Connor Desai et al. (2020), however, proposed an explanation for the CIE that is based on rationality: According to their framework, a CIE is considered rational if the source that presented the original information (the informant) is deemed more credible than the source retracting the original information (the retractor). ...
... evidence to justify agreement with the retracted original information. This conclusion would stand in contrast to previous theoretical accounts that proposed erroneous memory processes as the cause of the CIE H. M. Johnson & Seifert, 1994;Lewandowsky et al., 2012;Seifert, 2002;Swire & Ecker, 2018). However, such a strong conclusion would be premature because the evidence for the null hypothesis of Contrast 4 was not conclusive. ...
Article
Full-text available
The Continued Influence Effect (CIE) is the phenomenon that retracted information often continues to influence judgments and inferences. The CIE is rational when the source that retracts the information (the retractor) is less credible than the source that originally presented the information (the informant; Connor Desai et al., 2020). Conversely, a CIE is not rational when the retractor is at least as credible as the informant. Thus, a rational account predicts that the CIE depends on the relative credibility of informant and retractor. In two experiments (N = 151, N = 146), informant credibility and retractor credibility were independently manipulated. Participants read a fictitious news report in which original information and a retraction were each presented by either a source with high credibility or a source with low credibility. In both experiments, when the informant was more credible than the retractor, participants showed a CIE compared to control participants who saw neither the information nor the retraction (ds > 0.82). When the informant was less credible than the retractor, participants showed no CIE, in line with a rational account. However, in Experiment 2, participants also showed a CIE when informant and retractor were equally credible (ds > 0.51). This cannot be explained by a rational account, but is consistent with error-based accounts of the CIE. Thus, a rational account alone cannot fully account for the complete pattern of results, but needs to be complemented with accounts that view the CIE as a memory-based error.
... In the process, they have uncovered many debunking strategies-some that have been shown to be effective, and some that have not. While initial efforts focused on correcting misperceptions, researchers argue that individuals create a mental model of an event or situation and are reluctant to modify the model with new information when the existing belief has sufficient explanatory power (Ecker, Lewandowsky, Swire, & Chang, 2011;Swire & Ecker, 2018). When misinformation is integrated into a mental schema for a topic, unless corrective information is presented simultaneously, it becomes very challenging to change the mental model later (Walter & Tukachinsky, 2020). ...
Article
Full-text available
Combating the spread of misinformation is a struggle that has inspired considerable research in the fields of psychology, education, political science, and information science, among others. Such research has found that “prebunking” or “inoculation” techniques—strategies that reduce the acceptance of misinformation before one has encountered it—have had marked success. However, there is little evidence that librarians are deliberately employing inoculation techniques in their information literacy (IL) instruction. Via a quasi-experimental study, this research explores the effect of prebunking techniques in an IL instruction session on undergraduate students’ ability to recognise misinformation. The prebunking techniques are delivered through a competitive game called Chaos Creator, based on the Bad News game developed by researchers at Cambridge University. Results of the study show that misinformation inoculation techniques are more effective than the popular source evaluation tool, the CRAP test, in helping students identify misinformation. However, misinformation inoculation techniques can backfire, causing students to become overly sceptical of trustworthy messages.
... To optimize educational resources and make room for evidence-based strategies, it is important to refute neuromyths (Krammer et al. 2021;Rousseau 2021). Past research suggests that textual refutations can be used to update erroneous beliefs in general (Chan et al. 2017;Rich et al. 2017;Swire and Ecker 2018). Various types of refutation texts have been effectively used to correct erroneous beliefs about educational misconceptions, including those simply stating that the information is incorrect, to more complex refutations that include explanatory and personalized feedback (Lithander et al. 2021;Ferrero et al. 2020;Dersch et al. 2022). ...
Article
Full-text available
Students and educators sometimes hold beliefs about intelligence and learning that lack scientific support, often called neuromyths. Neuromyths can be problematic, so it is important to find methods to correct them. Previous findings demonstrate that textual refutations are effective for correcting neuromyths. However, even after correction, erroneous information may continue to influence reasoning. In three experiments, we investigated whether feedback could be used to update students’ and educators’ beliefs and influence their reasoning about neuromyths. Across all experiments, the results showed that both students and educators held erroneous beliefs about learning and memory that could be updated after receiving feedback. Feedback also increased students’, but not teachers’, reasoning accuracy. The results demonstrate that feedback can be used to update beliefs in neuromyths, but these beliefs may influence reasoning even after correction.
... Therefore, providing alternative explanations is not only essential but is more effective when information is framed in such a way that it aligns with the worldview of targeted users. People tend to accept familiar information as true (Schwarz et al., 2007;Swire & Ecker, 2018). Debunking interventions can also backfire due to a familiarity effect when they repeat initial misinformation in order to correct it (Schwarz et al., 2007). ...
Article
Full-text available
The rise of misinformation on social media platforms is an extremely worrisome issue and calls for the development of interventions and strategies to combat fake news. This research investigates one potential mechanism that can help mitigate fake news: prompting users to form implementation intentions along with education. Previous research suggests that forming “if – then” plans, otherwise known as implementation intentions, is one of the best ways to facilitate behavior change. To evaluate the effectiveness of such plans, we used MTurk to conduct an experiment where we educated participants on fake news and then asked them to form implementation intentions about performing fact checking before sharing posts on social media. Participants who had received both the implementation intention intervention and the educational intervention significantly engaged more in fact checking behavior than those who did not receive any intervention as well as participants who had received only the educational intervention. This study contributes to the emerging literature on fake news by demonstrating that implementation intentions can be used in interventions to combat fake news.
... However, users' detection and correction can also be an effective way to combat misinformation [9,10] and can be as effective as algorithmic corrections [11]. Despite this, people generally do not challenge misinformation they encounter on social media [12][13][14][15][16][17] User behaviour on social media is complex and can be influenced by numerous factors with regard to challenging misinformation, such as diversity in individuals' cognitive processes [18] and users' concerns regarding their image on social media or their relationship with others [19]. ...
Article
In the current digital era, reliance on technology for communication and the gathering and dissemination of information is growing. However, the information disseminated can be misleading or false. Nurses tend to be trusted by the public, but not all information brought to the public forum is well-informed. Ill-informed discussions have resulted in harm to individuals who take such information as fact and act on it. As technology continues to evolve and fact versus fiction becomes more challenging to discern, it is critical that nurses recognize their ethical responsibility to the public in providing information for which sound evidence exists. This analysis will explore medical misinformation through concepts such as confirmation bias and the politicization of science. Also, the impact of nurses not recognizing the power and responsibility associated with using their credentials in public fora, even when the central motivator is that they believe they are helping other individuals. Using nursing goals and perspectives, we will discuss the ethical responsibility of nurses to be aware of the soundness of what they think they know. Utilizing ideas of professional responsibilities, as outlined by professional codes of ethics as well as the ethical principles of non-maleficence and veracity, we explore the problem of nurses propagating misinformation and suggest strategies to enhance nurse awareness of their ethical responsibilities for veracity and transparency regarding what is known and what is not.
Article
Visual misinformation about ongoing contaminated food crises poses a significant threat to organizational wellbeing and public health, particularly when people share incorrect images on social media. Corrective responses and highly credible media sources as effective strategies geared toward combating crisis misinformation. Extending Lewandowsky and colleagues’ (2012) corrective strategies for debunking misinformation, the concept of visual misinformation, cognitive process, and the theory of social sharing of emotion, this study aims to advance research on visual misinformation in public relations and crisis communication. A 2 (image veracity: incorrect vs. true) x 2 (corrective strategy: simple rebuttal vs. simple rebuttal + fact elaboration) x 2 (source credibility: high vs. low) between-subjects eye-tracking experiment was conducted to test the effects of these features on visual attention and intention to share. Additionally, we explored the mediation effects of emotional surprise and perceived crisis severity on sharing posts. Results showed visual cues (e.g., images and sources) and textual cues (e.g., corrective strategies) led to different allocations of visual attention. We found visual attention significantly mediated the effects of combined corrective messages on sharing. Additionally, feeling surprised also significantly mediated the effects of messages with low credible sources on sharing. This study provides insights into advancing crisis communication theory and offers evidence-based recommendations for health organizations and practitioners to better fight against food crisis misinformation.
Chapter
While there is overwhelming scientific agreement on climate change, the public has become polarized over fundamental questions such as human-caused global warming. Communication strategies to reduce polarization rarely address the underlying cause: ideologically-driven misinformation. In order to effectively counter misinformation campaigns, scientists, communicators, and educators need to understand the arguments and techniques in climate science denial, as well as adopt evidence-based approaches to neutralizing misinforming content. This chapter reviews analyses of climate misinformation, outlining a range of denialist arguments and fallacies. Identifying and deconstructing these different types of arguments is necessary to design appropriate interventions that effectively neutralize the misinformation. This chapter also reviews research into how to counter misinformation using communication interventions such as inoculation, educational approaches such as misconception-based learning, and the interdisciplinary combination of technology and psychology known as technocognition.
Article
People frequently continue to use inaccurate information in their reasoning even after a credible retraction has been presented. This phenomenon is often referred to as the continued influence effect of misinformation. The repetition of the original misconception within a retraction could contribute to this phenomenon, as it could inadvertently make the “myth” more familiar—and familiar information is more likely to be accepted as true. From a dual-process perspective, familiarity-based acceptance of myths is most likely to occur in the absence of strategic memory processes. We thus examined factors known to affect whether strategic memory processes can be utilized: age, detail, and time. Participants rated their belief in various statements of unclear veracity, and facts were subsequently affirmed and myths were retracted. Participants then re-rated their belief either immediately or after a delay. We compared groups of young and older participants, and we manipulated the amount of detail presented in the affirmative/corrective explanations, as well as the retention interval between encoding and a retrieval attempt. We found that (1) older adults over the age of 65 were worse at sustaining their post-correction belief that myths were inaccurate, (2) a greater level of explanatory detail promoted more sustained belief change, and (3) fact affirmations promoted more sustained belief change than myth retractions over the course of one week (but not over three weeks). This supports the notion that familiarity is indeed a driver of continued influence effects.
Article
This study investigated the cognitive processing of true and false political information. Specifically, it examined the impact of source credibility on the assessment of veracity when information comes from a polarizing source (Experiment 1), and the effectiveness of explanations when they come from one’s own political party or an opposition party (Experiment 2). Participants rated their belief in factual and incorrect statements that Donald Trump made on the campaign trail; facts were subsequently affirmed and misinformation retracted. Participants then re-rated their belief immediately or after a delay. Experiment 1 found that (1) if information was attributed to Trump, Republican supporters of Trump believed it more than if it was presented without attribution, whereas the opposite was true for Democrats; and (2) although Trump supporters reduced their belief in misinformation items following a correction, they did not change their voting preferences. Experiment 2 revealed that the explanation’s source had relatively little impact, and that belief updating was more influenced by the perceived credibility of the individual who initially presented the information. These findings suggest that people use political figures as a heuristic to guide their evaluation of what is true or false, yet do not necessarily insist on veracity as a prerequisite for supporting political candidates.
Article
Social media can be a double-edged sword for political misinformation, serving either as a conduit that propagates false rumors through a large population or as an effective tool for challenging misinformation. To understand this phenomenon, we tracked a comprehensive collection of political rumors on Twitter during the 2012 US presidential election campaign, analyzing a large set of rumor tweets (n = 330,538). We found that Twitter helped rumor spreaders circulate false information within homophilous follower networks, but seldom functioned as a self-correcting marketplace of ideas. Rumor spreaders formed strong partisan structures in which core groups of users selectively transmitted negative rumors about opposing candidates. Yet rumor rejecters neither formed a sizable community nor exhibited a partisan structure. While rumors generally resisted debunking by professional fact-checking sites (e.g., Snopes), this was less true of rumors originating from satirical sources.
Article
Research on the illusory truth effect has found that repeated presentation of uncertain statements increases validity judgments of those statements. Three experiments explored the shape of the repetition–validity–judgment relationship over multiple repetitions, the mediating role of processing fluency, and the moderating role of dispositional skepticism. Participants read narratives in which different rumors were repeated 0–6 or 0–9 times; validity estimates, processing fluency, and dispositional skepticism were also measured. Validity judgments were logarithmically related to repetitions; this effect was mediated by processing fluency, and moderated slightly by skepticism. Results explore the boundaries of the processing fluency contrast account of the illusory truth effect, suggest a minor role for skepticism, and inform research on belief in rumor (uncertain statements in circulation).
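As a rough illustration of the logarithmic shape reported above, the sketch below fits a log-linear curve to simulated ratings; the data, coefficients, and the ln(1 + repetitions) parameterization are assumptions made for illustration, not the study's model or estimates.

```python
# Minimal sketch of a logarithmic repetition-validity relationship, fit to simulated ratings.
import numpy as np

repetitions = np.arange(0, 10)                                   # a rumor heard 0-9 times
rng = np.random.default_rng(1)
validity = 3.0 + 0.8 * np.log1p(repetitions) + rng.normal(0, 0.1, repetitions.size)  # simulated mean ratings

# Linear regression on the log-transformed predictor: validity ~ b0 + b1 * ln(1 + repetitions)
b1, b0 = np.polyfit(np.log1p(repetitions), validity, 1)
print(f"validity ~ {b0:.2f} + {b1:.2f} * ln(1 + repetitions)")
```

The fitted curve rises steeply over the first few repetitions and then flattens, which is the qualitative signature of a logarithmic truth effect.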
Article
The current review proposes that exposure to a specific untrustworthy source of information engages a mode of thought—a distrust mindset—that is also evoked by incidental distrust contexts and by personality characteristics. The review summarises empirical research demonstrating that—in contrast to trust, which leads to the familiar congruent type of cognitive processes—distrust triggers a spontaneous activation of alternatives and incongruent associations for a given concept. These alternatives dilute the activation level of the given concept, indicating that our mind can spontaneously stop the congruent-processing flow. Consequently, distrust blocks congruent effects such as confirmatory biases, accessibility effects, stereotyping, and routine reasoning. Thus, the review suggests that the basic flow of our cognition is (dis)trust dependent. The review concludes with a discussion of the effect of the distrust mindset as a demonstration of (1) situated cognition and (2) a spontaneous negation process.
Article
When reporting scientific information, journalists often present common myths that are then refuted with scientific facts. However, correcting misinformation this way is often not only ineffective but can even increase the likelihood that people misremember the myth as true. We test this backfire effect in the context of journalistic coverage and examine how to counteract it. In a web-based experiment, we find evidence for a systematic backfire effect that occurs after a few minutes and strengthens after five days. Results show that forming judgments immediately during reception (in contrast to memory-based judgments) can reduce backfire effects and prevent erroneous memory from affecting participants’ attitudes.
Article
Two types of misinformation effects are discussed in the literature—the post-event misinformation effect and the continued influence effect. The former refers to the distorting memorial effects of misleading information presented after valid event encoding; the latter refers to information that is initially presented as true but subsequently turns out to be false, and that continues to affect memory and reasoning despite the correction. In two experiments, using a paradigm that merges elements from both traditions, we investigated the roles of presentation order and recency when two competing causal explanations for an event are presented and one is subsequently retracted. Theoretical accounts of misinformation effects make diverging predictions regarding the roles of presentation order and recency. A recency account—derived from time-based models of memory and from reading-comprehension research suggesting efficient situation-model updating—predicts that the more recently presented cause should have a stronger influence on memory and reasoning. By contrast, a primacy account—derived from primacy effects in impression formation and story recall, as well as findings of inadequate memory updating—predicts that the initially presented cause should be dominant irrespective of temporal factors. Results indicated that (1) a cause’s recency, rather than its position (i.e., whether it was presented first or last), determined the emphasis that people placed on it in their later reasoning, with more recent explanations being preferred; and (2) a retraction was equally effective whether it invalidated the first or the second cause, as long as the cause’s recency was held constant. This provides evidence against the primacy account and supports time-based models of memory such as temporal distinctiveness theory.
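For readers unfamiliar with temporal-distinctiveness accounts, the sketch below shows one common way such models are formalized (log-scaled temporal similarity, as in SIMPLE-style models); the specific formula, parameter value, and toy encoding times are assumptions, not the authors' model.

```python
# Illustrative sketch of a temporal-distinctiveness computation of the kind invoked above.
import numpy as np

def relative_distinctiveness(times_since_encoding, c=1.0):
    """Higher values mean a trace is less confusable with its temporal neighbours."""
    log_t = np.log(np.asarray(times_since_encoding, dtype=float))
    similarity = np.exp(-c * np.abs(log_t[:, None] - log_t[None, :]))  # pairwise confusability
    return 1.0 / similarity.sum(axis=1)

# Three candidate explanations encoded 100 s, 60 s, and 10 s ago: the most recent one scores highest.
print(relative_distinctiveness([100.0, 60.0, 10.0]))
```

Because older items are compressed together on a log timescale while recent items remain isolated, the most recently encoded explanation comes out as the most distinct, matching the recency pattern reported above.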
Chapter
Interdisciplinary approaches to identifying, understanding, and remediating people's reliance on inaccurate information that they should know to be wrong. Our lives revolve around the acquisition of information. Sometimes the information we acquire—from other people, from books, or from the media—is wrong. Studies show that people rely on such misinformation, sometimes even when they are aware that the information is inaccurate or invalid. And yet investigations of learning and knowledge acquisition largely ignore encounters with this sort of problematic material. This volume fills the gap, offering theoretical and empirical perspectives on the processing of misinformation and its consequences. The contributors, from cognitive science and education science, provide analyses that represent a variety of methodologies, theoretical orientations, and fields of expertise. The chapters describe the behavioral consequences of relying on misinformation and outline possible remediations; discuss the cognitive activities that underlie encounters with inaccuracies, investigating why reliance occurs so readily; present theoretical and philosophical considerations of the nature of inaccuracies; and offer formal, empirically driven frameworks that detail when and how inaccuracies will lead to comprehension difficulties. Contributors Peter Afflerbach, Patricia A. Alexander, Jessica J. Andrews, Peter Baggetta, Jason L. G. Braasch, Ivar Bråten, M. Anne Britt, Rainer Bromme, Luke A. Buckland, Clark A. Chinn, Byeong-Young Cho, Sidney K. D'Mello, Andrea A. diSessa, Ullrich K. H. Ecker, Arthur C. Graesser, Douglas J. Hacker, Brenda Hannon, Xiangen Hu, Maj-Britt Isberner, Koto Ishiwa, Matthew E. Jacovina, Panayiota Kendeou, Jong-Yun Kim, Stephan Lewandowsky, Elizabeth J. Marsh, Ruth Mayo, Keith K. Millis, Edward J. O'Brien, Herre van Oostendorp, José Otero, David N. Rapp, Tobias Richter, Ronald W. Rinehart, Yaacov Schul, Colleen M. Seifert, Marc Stadtler, Brent Steffens, Helge I. Strømsø, Briony Swire, Sharda Umanath
Article
We estimated the ideological preferences of 3.8 million Twitter users and, using a data set of nearly 150 million tweets concerning 12 political and nonpolitical issues, explored whether online communication resembles an "echo chamber" (as a result of selective exposure and ideological segregation) or a "national conversation." We observed that information was exchanged primarily among individuals with similar ideological preferences in the case of political issues (e.g., the 2012 presidential election, the 2013 government shutdown) but not for many other current events (e.g., the 2013 Boston Marathon bombing, the 2014 Super Bowl). Discussion of the Newtown shootings in 2012 reflected a dynamic process, beginning as a national conversation before transforming into a polarized exchange. With respect to both political and nonpolitical issues, liberals were more likely than conservatives to engage in cross-ideological dissemination; this is an important asymmetry in the structure of communication that is consistent with psychological theory and research bearing on ideological differences in epistemic, existential, and relational motivation. Overall, we conclude that previous work may have overestimated the degree of ideological segregation in social-media usage.
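As a toy illustration of what "cross-ideological dissemination" means operationally, the sketch below computes the share of retweets that cross ideological lines in a handful of made-up retweet records; the ideology scores, the sign-based split, and the data are assumptions and bear no relation to the paper's ideal-point estimation from follow networks or to its results.

```python
# Toy sketch of measuring cross-ideological dissemination in retweet data; all values are invented.
import numpy as np

retweeter_ideology = np.array([-1.2, -0.5, 0.3, 0.8, -0.9, 1.1])   # < 0 liberal, > 0 conservative
author_ideology    = np.array([ 0.4,  0.7, 0.9, -0.2, -1.0, 1.3])  # ideology of the retweeted author

cross = np.sign(retweeter_ideology) != np.sign(author_ideology)     # retweet crosses ideological lines
for label, mask in [("liberal", retweeter_ideology < 0), ("conservative", retweeter_ideology > 0)]:
    print(f"{label} retweeters: {cross[mask].mean():.0%} of retweets cross ideological lines")
```

In this toy data the liberal share happens to be higher, echoing only the direction of the asymmetry described above, not its magnitude.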