Educational Psychology Review (2025) 37:43
https://doi.org/10.1007/s10648-025-10012-8
REVIEW ARTICLE
Systematic Review ofEducational Approaches
toMisinformation
MartinaA.Rau1 · AnnaE.Premo1
Accepted: 19 March 2025
© The Author(s) 2025
Abstract
Misinformation can have severe negative effects on people's decisions, behaviors, and on society at large. This creates a need to develop and evaluate educational interventions that prepare people to recognize and respond to misinformation. We systematically review 107 articles describing educational interventions across various lines of research. In characterizing existing educational interventions, this review combines a theory-driven approach with a data-driven approach. The theory-driven approach uncovered that educational interventions differ in terms of how they define misinformation and regarding which misinformation characteristics they target. The data-driven approach uncovered that educational interventions have been addressed by research on the misinformation effect, lie detection, information literacy, and fraud trainings, with each line of research yielding different types of interventions. Furthermore, this article reviews evidence about the interventions' effectiveness. Besides identifying several promising types of interventions, comparisons across different lines of research yield open questions that future research should address to identify ways to increase people's resilience towards misinformation.
Keywords Misinformation· Disinformation· Fake news· Education· Learning
Introduction
Misinformation creates severe issues for our society. Misinformation has always existed (McKernon, 1925; Posetti & Matthews, 2018), but modern media increases its reach (Greifeneder et al., 2021; Vosoughi et al., 2018). Misinformation is prevalent in social media (Baqir et al., 2024; Loveland et al., 2024), mainstream media (Tsfati et al., 2020), science (Allchin, 2023; West & Bergstrom, 2021), and education (Kendeou & Johnson, 2024).
* Martina A. Rau
martina.rau@gess.ethz.ch
¹ Department of Humanities, Social, and Political Science, Federal Institute of Technology in Zurich (ETH Zurich), Zurich, Switzerland
Misinformation can negatively impact many areas of our lives. It can skew people's political beliefs (Garrett, 2011; Mauk & Grömping, 2024) and affect decision-making related to health (Greene & Murphy, 2021; Lieneck et al., 2022) or voting (Cantarella et al., 2023). Even brief exposure to misinformation can have long-term effects (Zhu et al., 2012), even after a retraction (Chan & Albarracín, 2023; Lewandowsky et al., 2012).
Although some reviews have covered specific types of educational interventions, such as inoculation techniques (e.g., Lewandowsky & Van Der Linden, 2021) or lie detection trainings (e.g., Driskell, 2012), they do not cover the breadth of existing educational interventions to misinformation. This article seeks to close this gap, motivated by misinformation as a persistent problem of large societal impact and by calls to include misinformation in educational curricula (Schwartz, 2021).
Specifically, we examine which educational interventions seeking to reduce people's susceptibility to misinformation have been documented in the literature. To this end, we conducted a systematic literature review. We focus on educational interventions that equip participants with cognitive competencies (i.e., knowledge, skills, or strategies) that may generalize to misinformation examples not covered in the intervention and to future misinformation encounters. Our review aims to capture the fact that such interventions have been documented by a wide range of literatures. Hence, it is deliberately broad and not limited to a specific definition of misinformation or type of intervention. Thus, we included studies covering diverse topics such as memory and lie detection, as long as they matched our inclusion criteria, detailed below.
To synthesize educational interventions that emerge from different literatures, we employed an idealized model of research on educational interventions. This idealized model follows our perspective as educational psychologists. Educational psychology research assumes that a person, who may have prior knowledge about some topic, receives new information about this topic (Fig. 1A). A study may compare various interventions aimed at increasing the effectiveness of information presentation. Knowledge tests then evaluate the effectiveness of these interventions. In doing so, they account for prior knowledge and ideally assess long-term effects with a delayed posttest.
Misinformation adds a layer of complexity to educational interventions. Figure 1B shows the idealized model of a study on an educational intervention that seeks to reduce misinformation susceptibility. A person may have prior knowledge/beliefs about some topic as well as prior misinformation-detection competencies. An intervention then teaches her how to detect misinformation. This should reduce her learning of misinformation that she subsequently encounters. A final test (or multiple) determines the intervention's effectiveness, ideally while accounting for effects of both types of prior knowledge. By comparing different lines of research to this idealized model, this review reveals research gaps that future research should address.
Theoretical Background
Defining Misinformation
The term “misinformation” describes factually inaccurate information, regardless of its intent (Greifeneder et al., 2021; Pennycook & Rand, 2021). By contrast, “disinformation” or “fake news” describes misinformation published with malicious intent. Because intent is difficult to determine, this article uses the term “misinformation.”
Yet, determining whether information is inaccurate is not straightforward. Journalism uses verification approaches to determine intersubjectivity (e.g., obtaining independent but congruent information). Thus, a “fact” adheres to these standards. Lazer et al. (2018) propose that anything that “lacks the news media's editorial norms and processes” is misinformation.
Still, nuances remain (Molina et al., 2021; Zhou & Zafarani, 2018). For example, rumors are unverified claims that may or may not be accurate (Adams et al., 2023). When not recognized as such, satire and commentary can be misinformative (Berkowitz & Schwartz, 2016). Molina et al. (2021) and Wu et al. (2019) treat such types of information as gray areas because a dichotomous distinction would mischaracterize them. Hence, this review documents whether extant research uses dichotomous or nuanced definitions.
Fig. 1 Models of educational interventions, (A) in general and (B) seeking to reduce misinformation susceptibility
Features ofMisinformation
A definition does not yet enable a person to detect misinformation. Several features can signal that a piece of information is false (Molina et al., 2021). Content features include quality elements (e.g., typos), linguistic cues, and rhetorical features (Rubin & Lukoianova, 2015). For example, informal language (Zhou et al., 2004) or specific function words (Afroz et al., 2012) are associated with misinformative content (Zhou & Zafarani, 2018).
Additionally, sources of information provide diagnostic information. This includes a lack of sources and obscure or fabricated sources (Zhang et al., 2022). By contrast, a reliable source may indicate credible information (Fogg et al., 2001). Furthermore, structural features can serve as veracity indicators. These describe the delivery format of the information. For example, a URL (Mazzeo et al., 2021) or clickbait (Zannettou et al., 2019; Zeng et al., 2020) might give clues about trustworthiness.
Finally, network features describe how the information is obtained or distributed. Misinformation has different distribution patterns than accurate information (e.g., it spreads faster; Vosoughi et al., 2018) and may follow specific narratives (Del Vicario et al., 2016).

Given that these characteristics have diagnostic value, this review considers whether educational approaches specifically target them.
Research Questions
This article combines a theory-driven with a data-driven approach to categorize different types of interventions. We address the following research questions (RQ):

RQ1: Which educational interventions aiming to reduce the effects of misinformation have been documented?

RQ2: What empirical evidence exists about the effectiveness of these interventions?
Methods
Literature Search
The literature search occurred in June 2023 on EBSCO, ERIC, and PsycInfo. We chose a broad search string to access a wide range of literatures. It included two criteria. First, to capture various terms used for misinformation, we connected “misinformation,” “disinformation,” “fake news,” “deception,” and “propaganda” with OR. Second, to capture various types of educational approaches, we connected “teaching,” “education,” “instruction,” “training,” “learning,” “educational approach,” “instructional approach,” “educational intervention,” and “instructional intervention” with OR. The two criteria were connected with AND.
To broaden the search, we selected the option “apply equivalent subjects.” Finally, for quality control, we applied the filter “peer reviewed.”
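To make the query structure concrete, the two OR-groups and their AND-combination can be assembled as follows. This is a minimal illustrative sketch in Python; the article does not specify field codes or database-specific syntax, so the exact formatting is an assumption:

```python
# Terms from the two search criteria described above.
misinformation_terms = [
    "misinformation", "disinformation", "fake news", "deception", "propaganda",
]
education_terms = [
    "teaching", "education", "instruction", "training", "learning",
    "educational approach", "instructional approach",
    "educational intervention", "instructional intervention",
]

def or_group(terms):
    # Quote each term and connect the group with OR, wrapped in parentheses.
    return "(" + " OR ".join(f'"{t}"' for t in terms) + ")"

# The two criteria are connected with AND.
search_string = or_group(misinformation_terms) + " AND " + or_group(education_terms)
print(search_string)
```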
We narrowed the results from the initial search to the final sample of 107 articles (Fig. 2). Throughout, we applied the following criteria. (1) The intervention targets cognitive processes. Articles on emotional or stress-related factors, behavioral factors (e.g., substance use, sleep), and relatively stable factors (e.g., personality, attitudes, age, gender, native language) were excluded. By contrast, interventions aimed at protecting existing knowledge against misinformation were included. (2) The intervention aims to teach knowledge, skills, or strategies that might affect future misinformation encounters. We excluded studies on the impact of events after exposure to misinformation (e.g., post-warnings, retractions). (3) The intervention focuses on learning of individuals, not organizations. (4) The article describes a concrete intervention that has been implemented instead of offering opinions about the design of a potential intervention. (5) The article describes a primary study that includes an empirical evaluation of misinformation susceptibility based on pretest-to-posttest comparisons or comparisons to a control.
Coding Process
To examine which educational interventions have been documented (RQ1), we combined theory-driven and data-driven approaches. The data-driven approach captured (1) the line of research in which the educational intervention was investigated.
Fig. 2 PRISMA diagram for corpus generation
The theory-driven approach characterized the interventions based on (2) how they defined misinformation and (3) which misinformation features they targeted, aligning with the literature reviewed above.
The data-driven categories emerged from our observation that articles drew on
different theoretical perspectives, examined different misinformation scenarios, used
different methods, and investigated different research questions. These differences
seemed to align with standards of different research fields that appeared relatively
distinct. This impression was reinforced when considering referenced literature and
publication venues.
To code the articles, the first author defined the data-driven categories and discussed them with the second author. Both authors then coded all articles independently. Interrater reliability (IRR) was calculated using prevalence-adjusted bias-adjusted kappa (PABAK; Byrt et al., 1993) and was 0.93 for lines of research, 0.85 for definitions of misinformation, and 0.75 for targeted misinformation features. Both authors discussed all conflicts until they were resolved.
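For reference, PABAK has a simple closed form: with observed agreement p_o and k coding categories, PABAK = (k·p_o − 1)/(k − 1), which reduces to the familiar 2·p_o − 1 in the two-category case. A minimal sketch of the computation (our illustration, not the authors' analysis code):

```python
def pabak(ratings_a, ratings_b, k):
    """Prevalence-adjusted bias-adjusted kappa (Byrt et al., 1993).

    ratings_a, ratings_b: parallel lists of category codes from two raters.
    k: number of coding categories (k >= 2).
    """
    assert len(ratings_a) == len(ratings_b) and k >= 2
    # Observed proportion of agreement between the two raters.
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / len(ratings_a)
    return (k * p_o - 1) / (k - 1)

# Hypothetical example: a raw agreement of 0.95 across four categories
# corresponds to PABAK = (4 * 0.95 - 1) / 3, i.e., roughly 0.93.
```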
In the following, we first describe the lines of research identified from the data-driven approach. Then, for each line of research, we describe their characteristics following our theory-driven approach and offer a narrative review of the interventions (RQ1). Throughout, we discuss the empirical evidence about the effectiveness of the interventions (RQ2).
In doing so, we only focus on those interventions that match our inclusion criteria. For example, if an article described a multi-factorial experiment where intervention A matched our inclusion criteria but intervention B did not, we report only on intervention A. Unless noted in a footnote, articles focused on adults not identified as neurodiverse.
Results
The data-driven approach identified four distinct categories that examine educational interventions about misinformation from different angles. First, research on moderators of the misinformation effect is part of the literature on memory effects. The reviewed articles were predominantly lab studies where participants were exposed to information about an event, then received misleading post-event information, and finally were tested on their memory of the original event. In this context, the misinformation effect describes the extent to which misleading post-event information impedes memory accuracy regarding the original event.
Second, research on lie detection trainings is part of the deception literature. This category subsumes research on deception from a child developmental perspective as well as from an investigative psychology perspective. The reviewed articles encompass lab and field studies about trainings for deception detection based on verbal or nonverbal cues.
Third, research on information-literacy trainings falls into the field of curriculum and instruction. The reviewed articles include lab and field studies that examined the effects of comprehensive curricula or specific trainings, both of which taught knowledge and skills about how to identify and handle misinformation.
Fourth, fraud trainings focus on malignant actions. This line of research stands at the intersection of literatures on human behavior and information technologies. The articles include lab and field studies about trainings aiming to help people recognize fraud.

Finally, an “other” category includes singular articles of which there were too few to warrant a separate category. Figure 3 shows how many articles fall into each data-driven and theory-driven category, the latter of which will be discussed below for each line of research.
Moderators oftheMisinformation Effect
Forty of the 107 articles described educational interventions that sought to moderate the misinformation effect. A typical experiment involved three phases (Fig. 4). First, participants received “true” information. Second, they received post-event information. Studies manipulated whether subsequent information was misinformation (i.e., conflicting with the original information). Third, participants received a final memory test. The misinformation effect means that misinformation impedes performance on the final memory test, either relative to original information that was not contradicted (in a within-subjects design) or relative to participants not exposed to misinformation (in a between-subjects design).
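Schematically, the effect can be quantified as an accuracy difference on the final memory test. The notation below is our illustrative addition, not the articles' own; the reviewed studies operationalize the contrast in design-specific ways:

```latex
\mathrm{ME}_{\text{within}}  = \mathrm{Acc}(\text{control items}) - \mathrm{Acc}(\text{misled items})
\mathrm{ME}_{\text{between}} = \mathrm{Acc}(\text{unexposed group}) - \mathrm{Acc}(\text{misinformation group})
```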
Several theoretical accounts exist for the misinformation effect, and likely, multiple mechanisms are at play (Loftus, 2005).
Fig. 3 Distribution of articles across theory- and data-driven categories. While nuanced vs. dichotomous definitions were mutually exclusive, misinformation features were not
Fig. 4 Structure of studies on moderators of the misinformation effect
In brief, in a misinformation experiment, participants receive information from different sources. Consequently, their (in)ability to discriminate and monitor memory from different sources is relevant to explaining the misinformation effect (source-monitoring framework; Bulevich et al., 2022; Chan et al., 2012). Likewise, the strength and integrity of memory for the original and post-event information impact the likelihood that participants remember one over the other (trace-integrity framework; Marche, 1999; Marche & Howe, 1995). Additionally, the way in which memory is stored is relevant. Participants may internally represent the gist of information or its verbatim form, and often hold both in parallel (fuzzy-trace theory; Marche et al., 2002; Pansky & Tenenboim, 2011).
Figure 4 serves to compare the logic of studies of moderators of the misinfor-
mation effect to the idealized logic of educational interventions to misinformation
(Fig.1B) What stands out is that the reviewed studies manipulated prior knowledge
about the given topic by presenting original information to participants. This is logi-
cal in the context of these studies, which generally focused on a fictional event that
participants knew nothing about. By contrast, they considered prior misinformation-
detection competencies as irrelevant. This makes sense because this line of research
focuses on interventions that affect participants’ memory for the original informa-
tion rather than on strategies of misinformation detection. Finally, while about half
of the reviewed studies included only an immediate posttest, the other half included
a delayed posttest but no immediate posttest (see Appendix Table3).
In line with the predominant study setup, the articles used a dichotomous definition of misinformation, considering the original information as “true” and contradictory post-event information as “false” (Fig. 3). Furthermore, interventions generally did not target specific misinformation features. Only three targeted specific misinformation features (content features; Huff & Umanath, 2018; Luke et al., 2017; Umanath et al., 2019).
Narrative Review
Several types of interventions emerged from our review of articles on the misinformation effect. We distinguish retrieval practice, cognitive interviews, pre-warnings, and attention focusing. While some articles described combinations of these interventions, we describe their characteristics separately and comment on joint effects where applicable.
Retrieval Practice Of the 40 articles on the misinformation effect, 19 examined retrieval practice. These articles draw on evidence that retrieval practice strengthens people's memory. Hence, they examined whether taking a practice test affects participants' susceptibility to misinformation.
The types of practice tests differed in terms of format (Table 1). Most of the articles included a cued-recall test in which participants answered questions about the original information. Other studies used a cued-recall format where participants completed sentences by filling in details based on their memory or did not specify its format. Additionally, some articles used free-recall tests with instructions to write a detailed summary of the original information.
Furthermore, one article compared a test in which participants gave a one-word answer to a version where they additionally answered a follow-up question about specific details. Finally, in some studies, participants provided confidence ratings for each answer.
A further difference regarded whether the tests were timed or not (Table 1). While several studies gave participants a time limit for each question (e.g., 20 s per question), others gave longer intervals for completion of the entire test (e.g., 30 min for a free-recall test). Yet other articles did not restrict time or did not indicate whether they did.
A final characteristic of the practice tests was that most did not involve feedback. Three studies constitute an exception (Table 1). They compared a one-trial test without feedback to a criterion-learning test where participants received the original information again until they correctly answered all questions twice in a row. Repeating the original information after the test can be viewed as a form of feedback.
The results of most articles point towards negative effects of retrieval practice; that is, taking a practice test increased participants' susceptibility to misinformation (known as retrieval-enhanced suggestibility, RES). Specifically, while participants had better memory for control items that had not been contradicted by misinformation (i.e., showing a beneficial effect of testing), testing decreased their memory for items that had been contradicted, resulting in increased misinformation endorsement (Bulevich et al., 2022; Chan & Langley, 2011; Chan & LaPaglia, 2011; Chan et al., 2012, 2017, 2022; Gordon & Thomas, 2017; LaPaglia & Chan, 2013; LaPaglia et al., 2014; Manley & Chan, 2019; Rindal et al., 2016).
Table 1 Types of retrieval-practice interventions

Cued-recall, question (time limit: yes). No feedback: Chan & Langley, 2011; Chan & LaPaglia, 2011; Chan et al., 2012; Chan et al., 2017; LaPaglia & Chan, 2013; Manley & Chan, 2019; Mullet et al., 2014; Pereverseff et al., 2020. Varied feedback: Marche & Howe, 1995¹
Cued-recall, question (time limit: no). No feedback: Mullet et al., 2014; Pansky & Tenenboim, 2011; Rindal et al., 2016; Wang et al., 2014. Varied feedback: Marche, 1999¹; Marche et al., 2002¹
Cued-recall, fill-in-the-gap (time limit: yes). Bulevich et al., 2022
Cued-recall, unspecified (time limit: yes). Chan et al., 2022
Cued-recall, unspecified (time limit: no). Gordon & Thomas, 2017
Free-recall (time limit: yes). LaPaglia et al., 2014
Free-recall (time limit: no). Szpitalak et al., 2021; Wang et al., 2014
Special features. Detailed vs. one-word answer: Pansky & Tenenboim, 2011. Confidence ratings: Chan et al., 2022; Chan et al., 2012; Gordon & Thomas, 2017; Mullet et al., 2014
etal., 2014; Manley & Chan, 2019; Rindal etal., 2016). Testing also increased par-
ticipants’ tendency to believe the misinformation had been presented during the
first phase of the study; that is, they confused it with the original information (Chan
etal., 2012). Finally, Szpitalak etal. (2021) gave all participants a practice test and
observed the misinformation effect nevertheless.
Several articles identified boundary conditions of the RES effect. We only report on those related to the nature of the practice test rather than study design. Mullet et al. (2014) found that the RES effect was less pronounced when participants received the practice questions along with the answers, as opposed to asking questions without providing answers. Furthermore, Pansky and Tenenboim (2011) found an RES effect for gist testing, whereas verbatim testing reduced suggestibility, hence showing a protective effect of testing.
A few articles reported null effects. Pereverseff et al. (2020) found overall null effects, although an analysis conditioned on items that had not been recalled during the practice test identified a protective effect of testing. Wang et al. (2014) found no differences between a control, a free-recall, and a cued-recall practice test.
Some studies yielded positive effects of testing. Besides verbatim testing (Pansky & Tenenboim, 2011), criterion-testing reduced participants' susceptibility to misinformation compared to one-trial tests (Marche, 1999; Marche & Howe, 1995; Marche et al., 2002).¹ Additionally, criterion-testing reduced participants' likelihood of believing that misinformation had been presented in the first study phase (Marche et al., 2002).
The articles offer several possible explanations for the RES effect. One explanation relates to test-potentiated learning; namely, that retrieval practice enhances learning of new information. Testing might improve participants' encoding strategies, meta-cognitive functioning, and context isolation (e.g., Chan et al., 2017). Furthermore, testing might focus participants' attention on content queried by the practice test (e.g., Chan & Langley, 2011). This might lead to surprise when they encounter misinformation, making it more memorable (Chan & LaPaglia, 2011). In addition, context discrimination may be relevant. According to Chan et al. (2012), the effects of retrieval practice may depend on participants perceiving the contexts in which original information and misinformation are presented as related. If they do, they may update their original memory accordingly, resulting in RES.
Explanations offered by articles that found positive effects did not necessarily conflict with these proposed mechanisms. According to Marche and Howe (1995), who invoke the trace-integrity model, susceptibility to misinformation depends on the extent of initial learning and of forgetting (also see Marche, 1999; Marche et al., 2002). More weakly encoded information is more prone to misinformation. Additionally, Marche et al. (2002) and Pansky and Tenenboim (2011) invoke fuzzy-trace theory. Gist representations of events may be more malleable because they contain fewer details. If misinformation fills gaps in gist representations, RES results.
¹ These articles focused on pre-school children.
Yet, when participants have a verbatim representation that includes exact information, they may resist misinformation.
Cognitive Interviews An approach related to retrieval practice involves the use of interviews to enhance participants' memory. Five articles focused on cognitive interviews, often used by crime investigators to collect testimonial evidence from witnesses. Such interviews have been examined in the context of the misinformation effect because witnesses might be exposed to misinformation after the witnessed event (McPhee et al., 2014; Memon et al., 2010).
Most cognitive interviews included several components: establishing rapport with the interviewee, giving interviewees permission to respond “I don't know” where appropriate, and mnemonic strategies. Three of the five articles involved all these components (Holliday, 2003a², 2003b³; Memon et al., 2010). One article additionally asked interviewees to draw a sketch while providing an accompanying narrative (LaPaglia et al., 2014). Finally, McPhee et al. (2014) compared a verbal cognitive interview to a self-report booklet that guided participants through a written interview procedure. Both included mnemonics and sketching.
The results were mixed. LaPaglia et al. (2014) found that a cognitive interview yielded a slightly more pronounced RES effect than a free-recall test, hence indicating a negative effect. By contrast, Holliday (2003a, 2003b) found null effects of cognitive interviews compared to other standardized interviews. Then again, two studies found a protective effect of cognitive interviews compared to interviewed controls (McPhee et al., 2014; Memon et al., 2010), regardless of written or spoken modality (McPhee et al., 2014).
To explain these effects, the articles propose mechanisms similar to those used to explain retrieval practice. In a nutshell, the articles agree that cognitive interviews strengthen participants' memory of original information. However, some propose that strengthened memory leads participants to notice misinformation, hence making them less susceptible (McPhee et al., 2014; Memon et al., 2010). By contrast, others believe strengthened memory draws participants' attention to unexpected misinformation, leading them to process it more deeply and making them more susceptible (LaPaglia et al., 2014).
Pre-warnings Rather than targeting memory processes, other interventions focused on the encoding of post-event information. One option is to warn participants about misinformation that they may subsequently receive. Thirteen articles included such pre-warnings.
² This article focused on children aged 4–8.
³ This article focused on children aged 4–10.
The warnings differed in specificity. Most articles described general warnings that informed participants that there might be inaccuracies or differences relative to the original information (Bailey et al., 2021; Brown et al., 1999; Fazio et al., 2013, 2015; Greene et al., 1982; Huff & Umanath, 2018; Luke et al., 2017; Mullet et al., 2014; Neil et al., 2021; Salovich & Rapp, 2022). Other articles described specific warnings, such as information about the source's intent to deceive or false beliefs (Chan & Okamoto, 2006)⁴ or about suggestive questions with examples and specific information about where to look for those questions (Luke et al., 2017). A somewhat different approach was to give participants feedback about their susceptibility and to tell them that their susceptibility would be monitored (Salovich & Rapp, 2022). This article also examined staged feedback that informed participants about their susceptibility being above or below average. Some studies compared warnings to attention-focusing interventions, described below.
The results of most studies point towards null effects of general warnings, based on comparisons to controls (Bailey et al., 2021; Luke et al., 2017) or because a misinformation effect occurred despite participants having been warned (Brown et al., 1999; Fazio et al., 2013, 2015; Huff & Umanath, 2018; Mullet et al., 2014). However, in one study, warned participants exhibited a weaker misinformation effect than unwarned ones (Greene et al., 1982). Furthermore, one study found a benefit of a general warning for participants who had detected discrepancies between original and post-event information (Neil et al., 2021).
With respect to specific warnings, results were mixed. While Chan and Okamoto (2006) found a positive effect of a specific warning, Luke et al. (2017) found null effects. By contrast, the approach by Salovich and Rapp (2022) of telling participants that their susceptibility would be monitored, combined with feedback about their susceptibility, proved effective. Especially negative feedback, informing participants that they were more susceptible than average, reduced their susceptibility.
The articles that observed positive effects of warnings converged on increased scrutiny as a likely explanation. Warned participants may be more motivated to monitor and evaluate the information they receive (Salovich & Rapp, 2022) and may process new information more deeply (Greene et al., 1982). Furthermore, they may use information from the warning provided to reason about discrepancies they may notice (Chan & Okamoto, 2006).
Articles that found null effects motivated the use of warnings similarly, suggesting that warnings might slow participants down (Bailey et al., 2021) and enhance source monitoring (Calvillo & Parong, 2016). Yet, they acknowledged that warnings failed to do so (Huff & Umanath, 2018), possibly because participants did not read the warnings carefully (Bailey et al., 2021), because they failed to use their knowledge to detect discrepancies (Fazio et al., 2013), or due to cognitive load limitations (Neil et al., 2021).
⁴ This article focused on kindergarteners.
Attention Focusing In line with the idea that discrepancy detection is key to resistance to misinformation, eight articles targeted this mechanism through attention-focusing interventions. The interventions differed in how explicitly they focused participants' attention on discrepancies. Implicit interventions included two mindfulness interventions (Alberts et al., 2017; Qi et al., 2018), which were described as a training of participants' ability to attend closely to their current experiences. Other interventions were less implicit but nonetheless did not explicitly disclose their focus on discrepancy detection. This involved instructions to read a text slowly (Tousignant et al., 1986) or to highlight important details (Blank et al., 2022). Explicit types of interventions asked participants to press a button when noticing discrepancies (Huff & Umanath, 2018; Putnam et al., 2017; Umanath et al., 2019). Others directly asked participants to judge the veracity of the information (Bailey et al., 2021).
Results were sobering for the mindfulness interventions. Alberts et al. (2017) found null effects on participants' susceptibility. By contrast, Qi et al. (2018) found that their mindfulness intervention increased susceptibility to misinformation.

The other two broad types of attention-focusing interventions yielded positive findings. Tousignant et al. (1986) found that participants who were instructed to read slowly detected more discrepancies than those told to read quickly. The former also had a more accurate memory of the original information. Blank et al. (2022) found a reduced misinformation effect for items for which many participants had noticed discrepancies.
Results for the explicit attention-focusing interventions were largely positive. Instructions to indicate discrepancies were effective (Putnam et al., 2017), although one study found only marginal benefits (Umanath et al., 2019). Huff and Umanath (2018) found benefits only when discrepancy-detection instructions were combined with examples. Finally, Bailey et al. (2021) found positive effects of veracity-judgment instructions.
The explanations offered for these effects differed between the mindfulness interventions and other interventions. A mindful focus might enhance encoding and memory of misinformation (Alberts et al., 2017). Furthermore, mindfulness promotes a nonjudgmental attitude that may backfire in the context of misinformation (Qi et al., 2018). For the remaining interventions, the authors suggested that slower reading times enhance scrutiny, which in turn enhances discrepancy detection and resistance to misinformation (Bailey et al., 2021; Tousignant et al., 1986). Furthermore, when noticing discrepancies, participants naturally review the original memory they are comparing new information to, which potentially strengthens this memory (Putnam et al., 2017). Overall, discrepancy detection seems to undermine the credibility of subsequent information (Blank et al., 2022) via a “detect-and-reject” process (Umanath et al., 2019).
Discussion
A common theme among the reviewed studies on the misinformation effect was that successful interventions enhanced the likelihood that participants noticed discrepancies between original information and misinformation.
While retrieval practice did not per se seem suited to achieve this goal, there were exceptions (i.e., detail-focused testing, criterion testing). Hence, well-practiced, detailed knowledge might help participants discriminate between the contexts in which they received different pieces of information. This in turn might make them more resilient to misinformation. In this context, we note that surprisingly few of the reviewed studies included feedback in their retrieval-practice interventions. This choice contrasts with the benefits of feedback for retrieval practice, which are well-documented by educational psychology research (e.g., Schwieren et al., 2017). Yet, our review suggests that specific pre-warnings and explicit types of attention-focusing interventions might be better suited to enhance discrepancy detection and reduce susceptibility to misinformation. Nonetheless, it would be interesting to test combinations of these different types of interventions.
Lie Detection Trainings
Of the 107 reviewed articles, 30 focused on lie detection trainings. A typical study teaches participants how to detect lies based on cues (Fig. 5). To evaluate the intervention, participants then observe a speaker who tells some lies. In the end, they must identify the lies. While the studies assume that participants have no prior knowledge relevant to the speaker's statement, some assessed participants' prior misinformation-detection competencies.
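Because the test items mix lies and truths, performance can be described by two separate accuracies. The notation below is our illustrative addition (the reviewed articles vary in their exact measures); the distinction matters later, where some trainings improved truth detection but not lie detection:

```latex
\text{lie accuracy}   = \frac{\#\,\text{lies judged to be lies}}{\#\,\text{lies presented}}, \qquad
\text{truth accuracy} = \frac{\#\,\text{truths judged to be truths}}{\#\,\text{truths presented}}
```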
Research on lie detection builds on the idea that liars behave differently than truth-tellers (ten Brinke et al., 2016) due to emotional arousal and cognitive load while lying (Vrij et al., 2015, 2016). Lie detection trainings offer instruction for people to consciously detect such differences (Fiedler & Walka, 1993; ten Brinke et al., 2016).

To train people in lie detection, interventions may focus on nonverbal or verbal types of deception cues. Example nonverbal cues are eye aversion, fidgeting, and involuntary emotional expressions (Ekman & Friesen, 1969). Yet, research shows that nonverbal behaviors are mostly unreliable (DePaulo et al., 2003; Luke, 2019).
Fig. 5 Structure of lie detection training studies
By contrast, verbal cues result from cognitive strategies liars use to fabricate lies and track them (Vrij, 2019). They can relate to content (e.g., consistency of statements) or not (e.g., using fillers like “em”). Verbal cues are more reliable than nonverbal ones (DePaulo et al., 2003; Vrij, 2014, 2019).
When comparing lie detection studies to our idealized model (Figs. 1B and 5), differences become apparent. Lie detection studies consider prior knowledge/beliefs about a topic irrelevant. This makes sense considering their study designs. Many lie detection studies use stimulus videotapes of interviewees as training and/or test materials. Here, interviewees are instructed to lie or tell the truth. Furthermore, while all studies included an immediate posttest, only one additionally included a delayed posttest (Ranick et al., 2013).
The common study setup implies a dichotomous definition of misinformation, held by all reviewed articles (Fig. 3). Furthermore, Fig. 3 illustrates that lie detection trainings target particular misinformation features, specifically content features and structural features. This aligns with their focus on verbal and nonverbal cues. We coded verbal cues related to content as content features, and both verbal cues unrelated to content (e.g., fillers like “em”) and nonverbal cues as structural features. Content features were targeted by 22 of the articles, structural features by 23 articles. Four did not target specific misinformation features.
Narrative Review
Several types of interventions emerged from our review. Specifically, we distinguish
between deception-cue instructions, comprehensive trainings, and facial-expression
trainings.
Deception-Cue Instructions Of the 30 articles, 17 provided instructions on deception detection based on specific cues. The interventions targeted different types of cues (Table 2). Most included instructions to focus on verbal cues; several included instructions to focus on specific nonverbal cues. Some interventions told participants to ignore nonverbal cues because they are unreliable. One article instructed participants either to mimic or not to mimic nonverbal cues.
A further characteristic of several interventions was the inclusion of examples of targeted cues. In one article, participants were told to compare cues displayed during critical statements to baseline statements that were verifiably correct. Several interventions included practice with feedback. Finally, in two articles, the intervention was carried out in the form of the VERITAS game. Players practice interviewing and receive feedback on cues they used to judge veracity.
Several articles reported positive effects. Attending to verbal cues improved lie detection relative to controls in several studies (DePaulo et al., 1982; Fiedler & Walka, 1993; Masip et al., 2018; Santarcangelo et al., 2004), regardless of whether participants practiced with feedback (Fiedler & Walka, 1993). Geiselman et al. (2013) and Porter et al. (2000) found that instructions on verbal and nonverbal cues improved lie detection compared to a control. Yet, Porter et al. (2000) found that a practice-with-feedback group that received no instruction on cues performed equally well.
Table 2 Types of deception-cue instructions

Verbal cues
  Baseline: yes; examples: yes; feedback: yes. Köhnken, 1987
  Baseline: no; examples: yes; feedback: yes. Porter et al., 2000; Stanley & Webster, 2019
  Baseline: no; examples: yes; feedback: no. Kassin & Fong, 1999; Levine et al., 2005; Santarcangelo et al., 2004
  Baseline: no; examples: no; feedback: yes. Dunbar et al., 2018; Fiedler & Walka, 1993; Miller et al., 2019
  Baseline: no; examples: no; feedback: no. Bogaard & Meijer, 2022; Culbertson et al., 2016; DePaulo et al., 1982; Fiedler & Walka, 1993; Geiselman et al., 2013; Masip et al., 2018; Tetterton & Warren, 2005
Nonverbal cues (focus)
  Baseline: yes; examples: yes; feedback: yes. Porter et al., 2000
  Baseline: yes; examples: yes; feedback: no. Köhnken, 1987
Nonverbal cues (ignore)
  Baseline: yes; examples: yes; feedback: no. Kassin & Fong, 1999; Levine et al., 2005; Santarcangelo et al., 2004
  Baseline: yes; examples: no; feedback: yes. Dunbar et al., 2018; Fiedler & Walka, 1993; Miller et al., 2019
  Baseline: yes; examples: no; feedback: no. Culbertson et al., 2016; DePaulo et al., 1982; Fiedler & Walka, 1993; Geiselman et al., 2013; Stel et al., 2009; Tetterton & Warren, 2005
  Baseline: no; examples: yes; feedback: no. Dickhäuser et al., 2012
  Baseline: no; examples: no; feedback: no. Bogaard & Meijer, 2022
Special features
  Instructions to mimic or not to mimic: Stel et al., 2009
  Game implementation: Dunbar et al., 2018; Miller et al., 2019
For nonverbal cues, Stel et al. (2009) found that instructions to attend to but not to mimic these cues improved lie detection over a control.

However, many studies yielded null effects. Instructions on verbal and nonverbal cues yielded no advantage over a control (Culbertson et al., 2016). Likewise, several studies found no benefits of instruction on verbal cues (Köhnken, 1987; Santarcangelo et al., 2004). Similarly, instructions to attend to nonverbal cues proved ineffective in several studies (DePaulo et al., 1982; Santarcangelo et al., 2004), as did instructions to mimic nonverbal cues (Stel et al., 2009). Also, instructions to ignore nonverbal cues proved no better than controls (Bogaard & Meijer, 2022; Dickhäuser et al., 2012; Tetterton & Warren, 2005), even if participants were told to instead focus on verbal cues (Bogaard & Meijer, 2022).
Additionally, there was evidence of negative effects of deception-cue instructions. Kassin and Fong (1999) found that instruction with examples on verbal cues resulted in worse lie detection performance than no instruction. Likewise, Stanley and Webster (2019) found that a training on verbal cues led to no pretest-to-posttest gains, whereas the control improved. Similarly, Levine et al. (2005) found that a training on verbal and nonverbal features reduced lie detection performance relative to a control. However, the authors updated the cues covered in the training based on their findings. The updated training then enhanced lie detection performance relative to the control.
Findings for the game implementations were also mixed. While VERITAS yielded significant pre-post gains in one study (Dunbar et al., 2018), it improved only truth-detection skills in another (Miller et al., 2019). Furthermore, while the game improved truth-detection skills relative to a lecture on the same cues, it yielded lower deception-detection skills (Dunbar et al., 2018). However, a follow-up study covered in the same article found null effects.
To explain the identified positive effects, the reviewed articles suggested that people lack knowledge about diagnostic cues (Fiedler & Walka, 1993). Furthermore, because people rarely have the occasion to consciously observe deceptive communication, they may particularly benefit from instruction (DePaulo et al., 1982).
To explain null or negative effects, the articles offered several explanations. First, it is difficult to ignore stereotypical cues even when receiving explicit instruction about diagnostic cues (Bogaard & Meijer, 2022). Second, instruction about diagnostic cues may be insufficient because deception needs to be judged based on a collective pattern of indicators (Geiselman et al., 2013). Finally, such patterns may differ between people (Köhnken, 1987).
Comprehensive Trainings Nine articles described comprehensive trainings, often for police officers. While some lasted a few hours (Colwell et al., 2012; Crews et al., 2007; Porter et al., 2010; Sooniste et al., 2017), others took a full day (Shaw et al., 2011; Vrij et al., 2015, 2016) or more (Matsumoto et al., 2014; Porter et al., 2000). All trainings included practice opportunities.

A further characteristic of comprehensive lie detection trainings regarded their focus on a combination of verbal and nonverbal cues (Crews et al., 2007; Matsumoto et al., 2014; Porter et al., 2000, 2010; Shaw et al., 2011; Sooniste et al., 2017).
However, two emphasized the importance of relying on verbal rather than nonverbal cues (Vrij et al., 2015, 2016). Only Colwell et al. (2012) focused solely on verbal cues.

A few trainings added specific foci. Three covered interviewing techniques to elicit diagnostic statements (Sooniste et al., 2017; Vrij et al., 2015, 2016). Additionally, Shaw et al. (2011) told participants to compare interviewees' behaviors to a baseline. Furthermore, Crews et al. (2007) described in-person and web-based versions of their training. Finally, Porter et al. (2000) asked for participants' rationales for judgments during practice.
Most studies reported positive findings, either based on comparisons to controls (Porter et al., 2000; Vrij et al., 2016) or pretest-to-posttest gains (Colwell et al., 2012; Crews et al., 2007; Matsumoto et al., 2014; Porter et al., 2010; Shaw et al., 2011; Vrij et al., 2015). Crews et al. (2007) found that the web-based version of their training was just as effective as the in-person version. Furthermore, Porter et al. (2000) found no differences between their comprehensive training and the deception-cue instructions described above. Only Sooniste et al. (2017) found no differences between training and control.
The rationales offered for these findings largely highlight that people often hold inaccurate beliefs about deception cues (Colwell et al., 2012). By contrast, trained participants used cues and asked questions in accordance with the training (Colwell et al., 2012; Porter et al., 2000; Vrij et al., 2016), which likely explains the positive effects.
Facial-Expression Trainings Three of the 30 articles described facial-expression trainings, following Ekman and Friesen's (1969) facial action coding system. Galin and Thorn (1993) used an older version of the training with videos and photographs while providing instruction on facial movements. By contrast, Jordan et al. (2019) and Stanley and Webster (2019) used Ekman's web-based training tool. Both included examples and practice with feedback.
Results were unconvincing. Jordan et al. (2019) and Galin and Thorn (1993) found no benefits over a control (in the latter case regarding detection of posed pain only). Conversely, Stanley and Webster (2019) found a pretest-to-posttest decline, whereas a control improved. Furthermore, the facial-expression training performed no better than Stanley and Webster's (2019) deception-cue training on verbal cues (see above). Finally, participants who practiced with feedback but received no training on facial expressions were better able to detect posed and genuine pain than those who received this training (Galin & Thorn, 1993).
To explain these findings, the articles mentioned that the material might have been overwhelming (Galin & Thorn, 1993). Furthermore, displays of emotions may not be indicative of deception (Jordan et al., 2019). Additionally, deception detection might be more successful when participants employ a holistic, instinctive strategy (Stanley & Webster, 2019).
Other Three articles were distinct from the types of lie detection trainings just described. Two aimed at helping children understand that others might have goals and thoughts that differ from one's own, which might lead them to intentionally deceive the child. Ding et al. (2022)⁵ taught children to deceive a puppet during a hide-and-seek task and praised them when they were successful. Ranick et al. (2013)⁶ described personal training sessions that taught three autistic children about people's motivation to lie. During conversation, the trainer praised children for recognizing deception or asked leading questions to guide the child towards deception detection. Ding et al. (2022) found benefits for the intervention compared to a control. Ranick et al. (2013) documented pretest-to-posttest improvements that persisted one month after training. The authors attributed the positive findings to children's improved theory-of-mind ability (Ding et al., 2022; Ranick et al., 2013).
ten Brinke et al. (2019) instructed participants to attend to their physiologi-
cal reactions to emotion-inducing videos. Participants received explanations of
the underlying mechanisms and reflected on their experiences. The intervention
improved lie detection skills relative to a control. The authors argue that deception
triggers automatic reactions in viewers’ body and mind. Detecting these reactions
can be trained by improving introspective access.
Discussion
Altogether, the reviewed deception-cue instructions and facial-expression trainings were largely ineffective at reducing susceptibility to misinformation, regardless of the targeted types of cues or pedagogical devices. By contrast, the reviewed comprehensive trainings seemed to have promise, although these were mostly evaluated based on pretest-to-posttest rather than control-group comparisons. These mixed findings might result from lie detection requiring an analysis of idiosyncratic patterns of cues rather than of single cues. The intervention that focused participants on their own physiological experiences as well as the theory-of-mind interventions for children may further highlight the importance of a holistic approach to lie detection. In these latter interventions, participants were not asked to focus on specific cues but rather to judge the entire experience. Future research might explore ways to attune participants to patterns that characterize deceptive scenarios as a whole.
Information-Literacy Trainings

Of the 107 articles, 23 covered interventions for information literacy. Some authors distinguish information literacy, media literacy, news literacy, and digital literacy (e.g., Apuke & Gever, 2023). Still, they all assume that news consumers need a diverse set of critical-thinking skills to understand, evaluate, interact with, and create information. To simplify, we refer to trainings that aim to achieve these goals as information-literacy trainings.
⁵ This article focused on children aged 3–4.
⁶ This article focused on autistic children aged 6–9.
When comparing studies on information-literacy trainings to the idealized model (Figs. 1B and 6), it becomes clear that this is one of only two lines of research that considered all components of the idealized model. However, most studies considered only one type of prior knowledge. Some only captured prior knowledge/beliefs about the topic, others only prior misinformation-detection competencies (see Appendix Table A1). Only two studies considered both types of prior knowledge (Brodsky et al., 2021; Tseng et al., 2021). Furthermore, only two studies conducted both immediate and delayed posttests (Al Zou'bi, 2022; Osborn, 1939). Thus, no study incorporated all components of the idealized model.
While most articles defined misinformation as false information, about a third presented nuanced definitions of misinformation, including, for instance, satirical news or rumors (Fig. 3). This use of nuanced definitions stands in contrast to all other lines of research but aligns with the focus on complex information scenarios in such trainings. Also in contrast to other lines of research, all 23 articles targeted specific features of misinformation. Content features, features of sources, and structural features were most common. Network features were targeted by two articles, the only two in our corpus.
Narrative Review
Our review uncovered several types of information-literacy interventions. We distinguish comprehensive curricula, feature-focused trainings, inoculation, and games.
Comprehensive Curricula Of the 23 articles, seven described comprehensive curricula that covered a broad range of content and skills. The curricula differed in what misinformation features they targeted (e.g., clickbait headlines, lack of sources) and their delivery format (e.g., social media, news, email). A further difference lay in how they problematized misinformation. Some spoke of misinformation as a challenge for civic competence (Apuke et al., 2023; Geers et al., 2020; Murrock et al., 2018).
Fig. 6 Structure of information-literacy training studies
etal., 2020; Murrock etal., 2018). Others considered it a global social challenge
(Apuke & Gever, 2023; Osborn, 1939; Scheibenzuber etal., 2021; Tseng etal.,
2021).
All curricula stressed specific content features, such as taking a single viewpoint (Murrock et al., 2018), overconfident claims or omissions (ibid.), ambiguous or emotional language (Osborn, 1939), or clickbait (Apuke et al., 2023). Others covered bias, for instance by the media (Murrock et al., 2018) or readers themselves (Scheibenzuber et al., 2021).
The curricula also differed in terms of the detection strategies for which they provided instruction. One common strategy centered on features of sources, such as considering sources (Apuke & Gever, 2023; Apuke et al., 2023; Murrock et al., 2018; Scheibenzuber et al., 2021; Tseng et al., 2021) and engaging in fact-checking or lateral reading (Apuke & Gever, 2023; Murrock et al., 2018; Scheibenzuber et al., 2021). Similarly, Geers et al. (2020)⁷ and Scheibenzuber et al. (2021) considered aspects of spread, attending to filter bubbles and echo chambers. One curriculum covered epistemological issues such as how knowledge is constructed and communicated (Tseng et al., 2021).⁸ It further focused on the use of scientific evidence or theory to back up claims.
Overall, the reviewed curricula yielded mixed results. Several studies reported improved misinformation detection compared to controls (Apuke & Gever, 2023; Apuke et al., 2023; Murrock et al., 2018) and pretest-to-posttest gains in credibility judgments (Scheibenzuber et al., 2021). Two studies identified further advantages in verification skills, sharing intentions, and social-media knowledge (Apuke & Gever, 2023; Apuke et al., 2023).
Two studies reported partially positive findings. Geers et al. (2020) found that
study participants’ political knowledge and media literacy improved. However, in
a comparison of participants who joined some versus all training components, they
found that the latter outperformed the former in political knowledge but not media
literacy. Similarly, Osborn’s (1939)9 curriculum reduced participants’ average sus-
ceptibility to misinformation and increased knowledge about propaganda compared
to a control, but not for all participants.
By contrast, only one study found null effects. Tseng et al.'s (2021) curriculum on
epistemic vigilance in scientific reading failed to outperform a control.
To explain positive effects, authors suggest that critical thinking and knowledge about misinformation reduce participants' reliance on their first emotional or intuitive reaction. Instead, they assess the quality of information, leading them to spot
misinformation. An implicit assumption is that spotting misinformation reduces its
impact on a person’s beliefs.
7 This article focused on vocational school students aged 16–26
8 This article focused on students in grades 9–12
9 This article focused on students in grades 11–12
In explaining mixed effects, the authors referred to participants' prior beliefs. While all studies on comprehensive curricula accounted for some aspects of prior knowledge, two foregrounded both prior knowledge and beliefs/attitudes in the design of their intervention (Osborn, 1939; Tseng et al., 2021). Mixed effects might reflect both the impact and challenge of explicitly designing interventions that attend to attitudes in addition to knowledge. A further explanation for the mixed results relates to attrition and fidelity issues due to the comprehensive nature of the interventions (Geers et al., 2020; Green et al., 2022; Tseng et al., 2021). This explanation highlights the importance of attending to participants' experiences.
Feature-Focused Trainings Ten of the 23 articles described interventions with specialized rather than comprehensive foci. The interventions took different pedagogical approaches. Some used humor to teach cues of satirical news (Prichard & Rucynski Jr., 2019). Others used propaganda videos to teach about cinematic techniques (Merkt & Sochatzy, 2015)10 and persuasion techniques (Simpson, 2008). Some focused on information sources, covering verification, fact-checking, or lateral reading strategies (Al Zou'bi, 2022; Brodsky et al., 2021; Domgaard & Park, 2021; Eng et al., 2021; Filho et al., 2023; McGrew & Chinoy, 2022; Motz et al., 2022).
Results for humor-based and propaganda-focused interventions were positive.
Humor competence training improved participants’ ability to identify satirical news
(Prichard & Rucynski Jr., 2019). Similarly, using propaganda videos yielded posi-
tive effects compared to controls in terms of cinematic-technique identification and
interpretation (Merkt & Sochatzy, 2015) and knowledge about persuasion tech-
niques (Simpson, 2008).
The source-focused interventions were also positively evaluated. Eng et al.
(2021) found that a training that combined example quotes and images improved
risk perceptions, information seeking, and intentions to discuss misinformation
with others more so than a training without these components. However, it did
not affect declarative knowledge about misinformation. Motz et al. (2022) showed
that direct instruction with inductive practice improved participants’ fallacy
detection compared to a control. Furthermore, several studies found pretest-to-
posttest gains in misinformation detection (Al Zou’bi, 2022), source-evaluation
knowledge (McGrew & Chinoy, 2022), and fact-checking knowledge (Brodsky
et al., 2021). Additionally, Domgaard and Park (2021) and Filho etal. (2023)
showed that full trainings led to more robust gains in credibility judgments than
trainings that excluded some components.
Authors offered explanations for positive effects similar to those for comprehensive curricula, stressing that information literacy helps participants detect misinformation. Furthermore, they suggested that meeting the needs of the participants
(e.g., cognitive capacity, prior knowledge/attitudes) increases the utility of
knowledge gains for misinformation detection. Yet, while all feature-focused
trainings appeared effective, some authors cautioned that the lack of a delayed posttest limits the generalizability of their findings over time (e.g., Apuke & Gever, 2023; Iyengar et al., 2023). This applies to many of the reviewed studies, as discussed below. Additionally, Prichard and Rucynski Jr. (2019) noted that their training made participants overall more skeptical of news, warranted or not.

10 This article focused on students in grade 9
Inoculation Interventions Three of the 23 articles described interventions grounded in inoculation theory (Banas & Miller, 2013; Biddlestone et al., 2023; Green et al., 2022). Inoculation theory assumes that exposing people to weak versions of persuasive messages can instill a skeptical attitude toward future persuasion attempts. All interventions included forewarnings that served to strengthen
participants’ current beliefs and to induce vigilance about future persuasion
attempts. They also offered information about the content covered or strategies
used in impending persuasion attempts. All interventions covered persuasion
techniques, but differed in terms of approach.
We observed a few differences among the interventions. Biddlestone et al. (2023) and Banas and Miller (2013) examined interventions with a passive approach that did not require participants to actively produce counterarguments. The latter study compared two types of interventions: one focused on facts, the other on logical arguments in persuasion attempts. Green et al. (2022) compared active and passive approaches.
Besides a regular inoculation intervention, Banas and Miller (2013) studied a meta-
inoculation intervention. It included warnings and explanations of how inoculation
works while encouraging critical thinking.
Overall, inoculation interventions yielded mixed results. Biddlestone et al.
(2023) found that a passive inoculation reduced participants’ reliance on the
logical fallacy targeted by the intervention, which mediated a reduction in
conspiracy beliefs. Banas and Miller (2013) found that both types of passive
inoculation interventions reduced misinformation susceptibility. However,
Green etal. (2022) reported opposite results. They found that an active inocu-
lation but not a passive one reduced participants’ trust in a misinformative
article.
Finally, the meta-inoculation intervention inhibited the effectiveness of both
the fact-focused and the logic-focused inoculations (Banas & Miller, 2013).
Importantly, while meta-inoculation reduced the effects of the inoculations, it
did not eliminate them. Even meta-inoculated participants were still significantly
more resistant than control participants. This underscores the power of fact- and
logic-focused inoculation strategies.
To explain the positive findings, the authors invoke inoculation theory, detailed
above. To explain the conflicting results for passive inoculations, Green et al. (2022)
suggest that the content of their intervention and the persuasion attempt did not
match. Inoculation may be most effective when the intervention covers the same
content as the misinformation.
Games Three of the 23 articles described game-based information-literacy
trainings: Trustme! (Yang et al., 2021) and Bad News (Iyengar et al., 2023;
Modirrousta-Galian et al., 2023). Trustme! incorporated a feature-focused
training, asking players to evaluate examples of information while provid-
ing elaborative feedback. The inoculation game Bad News let players embody
a deceitful media mogul while coaching them on misinformation techniques.
Both included practice with feedback, although feedback differed in level of
elaboration.
Modirrousta-Galian et al. (2023) additionally described two inductive-learning
interventions that asked participants to judge the veracity of news headlines, one
version with gamification, one without. Both interventions provided correctness
feedback without explanations, reasoning that participants might induce how to
detect misinformation.
Results for game interventions were mixed. Yang et al. (2021) found that Trustme! players outperformed a no-intervention control at lie detection. Iyengar et al. (2023) reported that Bad News players showed less trust in misinformation. In contrast, Modirrousta-Galian et al. (2023) reported marginal improvements in misinformation discrimination for the inductive-learning interventions relative to Bad News players and controls. Indeed, the Bad News game did not improve participants' ability to detect real news. Instead, players became more skeptical of news altogether, regardless of whether it was true.
One plausible explanation for the success of Trustme! lies in the high degree of alignment between game and study design. Indeed, Trustme! was designed specifically for the Yang et al. (2021) study. Furthermore, to explain why the Bad News game made participants generally more skeptical, Modirrousta-Galian et al. (2023)
highlight that it presents examples of only fake news, not of both fake and real news.
Hence, it fails to support differentiation.
Discussion
Our review suggests that information-literacy interventions have the potential to
help participants detect misinformation. No other line of research in our review
addressed nuanced definitions of misinformation and targeted content, source, structural, and network features to the same extent. To this end, the interventions often explicitly drew participants' attention to specific features, for instance through the use of infographics or short guides. Based on our review, we cannot conclude that the success of these interventions stems from this nuanced coverage. However, it demonstrates that successful interventions can attempt to cover the complexity of realistic misinformation scenarios. Furthermore, the reviewed
articles stress the importance of addressing prior knowledge and attitudes in the
interventions. Nevertheless, some articles raise the concern that information-liter-
acy interventions risk making participants skeptical of all (not just fake) news if
they do not consider both.
Fraud Trainings
Six of the 107 articles described fraud trainings. They target misinformation sce-
narios where people receive messages asking them to take actions that have negative
consequences (e.g., disclosing sensitive information). Research on fraud trainings
acknowledges cognitive limitations of human information processing and people's overconfidence in their ability to judge information (Sarno et al., 2022). This results in tendencies to overlook security warnings (Moreno-Fernández et al., 2017) and to engage in risky behaviors (Daengsi et al., 2021).
To evaluate fraud trainings, the reviewed studies used assessments that presented fraudulent messages to which participants had to respond. This study setup implies a malicious actor who lies. Hence, all articles used a dichotomous definition of misinformation (Fig. 3). Furthermore, most fraud trainings assumed that fraud is detectable based on specific features (Fig. 3). In fact, three articles
focused on content features, three on structural features, and one on features of
sources. Only one did not target specific features.
The reviewed studies differed in their consideration of prior knowledge. Some
assessed participants’ prior knowledge/beliefs about the content of the message
and some considered prior misinformation-detection competencies (see Appendix Table A1). However, two did not consider any prior knowledge (Sarno et al., 2022; Scheibe et al., 2014), and none considered both types, which is in contrast to the idealized model (see Figs. 1B and 7).
In further contrast, though one study included both immediate and delayed
posttests, three included only an immediate and two only a delayed posttest (see Appendix Table A1).
Narrative Review
Four of six articles described phishing trainings. They differed in delivery
method. One type of phishing training provided written information about con-
tent features of phishing attempts (e.g., requiring immediate action, collecting
personal information; Sarno et al., 2022). These interventions also included
practice with feedback. In addition to feedback on phishing-attempt detection, Weaver et al. (2021) informed participants about cues indicating an email's legitimacy. Other trainings used inductive approaches where participants distinguished genuine from fake websites based on visual cues (Moreno-Fernández et al., 2017) or legitimate from phishing emails (Sarno et al., 2022).

Fig. 7 Structure of fraud training studies
Two articles targeted other types of fraud. One issued warning phone calls that gave advice on how to respond to phone scams (Scheibe et al., 2014), while the other described brief online warnings about investment-fraud techniques (Burke et al., 2022).
All interventions yielded positive effects. Sarno et al. (2022) and Weaver et al. (2021) found improved detection performance compared to controls. Daengsi et al. (2021) reported that employees were less likely to open phishing emails after the training. Moreno-Fernández et al. (2017) found similar pretest-to-posttest gains. Furthermore, progressing from easy to hard examples yielded higher learning gains than using only hard examples. Likewise, relative to controls, both interventions with warnings led to improved detection of phishing (Scheibe et al., 2014) or investment fraud (Burke et al., 2022). Participants who received a reminder 3 months later showed persistent gains 6 months later (Burke et al., 2022). Importantly, the training improved fraud detection while maintaining overall willingness to invest.
One explanation for the success of these interventions is the integration of
trainings into known interfaces, such as an email client already used for work
(Sarno et al., 2022). The authors also suggest that repeated practice is key for
immediate gains (Weaver et al., 2021). Additionally, reminders may be important for reducing long-term decay (Burke et al., 2022).
Discussion
Overall, fraud trainings yielded positive effects. Furthermore, this line of research
touched on all aspects of the idealized model. However, each individual study
diverged from the idealized model in terms of included components (e.g., prior
knowledge) and sequencing (e.g., simultaneous rather than sequential presentation
of false information and the posttest). These divergences seem appropriate given the
scope and context of this line of research but raise important questions about the
role of fidelity to the idealized model.
Other Educational Interventions
Eight of the 107 articles did not fit the above categories. All defined misinfor-
mation as false information and none targeted specific misinformation features
(Fig. 3). While most included only an immediate posttest, three included a
delayed but no immediate posttest (Murphy et al., 2020; Sellabona et al., 201311; Topp-Manriquez et al., 2016).
11 This article focused on children aged 41–47 months
Two articles examined whether warnings or previous debriefings about deception
studies affected later susceptibility. Allen (1983) found that warning participants
that they may be deceived did not affect their chance of guessing the study’s real
purpose. In contrast, Murphy et al. (2020) found a protective effect of debriefing: they recruited participants who had completed, and been debriefed about, a previous misinformation study. Compared to new participants, these repeat participants were slightly less susceptible to misinformation they received in a new study. The authors reason that repeat participants were more
suspicious and hence more likely to detect misinformation.
Two articles examined perceptual training on deceptive actions in handball
(Alsharji & Wade, 2016) or badminton (Ryu et al., 2018). While Alsharji and Wade (2016) used signaling to highlight diagnostic visual cues, Ryu et al. (2018) used video
editing techniques that highlighted cues relevant to movement. Both interventions
proved more effective than controls. The authors suggest that focusing participants on
informative visual cues helped them disengage from uninformative visual ones.
Two articles focused on detection of manipulated pictures of faces. Robertson et al. (2018) informed participants about visual cues that indicate manipulation. Half of them additionally practiced manipulation detection with feedback. The practice group showed higher performance on a posttest. The authors reason that feedback is essential to learning the cues. By contrast, Topp-Manriquez et al. (2016) examined composite assembly. Participants selected single facial features to create pictures of faces. Doing so significantly reduced their ability to identify faces from a photo lineup relative to a control, especially if they had reviewed their composites in the interim. The authors suggest that the composites intruded on participants' memory of the original, hence reducing their performance on a recognition test.
One study examined a theory-of-mind intervention for children (Sellabona et al., 2013). An experimenter either described or labelled objects so that children could distinguish their apparent and real identities (e.g., a candle that looks like a tomato). Labelling yielded the highest benefit when children later performed other theory-of-mind tasks. The authors propose that labelling helped children to verbally distinguish multiple features of the object or event.
Finally, Rapp etal. (2014) asked participants to detect inaccuracies in fictional
stories. Participants either had to highlight inaccuracies or edit them. Both interven-
tions reduced their tendency to believe these inaccuracies relative to controls. The
authors conclude that active reflection on incoming information may protect against
the influence of inaccuracies.
Discussion
The small number of interventions in this category implies that general conclusions about their effectiveness are unwarranted. Yet, they may inspire future research. Overall, studies in this category point towards the need for interventions that match the way in which misinformation affects people. If it does so due to a lack of reflection (e.g., fictional stories), then interventions that encourage reflection might be helpful. If mis-
information operates on a perceptual level (e.g., manipulated facial pictures), then
an intervention that targets perceptual processes can be effective whereas one that
requires analysis of facial components is not.
General Discussion
Our systematic literature review sought to identify educational interventions aiming
to reduce the effects of misinformation (RQ1) and to examine empirical evidence
about their effectiveness (RQ2). Here, we discuss our findings (1) across the lines of
research we uncovered and (2) in comparison to our idealized model of educational
interventions.
Findings Across Lines ofResearch
Our review highlights characteristics of educational interventions that were success-
ful across different lines of research. First, the intervention should help participants
notice discrepancies between the original and new information. This can be achieved
by pre-warnings (see the “Moderators of the Misinformation Effect” section; e.g., Salovich & Rapp, 2022), explicit attention-focusing interventions (see the “Moderators of the Misinformation Effect” section; e.g., Putnam et al., 2017), or inoculation (see the “Information-Literacy Trainings” section; e.g., Biddlestone et al., 2023). An intervention might also cover strategies for discrepancy detection through patterns of cues in utterances (see the “Lie Detection Trainings” section; e.g., Masip et al., 2018) or messages (see the “Information-Literacy Trainings” section; e.g., Murrock et al., 2018; and the “Fraud Trainings” section; e.g., Sarno et al., 2022).
Second, the process participants use to notice discrepancies should match the process by which they process misinformation (see the “Other Educational Interven-
tions” section). For instance, if the target deception scenario involves perception
alone (e.g., deceptive actions in sports, facial image manipulation), then interven-
tions are ineffective if they require analysis (e.g., Topp-Manriquez et al., 2016). By contrast, interventions focused on practicing cue-based perception with feedback seem more promising in this context (e.g., Robertson et al., 2018). This finding has potential implications for complex interventions that target both perception-based recognition of cues and the subsequent scrutiny of these cues with an analytical mindset. In fact, several successful comprehensive curricula incorporated both practice of cue perception and content analysis (see the “Information-Literacy Trainings” section; e.g., Apuke & Gever, 2023).
Our comparison across lines of research also highlights avenues for future
research. First, different lines of research focus on different misinformation sce-
narios, reflected by different definitions of misinformation. Research on the misin-
formation effect, lie detection, and fraud trainings has mostly focused on dichoto-
mous definitions—in part due to controlled study designs. By contrast, studies on
information-literacy trainings have considered nuanced definitions, but often with
less stringent study designs. Thus, interventions that target misinformation scenar-
ios where truth and falsehood are well-defined have been evaluated in more depth.
Future research should test whether findings from these studies apply to gray areas
of misinformation where uncertainty about information validity plays a bigger role.
Second, different lines of research focus on different misinformation features.
Though prior research shows that content features, features of sources, structural
features, and network features can indicate misinformation, extant interventions predominantly target content and structural features. While some addressed features of sources, only two considered network features. Information-literacy trainings targeted the largest variety of features, whereas most interventions related to the misinformation effect targeted none. Future research should examine whether interventions that were effective for some features might also enhance detection of misinformation based on other types of features.
Third, different lines of research focus on different aspects that might influence a
person’s susceptibility to misinformation. While some interventions focus on mem-
ory for the original information, others focus on attention during (mis-)informa-
tion processing, yet others on strategic knowledge or even perceptual skills. Future
research should investigate whether interventions might be more effective if they
focus on multiple aspects at the same time. Likewise, components of interventions
that proved effective for one aspect may also prove effective for others. For instance,
feedback—proven effective for strategic knowledge about misinformation detec-
tion—may also prove effective for memory interventions.
Comparison toIdealized Model
We used an idealized model of research on educational interventions aiming to
reduce the impact of misinformation (Fig. 1B) as a frame for our review. Comparing the reviewed lines of research to our idealized model highlights further avenues for
future research.
First, different lines of research emphasize different types of prior knowledge.
While research on the misinformation effect manipulates prior content knowledge and views prior misinformation competencies as irrelevant, other lines of research consider various aspects of prior knowledge. Research on information-literacy trainings acknowledges both prior content knowledge/beliefs and prior misinformation competencies. Yet only two studies assessed both. By contrast, research on lie detection and fraud trainings views prior content knowledge as irrelevant but often, though not always, assesses prior misinformation competencies.
Yet, from an educational psychology perspective, both prior knowledge/beliefs
about the topic of the misinformative message and prior misinformation-detection
competencies are relevant in realistic misinformation scenarios. It is important
not only to account for both types of prior knowledge but also to examine how
they might interact with an intervention. Prior research suggests that prior knowl-
edge about the topic targeted by misinformation offers some protection from mis-
information (Pennycook etal., 2020; Scherer etal., 2021; Umanath, 2016). Yet,
expertise in a field can also influence a person’s identity, which in turn influences
how they process information (Oyserman & Dawson, 2021). Hence, experts and
novices may need different types of instruction for how to spot misinformation.
Second, the tests used to assess learning outcomes differed dramatically
between lines of research. Some used assessments that were very similar to the
examples encountered during training (e.g., fraud trainings, some information-
literacy trainings). This seemed to be associated with positive findings. These
assessments contrast with the wider scope of other studies (e.g., lie detection
trainings, some information-literacy trainings), which often tested the ability to
detect misinformation in novel situations (e.g., interviewing a new person, con-
sidering a new media message). Results were often more mixed when studies
used less tightly coupled measures. Studies on the misinformation effect differed yet again because they centered on memory. If an intervention's goal is
to protect acquired knowledge from misinformation, it is hard to uncouple the
assessments from memory of this knowledge.
From an educational psychology perspective, these observations raise ques-
tions about transfer. We defined an educational intervention as one that instills
generalizable competencies that reduce participants’ susceptibility to future mis-
information. Hence, ideally, a study would demonstrate that participants can
apply their newly acquired competencies to pieces of misinformation that differ
somewhat from examples covered during training. Yet, while several articles met
our inclusion criteria by attempting to achieve this goal, few studies were able
to demonstrate success in this regard. This does not imply that the interventions
cannot achieve this goal, but rather that many of the reviewed studies were not
set up to test its attainment. Furthermore, in addition to assessing misinformation
detection, studies should assess participants’ belief in misinformation. In fact,
people may continue to be influenced by information even if they know it is false
(e.g., Lewandowsky etal., 2012).
A related concern regards the longevity of any identified effects. For an educa-
tional intervention to be viewed as successful, it should show long-term effects.
Hence, educational psychology studies often incorporate delayed posttests. Yet,
few of the reviewed articles used a delayed posttest, although some showed that lasting benefits are possible (Burke et al., 2022; Murrock et al., 2018). Long-term assessments are also important because interventions that yield short-term benefits are not always the most successful in the long term (e.g., Bjork & Bjork, 2011; Roediger III & Karpicke, 2006).
In sum, our idealized model highlights gaps in the reviewed studies that future
work can address by considering broader aspects of prior knowledge and learning
outcomes (Fig.8).
Limitations
Our review gives a broad overview across literatures that document educational
interventions to address misinformation. As such, it does not provide an exhaus-
tive review of each line of research. It is possible that a given line of research uses
different keywords that our literature search did not uncover. Even so, this review
may alert readers to literatures they may otherwise not have considered. Further-
more, our goal was to review interventions that target cognitive processes. Other
factors (e.g., situational, personal) affect information processing. Future research
could explore how non-cognitive factors can be incorporated into educational
interventions. Finally, an in-depth consideration of various types of misinforma-
tion features and intervention characteristics was out of scope for this review. For
instance, content features, as well as structural features, could be further distin-
guished into sub-features. Future research should examine various aspects of mis-
information features in more depth.
Conclusions
This systematic review offers a broad overview of research on educational inter-
ventions aimed at mitigating negative effects of future misinformation encounters.
It identifies various types of interventions with promising effects. Furthermore, it
highlights gaps between educational psychology research and extant research on
misinformation as well as among different lines of research on misinformation. In
doing so, this review aims to pave the way for future interventions that can help
people become more resilient to misinformation.
Fig. 8 Model of educational interventions seeking to reduce misinformation susceptibility, highlighting
focal areas for future research in red
Appendix
Table 3 Articles that partially deviated from the models described above
Component Review articles
Moderators of the misinformation effect
Presentation of initial information (Exceptions) • Fazio et al. (2013) assessed prior knowledge but did not present initial information
• Fazio et al. (2015) assumed that participants had prior knowledge (no test)
• Salovich and Rapp (2013) assumed that participants had prior knowledge (no test)
Delayed posttest without immediate posttest;
delays after the intervention were counted,
regardless of whether the delay occurred
before or after misinformation presentation;
delayed posttest was defined as > 24 h since
intervention
• Brown etal. (1999) gave either an immediate posttest
or a 1-week delayed posttest
• Chan etal. (2017) 1-week delay until final test
• Chan etal. (2022) Exp 2 and 3 included 48 h delay
until final test
• Chan and Langley (2011) included a 1-week delay
until final test
• Chan and LaPaglia (2011) Exp 3 included a 1-week
delay until final test
• Holliday (2003a, 2003b) included a 1 day-delay until
final test
• Marche (1999) Exp 1 included a 4- or 8-week delay
of final test; Exp 2 included a 1-day delay of final test;
Exp 3 included a 1-week delay of final test
• Marche and Howe (1995) included 1week delay until
final test
• Marche etal. (2002) included a 3-week delay until
final test
• McPhee etal. (2014) included a 1-week delay until
final test
• Memon etal. (2010) included a 1 week delay until
final test
• Mullet etal. (2014) gave either an immediate or to a
1-week delayed posttest
• Pansky and Tennenboim (2011) included a 2-day
delay until final test
• Pereverseff etal. (2020) included a 2-day delay until
final test
• Qi etal. (2018) gave one group an immediate posttest,
another a 1-week delayed test
• Wang etal. (2014) included a 1-week delay until final test
Lie detection trainings
Prior misinformation-detection competencies • Crews et al. (2007) assessed deception-detection skills
• Ding et al. (2022) assessed epistemic vigilance
• Dunbar et al. (2018) assessed deception-detection skills
• Galin and Thorn (1993) assessed facial expression detection skills
• Geiselman et al. (2013) assessed deception-cue knowledge
• Jordan et al. (2019) assessed facial expression detection skills
• Kassin and Fong (1999) assessed lie detection knowledge
• Matsumoto et al. (2014) assessed lie detection skills
• Miller et al. (2019) assessed lie detection skills
• Porter et al. (2000) assessed lie detection skills for part of sample
• Porter et al. (2010) assessed facial expression detection skills
• Ranick et al. (2013) assessed deception-detection skills
• Shaw et al. (2011) assessed deception-detection skills
• Stanley and Webster (2019) assessed facial expression detection
• Vrij et al. (2015) assessed interview and veracity-judgement skills
• Vrij et al. (2016) assessed interview and veracity-judgement skills
Immediate and delayed posttests • Ranick et al. (2013) included an immediate posttest and a 1-month delayed posttest
Information-literacy trainings
Prior misinformation-detection competencies • Al Zou'bi (2022) assessed fake news detection
• Apuke et al. (2023) assessed fake news detection
• Apuke and Gever (2023) assessed fake news detection
• Brodsky et al. (2021) assessed lateral reading skills
• Iyengar et al. (2023) assessed news post reliability ratings
• McGrew and Chinoy (2022) assessed source evaluation knowledge
• Merkt and Sochatzy (2015) assessed propaganda knowledge
• Modirrousta-Galian et al. (2023) assessed veracity judgments
• Motz et al. (2022) assessed fallacy identification in fiction
• Murrock et al. (2018) assessed retrospective ratings of misinformation detection
• Prichard and Rucynski Jr. (2019) assessed satirical news detection
• Scheibenzuber et al. (2021) assessed credibility ratings
• Tseng et al. (2021) assessed credibility judgments of claims
Prior content knowledge/beliefs • Banas and Miller (2013) assessed beliefs about the 9/11 Truth conspiracy theory
• Biddlestone et al. (2023) assessed general beliefs about conspiracies
• Brodsky et al. (2021) assessed beliefs about the platform
• Eng et al. (2021) assessed knowledge and beliefs about greenwashing
• Geers et al. (2020) assessed political efficacy and media literacy
• Green et al. (2022) assessed climate change beliefs and cultural cognition
• Osborn (1939) assessed knowledge and attitudes about propaganda content
• Tseng et al. (2021) assessed media and IT literacy, scientific reasoning, and attitudes
• Yang et al. (2021) assessed intellectual civic skills
Immediate and delayed posttests • Al Zou'bi (2022) assessed fake news detection immediately and 2 days after training
• Osborn (1939) conducted an additional test of attitude three weeks after training
Delayed final test with no immediate test • Murrock et al. (2018) assessed ratings of misinformation-detection skills 1.5 years after training with no immediate test following training
Fraud trainings
Prior content knowledge/beliefs • Burke et al. (2022) assessed financial literacy
• Moreno-Fernández et al. (2017) assessed Internet and online banking usage
Prior misinformation-detection competencies • Daengsi et al. (2021) assessed reactions to phishing attempts
• Moreno-Fernández et al. (2017) assessed knowledge about phishing
• Weaver et al. (2021) assessed phishing-detection skills
Immediate and delayed posttests • Burke et al. (2022) assessed willingness to invest and beliefs about losing money both immediately after training and with a 6-month delay
Delayed posttest without immediate posttest • Daengsi et al. (2021) assessed reactions to phishing attempts 4 months after the initial phishing attack
• Scheibe et al. (2014) assessed scam detection abilities at a 2- or 4-week delay
Acknowledgements Special thanks to Charlotte Müller and Ralph Schumacher for their comments and feedback.

Funding Open access funding provided by Swiss Federal Institute of Technology Zurich. This work was partially funded by the U.S. National Science Foundation (2202457).

Data Availability Citations and coding supporting this review are available via OSF (osf.io/sj6pg/).
Declarations
Conflict of Interest The authors declare no competing interests.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License,
which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long
as you give appropriate credit to the original author(s) and the source, provide a link to the Creative
Commons licence, and indicate if changes were made. The images or other third party material in this
article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line
to the material. If material is not included in the article’s Creative Commons licence and your intended
use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permis-
sion directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
References
* Indicates reviewed articles
Adams, Z., Osman, M., Bechlivanidis, C., & Meder, B. (2023). (Why) is misinformation a problem? Per-
spectives on Psychological Science, 18(6), 1436–1463.
Afroz, S., Brennan, M., & Greenstadt, R. (2012). Detecting hoaxes, frauds, and deception in writing style
online. IEEE Symposium on Security and Privacy, 2012, 461–475.
* Al Zou’bi, R. M. (2022). The impact of media and information literacy on students’ acquisition of the
skills needed to detect fake news.
* Alberts, H. J., Otgaar, H., & Kalagi, J. (2017). Minding the source: The impact of mindfulness on
source monitoring. Legal and Criminological Psychology, 22(2), 302-313.
Allchin, D. (2023). Ten competencies for the science misinformation crisis. Science Education, 107(2),
261–274.
* Allen, D. F. (1983). Follow-up analysis of use of forewarning and deception in psychological experi-
ments. Psychological Reports, 52(3), 899-906.
* Alsharji, K. E., & Wade, M. G. (2016). Perceptual training effects on anticipation of direct and decep-
tive 7-m throws in handball. Journal of Sports Sciences, 34(2), 155-162.
* Apuke, O. D., & Gever, C. V. (2023). A quasi experiment on how the field of librarianship can help in
combating fake news. The journal of academic librarianship, 49(1), 102616.
* Apuke, O. D., Omar, B., & Asude Tunca, E. (2023). Literacy concepts as an intervention strategy for
improving fake news knowledge, detection skills, and curtailing the tendency to share fake news in
Nigeria. Child & Youth Services, 44(1), 88-103.
* Bailey, N. A., Olaguez, A. P., Klemfuss, J. Z., & Loftus, E. F. (2021). Tactics for increasing resistance
to misinformation. Applied Cognitive Psychology, 35, 863-872.
* Banas, J. A., & Miller, G. (2013). Inducing resistance to conspiracy theory propaganda: Testing inocu-
lation and metainoculation strategies. Human Communication Research, 39(2), 184-207.
Baqir, A., Galeazzi, A., & Zollo, F. (2024). News and misinformation consumption: A temporal compari-
son across European countries. PLoS ONE, 19(5), e0302473.
Berkowitz, D., & Schwartz, D. A. (2016). Miley, CNN and The Onion: When fake news becomes realer
than real. Journalism Practice, 10(1), 1–17.
Biddlestone, M., Roozenbeek, J., & van der Linden, S. (2023). Once (but not twice) upon a time: Narra-
tive inoculation against conjunction errors indirectly reduces conspiracy beliefs and improves truth
discernment. Applied Cognitive Psychology, 37(2), 304–318.
Bjork, E. L., & Bjork, R. A. (2011). Making things hard on yourself, but in a good way: Creating desirable
difficulties to enhance learning. In M. A. Gernsbacher & J. Pomerantz (Eds.), Psychology and the real
world: Essays illustrating fundamental contributions to society (2 ed., pp. 59–68). Worth Publishing.
* Blank, H., Panday, A., Edwards, R., Skopicz-Radkiewicz, E., Gibson, V., & Reddy, V. (2022). Double
misinformation: Effects on eyewitness remembering. Journal of Applied Research in Memory and
Cognition, 11(1), 97–105.
* Bogaard, G., & Meijer, E. H. (2022). No evidence that instructions to ignore nonverbal cues improve
deception detection accuracy. Applied Cognitive Psychology, 36(3), 636-647.
* Brodsky, J. E., Brooks, P. J., Scimeca, D., Galati, P., Todorova, R., & Caulfield, M. (2021). Asso-
ciations between online instruction in lateral reading strategies and fact-checking COVID-19 news
among college students. AERA Open, 7, 23328584211038937.
* Brown, A. S., Schilling, H. E., & Hockensmith, M. L. (1999). The negative suggestion effect:
Pondering incorrect alternatives may be hazardous to your knowledge. Journal of Educational
Psychology, 91(4), 756-764.
* Bulevich, J. B., Gordon, L. T., Hughes, G. I., & Thomas, A. K. (2022). Are witnesses able to avoid
highly accessible misinformation? Examining the efficacy of different warnings for high and low
accessibility postevent misinformation. Memory & Cognition, 50(1), 45-58.
* Burke, J., Kieffer, C., Mottola, G., & Perez-Arce, F. (2022). Can educational interventions reduce sus-
ceptibility to financial fraud? Journal of Economic Behavior & Organization, 198, 250-266.
Byrt, T., Bishop, J., & Carlin, J. B. (1993). Bias, prevalence and kappa. Journal of Clinical Epidemiol-
ogy, 46(5), 423–429.
* Calvillo, D. P., & Parong, J. A. (2016). The misinformation effect is unrelated to the DRM effect with
and without a DRM warning. Memory, 24(3), 324-333.
Cantarella, M., Fraccaroli, N., & Volpe, R. (2023). Does fake news affect voting behaviour? Research
Policy, 52(1), 104628.
Chan, M.-P.S., & Albarracín, D. (2023). A meta-analysis of correction effects in science-relevant misin-
formation. Nature Human Behaviour, 7(9), 1514–1525.
Chan, J. C., & Langley, M. M. (2011). Paradoxical effects of testing: Retrieval enhances both accurate
recall and suggestibility in eyewitnesses. Journal of Experimental Psychology, 37(1), 248–255.
* Chan, J. C., & LaPaglia, J. A. (2011). The dark side of testing memory: Repeated retrieval can enhance
eyewitness suggestibility. Journal of Experimental Psychology, 17(4), 418-432.
* Chan, L., & Okamoto, Y. (2006). Resisting suggestive questions: Can theory of mind help? Journal of
Research in Childhood Education, 20(3), 159-174.
* Chan, J. C., Wilford, M. M., & Hughes, K. L. (2012). Retrieval can increase or decrease suggestibility
depending on how memory is tested: The importance of source complexity. Journal of Memory
and Language, 67(1), 78-85.
* Chan, J. C., Manley, K. D., & Lang, K. (2017). Retrieval-enhanced suggestibility: A retrospective and a
new investigation. Journal of Applied Research in Memory and Cognition, 6(3), 213-229.
* Chan, J. C., O’Donnell, R., & Manley, K. D. (2022). Warning weakens retrieval-enhanced suggestibil-
ity only when it is given shortly after misinformation: The critical importance of timing. Journal of
Experimental Psychology, 28(4), 694–716.
* Colwell, L. H., Colwell, K., Hiscock-Anisman, C. K., Hartwig, M., Cole, L., Werdin, K., & Youschak,
K. (2012). Teaching professionals to detect deception: The efficacy of a brief training workshop.
Journal of Forensic Psychology Practice, 12(1), 68-80.
* Crews, J., Cao, J., Lin, M., Nunamaker, J., & Burgoon, J. (2007). A comparison of instructor-led vs.
web-based training for detecting deception. Journal of STEM Education, 8(1), 31–40.
* Culbertson, S. S., Weyhrauch, W. S., & Waples, C. J. (2016). Behavioral cues as indicators of deception in
structured employment interviews. International Journal of Selection and Assessment, 24(2), 119-131.
* Daengsi, T., Pornpongtechavanich, P., & Wuttidittachotti, P. (2021). Cybersecurity awareness enhancement:
A study of the effects of age and gender of Thai employees associated with phishing attacks. Education
and Information Technologies, 27, 4729–4751.
Del Vicario, M., Bessi, A., Zollo, F., Petroni, F., Scala, A., Caldarelli, G., Stanley, H. E., & Quattrocioc-
chi, W. (2016). The spreading of misinformation online. Proceedings of the National Academy of
Sciences, 113(3), 554–559.
DePaulo, B. M., Lindsay, J. J., Malone, B. E., Muhlenbruck, L., Charlton, K., & Cooper, H. (2003). Cues
to deception. Psychological Bulletin, 129(1), 74.
* DePaulo, B. M., Lassiter, G. D., & Stone, J. L. (1982). Attentional determinants of success at detecting
deception and truth. Personality and social psychology bulletin, 8(2), 273-279.
* Dickhäuser, O., Reinhard, M.-A., & Marksteiner, T. (2012). Accurately detecting students’ lies regard-
ing relational aggression by correctional instructions. Educational Psychology, 32(2), 257-271.
* Ding, X. P., Lim, H. Y., & Heyman, G. D. (2022). Training young children in strategic deception pro-
motes epistemic vigilance. Developmental Psychology, 58(6), 1128.
* Domgaard, S., & Park, M. (2021). Combating misinformation: The effects of infographics in verifying
false vaccine news. Health Education Journal, 80(8), 974-986.
Driskell, J. E. (2012). Effectiveness of deception detection training: A meta-analysis. Psychology,
Crime & Law, 18(8), 713–731.
* Dunbar, N. E., Miller, C. H., Lee, Y.-H., Jensen, M. L., Anderson, C., Adams, A. S., Elizondo, J.,
Thompson, W., Massey, Z., & Nicholls, S. B. (2018). Reliable deception cues training in an inter-
active video game. Computers in Human Behavior, 85, 74-85.
Ekman, P., & Friesen, W. V. (1969). Nonverbal leakage and clues to deception. Psychiatry, 32(1), 88–106.
* Eng, N., DiRusso, C., Troy, C. L. C., Freeman, J. R., Liao, M. Q., & Sun, Y. (2021). ‘I had no idea that
greenwashing was even a thing’: Identifying the cognitive mechanisms of exemplars in greenwash-
ing literacy interventions. Environmental Education Research, 27(11), 1599-1617.
* Fazio, L. K., Barber, S. J., Rajaram, S., Ornstein, P. A., & Marsh, E. J. (2013). Creating illusions of knowl-
edge: Learning errors that contradict prior knowledge. Journal of Experimental Psychology, 142(1), 1-5.
* Fazio, L. K., Dolan, P. O., & Marsh, E. J. (2015). Learning misinformation from fictional sources: Under-
standing the contributions of transportation and item-specific processing. Memory, 23(2), 167-177.
* Fiedler, K., & Walka, I. (1993). Training lie detectors to use nonverbal cues instead of global heuristics.
Human Communication Research, 20(2), 199-223.
* Filho, M. C., Rafael, D. N., Barros, L. S. G., & Mesquita, E. (2023). Mind the fake reviews! Protecting consum-
ers from deception through persuasion knowledge acquisition. Journal of Business Research, 156, 113538.
Fogg, B. J., Marshall, J., Laraki, O., Osipovich, A., Varma, C., Fang, N., Paul, J., Rangnekar, A., Shon,
J., & Swani, P. (2001). What makes web sites credible? A report on a large quantitative study. Pro-
ceedings of the SIGCHI conference on Human factors in computing systems,
* Galin, K. E., & Thorn, B. E. (1993). Unmasking pain: Detection of deception in facial expressions.
Journal of Social and Clinical Psychology, 12(2), 182-197.
Garrett, R. K. (2011). Troubling consequences of online political rumoring. Human Communication
Research., 37(2), 255–274.
* Geers, S., Boukes, M., & Moeller, J. (2020). Bridging the gap? The impact of a media literacy educa-
tional intervention on news media literacy, political knowledge, political efficacy among lower-
educated youth. Journal of Media Literacy Education, 12(2), 41-53.
* Geiselman, E. R., Elmgren, S., Green, C., & Rystad, I. (2013). Training novices to detect deception in
oral narratives and exchanges. American Journal of Forensic Psychiatry, 31(1), 15.
* Gordon, L. T., & Thomas, A. K. (2017). The forward effects of testing on eyewitness memory: The ten-
sion between suggestibility and learning. Journal of Memory and Language, 95, 190-199.
* Green, M., McShane, C. J., & Swinbourne, A. (2022). Active versus passive: evaluating the effective-
ness of inoculation techniques in relation to misinformation about climate change. Australian Jour-
nal of Psychology, 74(1), 2113340.
* Greene, E., Flynn, M. S., & Loftus, E. F. (1982). Inducing resistance to misleading information. Jour-
nal of Verbal Learning and Verbal Behavior, 21(2), 207-219.
Greene, C. M., & Murphy, G. (2021). Quantifying the effects of fake news on behavior: Evidence from a
study of COVID-19 misinformation. Journal of Experimental Psychology, 27(4), 773–784.
Greifeneder, R., Jaffe, M., Newman, E., & Schwarz, N. (2021). What is new and true about fake news? In
R. Greifeneder, M. Jaffe, E. Newman, & N. Schwarz (Eds.), The psychology of fake news: Accept-
ing, sharing, and correcting misinformation (pp. 1–8). Routledge.
* Holliday, R. E. (2003a). The effect of a prior cognitive interview on children’s acceptance of misinfor-
mation. Applied Cognitive Psychology: The Official Journal of the Society for Applied Research in
Memory and Cognition, 17(4), 443-457.
* Holliday, R. E. (2003b). Reducing misinformation effects in children with cognitive interviews: Disso-
ciating recollection and familiarity. Child development, 74(3), 728-751.
* Huff, M. J., & Umanath, S. (2018). Evaluating suggestibility to additive and contradictory misinforma-
tion following explicit error detection in younger and older adults. Journal of experimental psy-
chology: applied, 24(2), 180.
* Iyengar, A., Gupta, P., & Priya, N. (2023). Inoculation against conspiracy theories: A consumer side
approach to India’s fake news problem. Applied Cognitive Psychology, 37(2), 290-303.
* Jordan, S., Brimbal, L., Wallace, D. B., Kassin, S. M., Hartwig, M., & Street, C. N. H. (2019). A test
of the microexpressions training tool: Does it improve lie detection? Journal of Investigative Psy-
chology and Offender Profiling, 16(3), 222-235.
* Kassin, S. M., & Fong, C. T. (1999). “I’m innocent!”: Effects of training on judgments of truth and
deception in the interrogation room. Law and Human Behavior, 23, 499-516.
Kendeou, P., & Johnson, V. (2024). The nature of misinformation in education. Current Opinion in Psy-
chology, 55, 101734.
* Köhnken, G. (1987). Training police officers to detect deceptive eyewitness statements: Does it
work?.Social Behaviour, 2(1), 1–17.
LaPaglia, J. A., & Chan, J. C. K. (2013). Testing increases suggestibility for narrative-based misin-
formation but reduces suggestibility for question-based misinformation. Behavioral Sciences
& the Law, 31(5), 593–606.
* LaPaglia, J. A., Wilford, M. M., Rivard, J. R., Chan, J. C. K., & Fisher, R. P. (2014). Misleading sug-
gestions can alter later memory reports even following a cognitive interview. Applied Cognitive
Psychology, 28(1), 1-9.
Lazer, D. M., Baum, M. A., Benkler, Y., Berinsky, A. J., Greenhill, K. M., Menczer, F., ..., Sunstein,
C. R., Thorson, E. A., Watts, D. J., & Zittrain, J. L. (2018). The science of fake news. Science,
359(6380), 1094–1096.
* Levine, T. R., Feeley, T. H., McCornack, S. A., Hughes, M., & Harms, C. M. (2005). Testing the effects
of nonverbal behavior training on accuracy in deception detection with the inclusion of a bogus
training control group. Western Journal of Communication, 69(3), 203-217.
Lewandowsky, S., & Van Der Linden, S. (2021). Countering misinformation and fake news through inoc-
ulation and prebunking. European Review of Social Psychology, 32(2), 348–384.
Lewandowsky, S., Ecker, U. K., Seifert, C. M., Schwarz, N., & Cook, J. (2012). Misinformation and
its correction: Continued influence and successful debiasing. Psychological Science in the Public
Interest, 13(3), 106–131.
Lieneck, C., Heinemann, K., Patel, J., Huynh, H., Leafblad, A., Moreno, E., & Wingfield, C. (2022).
Facilitators and barriers of COVID-19 vaccine promotion on social media in the United States: A
systematic review. Healthcare, 10(2), 321.
Loftus, E. F. (2005). Planting misinformation in the human mind: A 30-year investigation of the malle-
ability of memory. Learning & Memory, 12(4), 361–366.
Loveland, M., Ibrahim, R., & Block, G. (2024). The prevalence and characteristics of misinformation on
“TikTok” related to cirrhosis and liver disease: A comparative analysis of accurate and misleading
content. Journal of Investigative Medicine, 72(4), 383–386.
Luke, T. J. (2019). Lessons from Pinocchio: Cues to deception may be highly exaggerated. Perspectives
on Psychological Science, 14(4), 646–671.
* Luke, T. J., Crozier, W. E., & Strange, D. (2017). Memory errors in police interviews: The bait question as
a source of misinformation. Journal of Applied Research in Memory and Cognition, 6(3), 260-273.
* Manley, K. D., & Chan, J. C. K. (2019). Does retrieval enhance suggestibility because it increases per-
ceived credibility of the postevent information? Journal of Applied Research in Memory and Cogni-
tion, 8(3), 355-366.
* Marche, T. A., & Howe, M. L. (1995). Preschoolers report misinformation despite accurate memory.
Developmental Psychology, 31(4), 554.
* Marche, T. A., Jordan, J. J., & Owre, K. P. (2002). Younger adults can be more suggestible than older
adults: The influence of learning differences on misinformation reporting. Canadian Journal on
Aging/La Revue canadienne du vieillissement, 21(1), 85-93.
* Marche, T. A. (1999). Memory strength affects reporting of misinformation. Journal of Experimental
Child Psychology, 73(1), 45-71.
Masip, J., Martínez, C., Blandón-Gitlin, I., Sánchez, N., Herrero, C., & Ibabe, I. (2018). Learning to
detect deception from evasive answers and inconsistencies across repeated interviews: A study
with lay respondents and police officers. Frontiers in Psychology, 8, 2207.
* Matsumoto, D., Hwang, H. C., Skinner, L. G., & Frank, M. G. (2014). Positive effects in detecting lies from
training to recognize behavioral anomalies. Journal of Police and Criminal Psychology, 29(1), 28-35.
Mauk, M., & Grömping, M. (2024). Online disinformation predicts inaccurate beliefs about election fair-
ness among both winners and losers. Comparative Political Studies, 57(6), 965–998.
Mazzeo, V., Rapisarda, A., & Giuffrida, G. (2021). Detection of fake news on COVID-19 on web search
engines. Frontiers in Physics, 9, 685730.
* McGrew, S., & Chinoy, I. (2022). Fighting misinformation in college: Students learn to search and evalu-
ate online information through flexible modules. Information and Learning Sciences, 123(1/2), 45-64.
McKernon, E. (1925). Fake news and the public. Harper’s Magazine. Retrieved from https://harpers.org/archive/1925/10/fake-news-and-the-public/
* McPhee, I., Paterson, H. M., & Kemp, R. I. (2014). The power of the spoken word: Can spoken-
recall enhance eyewitness evidence? Psychiatry, Psychology and Law, 21(4), 551-566.
* Memon, A., Zaragoza, M., Clifford, B. R., & Kidd, L. (2010). Inoculation or antidote? The effects
of cognitive interview timing on false memory for forcibly fabricated events. Law and Human
Behavior, 34(2), 105.
* Merkt, M., & Sochatzy, F. (2015). Becoming aware of cinematic techniques in propaganda: Instruc-
tional support by cueing and training. Learning and Instruction, 39, 55-71.
* Miller, C. H., Dunbar, N. E., Jensen, M. L., Massey, Z. B., Lee, Y.-H., Nicholls, S. B., Anderson,
C., Adams, A. S., Cecena, J. E., & Thompson, W. M. (2019). Training law enforcement officers
to identify reliable deception cues with a serious digital game. International Journal of Game-
Based Learning (IJGBL), 9(3), 1-22.
* Modirrousta-Galian, A., Higham, P. A., & Seabrooke, T. (2023). Effects of inductive learning and
gamification on news veracity discernment. Journal of Experimental Psychology: Applied,
29(3), 599–619.
Molina, M. D., Sundar, S. S., Le, T., & Lee, D. (2021). “Fake news” is not simply false information: A
concept explication and taxonomy of online content. American Behavioral Scientist, 65(2), 180–212.
* Moreno-Fernández, M. M., Blanco, F., Garaizar, P., & Matute, H. (2017). Fishing for phishers.
Improving Internet users’ sensitivity to visual deception cues to prevent electronic fraud. Com-
puters in Human Behavior, 69, 421-436.
* Motz, B. A., Fyfe, E. R., & Guba, T. P. (2022). Learning to call bullsh*t via induction: Categorization training improves critical thinking performance. Journal of Applied Research in Memory and Cognition, 12(3), 310–324.
* Mullet, H. G., Umanath, S., & Marsh, E. J. (2014). Recent study, but not retrieval, of knowledge
protects against learning errors. Memory & Cognition, 42, 1239-1249.
* Murphy, G., Loftus, E. F., Grady, R. H., Levine, L. J., & Greene, C. M. (2020). Fool me twice: How
effective is debriefing in false memory studies? Memory, 28(7), 938-949.
* Murrock, E., Amulya, J., Druckman, M., & Liubyva, T. (2018). Winning the war on state-sponsored
propaganda: Results from an impact study of a Ukrainian news media and information literacy
program. Journal of Media Literacy Education, 10(2), 53-85.
* Neil, G. J., Higham, P. A., & Fox, S. (2021). Can you trust what you hear? Concurrent misinformation affects recall memory and judgments of guilt. Journal of Experimental Psychology: General, 150(9), 1741.
* Osborn, W. W. (1939). An experiment in teaching resistance to propaganda. The Journal of Experi-
mental Education, 8(1), 1-17.
Oyserman, D., & Dawson, A. (2021). Your fake news, our facts: Identity-based motivation shapes
what we believe, share, and accept. In R. Greifeneder, M. Jaffe, E. Newman, & N. Schwarz
(Eds.), The psychology of fake news: Accepting, sharing, and correcting misinformation (pp.
173–195). Routledge.
* Pansky, A., & Tenenboim, E. (2011). Inoculating against eyewitness suggestibility via interpolated
verbatim vs. gist testing. Memory & Cognition, 39(1), 155–170.
Pennycook, G., & Rand, D. G. (2021). The psychology of fake news. Trends in Cognitive Sciences,
25(5), 388–402.
Pennycook, G., McPhetres, J., Zhang, Y., Lu, J. G., & Rand, D. G. (2020). Fighting COVID-19 misin-
formation on social media: Experimental evidence for a scalable accuracy-nudge intervention.
Psychological Science, 31(7), 770–780.
* Pereverseff, R. S., Bodner, G. E., & Huff, M. J. (2020). Protective effects of testing across misinfor-
mation formats in the household scene paradigm. Quarterly Journal of Experimental Psychol-
ogy, 73(3), 425-441.
Porter, S., Woodworth, M., & Birt, A. R. (2000). Truth, lies, and videotape: An investigation of the
ability of federal parole officers to detect deception. Law and Human Behavior, 24, 643–658.
* Porter, S., Juodis, M., ten Brinke, L. M., Klein, R., & Wilson, K. (2010). Evaluation of the effectiveness of a
brief deception detection training program. Journal of Forensic Psychiatry & Psychology, 21(1), 66-76.
Posetti, J., & Matthews, A. (2018). A short guide to the history of ‘fake news’ and disinformation. International Center for Journalists, 7(2018), 2018-07.
* Prichard, C., & Rucynski Jr, J. (2019). Second language learners’ ability to detect satirical news and the
effect of humor competency training. TESOL Journal, 10(1), e00366.
* Putnam, A. L., Sungkhasettee, V. W., & Roediger III, H. L. (2017). When misinformation improves
memory: The effects of recollecting change. Psychological Science, 28(1), 36-46.
* Qi, H., Zhang, H. H., Hanceroglu, L., Caggianiello, J., & Roberts, K. P. (2018). The influence of mind-
fulness on young adolescents’ eyewitness memory and suggestibility. Applied Cognitive Psychol-
ogy, 32(6), 823-829.
* Ranick, J., Persicke, A., Tarbox, J., & Kornack, J. A. (2013). Teaching children with autism to detect
and respond to deceptive statements. Research in Autism Spectrum Disorders, 7(4), 503-508.
* Rapp, D. N., Hinze, S. R., Kohlhepp, K., & Ryskin, R. A. (2014). Reducing reliance on inaccurate
information. Memory & Cognition, 42, 11-26.
* Rindal, E. J., DeFranco, R. M., Rich, P. R., & Zaragoza, M. S. (2016). Does reactivating a witnessed memory increase its susceptibility to impairment by subsequent misinformation? Journal of Experimental Psychology: Learning, Memory, and Cognition, 42(10), 1544.
* Robertson, D. J., Mungall, A., Watson, D. G., Wade, K. A., Nightingale, S. J., & Butler, S. (2018).
Detecting morphed passport photos: A training and individual differences approach. Cognitive Research: Principles and Implications, 3(1), 1-11.
Roediger, H. L., III., & Karpicke, J. D. (2006). Test-enhanced learning: Taking memory tests improves
long-term retention. Psychological Science, 17(3), 249–255.
Rubin, V. L., & Lukoianova, T. (2015). Truth and deception at the rhetorical structure level. Journal of
the Association for Information Science and Technology, 66(5), 905–917.
* Ryu, D., Abernethy, B., Park, S. H., & Mann, D. L. (2018). The perception of deceptive information can
be enhanced by training that removes superficial visual information. Frontiers in Psychology, 9, 1132.
* Salovich, N. A., & Rapp, D. N. (2022). How susceptible are you? Using feedback and monitoring to
reduce the influence of false information. Journal of Applied Research in Memory and Cognition,
12(3), 352–363.
* Santarcangelo, M., Cribbie, R. A., & Hubbard, A. S. E. (2004). Improving accuracy of veracity judg-
ment through cue training. Perceptual and Motor Skills, 98(3), 1039-1048.
* Sarno, D. M., McPherson, R., & Neider, M. B. (2022). Is the key to phishing training persistence? Devel-
oping a novel persistent intervention. Journal of Experimental Psychology: Applied, 28(1), 85.
* Scheibe, S., Notthoff, N., Menkin, J., Ross, L., Shadel, D., Deevy, M., & Carstensen, L. L. (2014). Fore-
warning reduces fraud susceptibility in vulnerable consumers. Basic and Applied Social Psychology, 36(3), 272-279.
* Scheibenzuber, C., Hofer, S., & Nistor, N. (2021). Designing for fake news literacy training: A prob-
lem-based undergraduate online-course. Computers in Human Behavior, 121, 106796.
Scherer, L. D., McPhetres, J., Pennycook, G., Kempe, A., Allen, L. A., Knoepke, C. E., Tate, C. E., &
Matlock, D. D. (2021). Who is susceptible to online health misinformation? A test of four psycho-
social hypotheses. Health Psychology, 40(4), 274.
Schwartz, S. (2021, January 20). New media literacy standards aim to combat ‘truth decay’. Education Week. https://www.edweek.org/teaching-learning/new-media-literacy-standards-aim-to-combat-truth-decay/2021/01
Schwieren, J., Barenberg, J., & Dutke, S. (2017). The testing effect in the psychology classroom: A meta-analytic perspective. Psychology Learning & Teaching, 16(2), 179–196.
* Sellabona, E. S., Sánchez, C. R., Majoral, E. V., Guitart, M. E., Caballero, F. S., & Ortiz, J. S. (2013). Label-
ling improves false belief understanding. A training study. The Spanish Journal of Psychology, 16, E6.
* Shaw, J., Porter, S., & ten Brinke, L. (2011). Catching liars: Training mental health and legal professionals
to detect extremely high-stakes lies. The Journal of Forensic Psychiatry & Psychology, 24(2), 145-159.
* Simpson, K. E. (2008). Classic and modern propaganda in documentary film: Teaching the psychology
of persuasion. Teaching of Psychology, 35(2), 103-108.
* Sooniste, T., Granhag, P. A., & Strömwall, L. A. (2017). Training police investigators to interview to
detect false intentions. Journal of Police and Criminal Psychology, 32, 152-162.
* Stanley, J. T., & Webster, B. A. (2019). A comparison of the effectiveness of two types of deceit detec-
tion training methods in older adults. Cognitive Research: Principles and Implications, 4, 1-13.
* Stel, M., Van Dijk, E., & Olivier, E. (2009). You want to know the truth? Then don’t mimic! Psycho-
logical Science, 20(6), 693-699.
* Szpitalak, M., Woltmann, A., Polczyk, R., & Kękuś, M. (2021). Memory training as a method for
reducing the misinformation effect. Current Psychology, 40, 5410-5419.
ten Brinke, L., Vohs, K. D., & Carney, D. R. (2016). Can ordinary people detect deception after all?
Trends in Cognitive Sciences, 20(8), 579–588.
* ten Brinke, L., Lee, J. J., & Carney, D. R. (2019). Different physiological reactions when observing lies
versus truths: Initial evidence and an intervention to enhance accuracy. Journal of Personality and
Social Psychology, 117(3), 560.
* Tetterton, V. A., & Warren, A. R. (2005). Using witness confidence can impair the ability to detect
deception. Criminal Justice and Behavior, 32(4), 433-451.
* Topp-Manriquez, L. D., McQuiston, D., & Malpass, R. S. (2016). Facial composites and the misinformation effect: How composites distort memory. Legal and Criminological Psychology, 21(2), 372-389.
* Tousignant, J. P., Hall, D., & Loftus, E. F. (1986). Discrepancy detection and vulnerability to mislead-
ing postevent information. Memory & Cognition, 14(4), 329-338.
* Tseng, A. S., Bonilla, S., & MacPherson, A. (2021). Fighting “bad science” in the information age: The
effects of an intervention to stimulate evaluation and critique of false scientific claims. Journal of
Research in Science Teaching, 58(8), 1152-1178.
Tsfati, Y., Boomgaarden, H., Strömbäck, J., Vliegenthart, R., Damstra, A., & Lindgren, E. (2020). Causes
and consequences of mainstream media dissemination of fake news: Literature review and synthe-
sis. Annals of the International Communication Association, 44(2), 157–173.
Umanath, S. (2016). Age differences in suggestibility to contradictions of demonstrated knowledge: The
influence of prior knowledge. Aging, Neuropsychology, and Cognition, 23(6), 744–767.
* Umanath, S., Ries, F., & Huff, M. J. (2019). Reducing suggestibility to additive versus contradictory
misinformation in younger and older adults via divided attention and/or explicit error detection.
Applied Cognitive Psychology, 33(5), 793-805.
Vosoughi, S., Roy, D., & Aral, S. (2018). The spread of true and false news online. Science, 359(6380),
1146–1151.
Vrij, A. (2014). Opportunities: How people can improve their lie detection skills. In Detecting lies and deceit: Pitfalls and opportunities (pp. 389–418).
Vrij, A. (2019). Deception and truth detection when analyzing nonverbal and verbal cues. Applied Cognitive Psychology, 33(2), 160–167.
* Vrij, A., Leal, S., Mann, S., Vernham, Z., & Brankaert, F. (2015). Translating theory into practice: Evaluating a cognitive lie detection training workshop. Journal of Applied Research in Memory and Cognition, 4(2), 110-120.
* Vrij, A., Mann, S., Leal, S., Vernham, Z., & Vaughan, M. (2016). Train the trainers: A first step towards a science-based cognitive lie detection training workshop delivered by a practitioner. Journal of Investigative Psychology and Offender Profiling, 13(2), 110-130.
* Wang, E., Paterson, H., & Kemp, R. (2014). The effects of immediate recall on eyewitness accuracy
and susceptibility to misinformation. Psychology, Crime & Law, 20(7), 619-634.
* Weaver, B. W., Braly, A. M., & Lane, D. M. (2021). Training users to identify phishing emails. Journal
of Educational Computing Research, 59(6), 1169-1183.
West, J. D., & Bergstrom, C. T. (2021). Misinformation in and about science. Proceedings of the National Academy of Sciences, 118(15), e1912444117.
Wu, L., Morstatter, F., Carley, K. M., & Liu, H. (2019). Misinformation in social media: Definition,
manipulation, and detection. ACM SIGKDD Explorations Newsletter, 21(2), 80–90.
* Yang, S., Lee, J. W., Kim, H.-J., Kang, M., Chong, E., & Kim, E.-M. (2021). Can an online educational
game contribute to developing information literate citizens? Computers & Education, 161, 104057.
Zannettou, S., Sirivianos, M., Blackburn, J., & Kourtellis, N. (2019). The web of false information:
Rumors, fake news, hoaxes, clickbait, and various other shenanigans. Journal of Data and Infor-
mation Quality (JDIQ), 11(3), 1–37.
Zeng, E., Kohno, T., & Roesner, F. (2020). Bad news: Clickbait and deceptive ads on news and misinformation websites. Workshop on Technology and Consumer Protection.
Zhang, S., Ma, F., Liu, Y., & Pian, W. (2022). Identifying features of health misinformation on social
media sites: An exploratory analysis. Library Hi Tech, 40(5), 1384–1401.
Zhou, L., Burgoon, J. K., Nunamaker, J. F., & Twitchell, D. (2004). Automating linguistics-based cues
for detecting deception in text-based asynchronous computer-mediated communications. Group
Decision and Negotiation, 13, 81–106.
Zhou, X., & Zafarani, R. (2018). Fake news: A survey of research, detection methods, and opportunities. arXiv preprint arXiv:1812.00315.
Zhu, B., Chen, C., Loftus, E. F., He, Q., Chen, C., Lei, X., Lin, C., & Dong, Q. (2012). Brief exposure to
misinformation can lead to long-term false memories. Applied Cognitive Psychology, 26(2), 301–307.
Publisher’s Note Springer Nature remains neutral with regard to jurisdictional claims in published maps
and institutional affiliations.