Journal of Cognition

Basol, M., et al. (2020). Good News about Bad News: Gamified Inoculation Boosts Confidence and Cognitive Immunity Against Fake News. Journal of Cognition, 3(1): 2, pp. 1–9. DOI:
Good News about Bad News: Gamified Inoculation Boosts Confidence and Cognitive Immunity Against Fake News

Melisa Basol, Jon Roozenbeek and Sander van der Linden
Department of Psychology, University of Cambridge, UK
Corresponding author: Sander van der Linden
Recent research has explored the possibility of building attitudinal resistance against online misinformation through psychological inoculation. The inoculation metaphor relies on a medical analogy: by pre-emptively exposing people to weakened doses of misinformation, cognitive immunity can be conferred. A recent example is the Bad News game, an online fake news game in which players learn about six common misinformation techniques. We present a replication and extension of research into the effectiveness of Bad News as an anti-misinformation intervention. We address three shortcomings identified in the original study: the lack of a control group, the relatively low number of test items, and the absence of attitudinal certainty measurements. Using a 2 (treatment vs. control) × 2 (pre vs. post) mixed design (N = 196), we measure participants’ ability to spot misinformation techniques in 18 fake headlines before and after playing Bad News. We find that playing Bad News significantly improves people’s ability to spot misinformation techniques compared to a gamified control group and, crucially, also increases people’s level of confidence in their own judgments. Importantly, this confidence boost only occurred for those who updated their reliability assessments in the correct direction. This study offers further evidence for the effectiveness of psychological inoculation against not only specific instances of fake news, but the very strategies used in its production. Implications are discussed for inoculation theory and cognitive science research on fake news.
Keywords: Judgment; Decision making; Reasoning
The prevalence and propagation of online misinformation is a threat to science, society, and democracy (Lazer et al., 2018; Lewandowsky et al., 2017; van der Linden, Maibach, et al., 2017). Recent research has shown that increased exposure to false and misleading information can have serious consequences ranging from societal misconceptions around climate change and vaccinations (Schmid & Betsch, 2019; van der Linden, Leiserowitz, et al., 2017) to physical danger and death (Arun, 2019). Although much research continues to debate the effectiveness of debunking and fact-checking (Chan et al., 2017; Nyhan & Reifler, 2019), a large body of research in cognitive psychology emphasises the continued influence of misinformation: falsehoods are difficult to correct once they have manifested themselves in memory (Lewandowsky et al., 2012) and repeated exposure increases the perceived accuracy of fake news (Pennycook et al., 2018). Consequently, some scholars have started to explore the possibility of “prebunking”, i.e. preventative strategies against the spread of misinformation (Roozenbeek & van der Linden, 2018, 2019). Because the spread of fake news in online networks bears close resemblance to the manner in which a virus replicates (Kucharski, 2016), one promising avenue has been the revival of inoculation theory.
Cognitive inoculation is based on the biological analogy of vaccine immunisation (McGuire & Papageorgis, 1961; McGuire, 1964). It posits that the process of injecting a weakened dose of a virus to activate antibody production (to help confer resistance against future infection) can similarly be applied to the context of information processing. In other words, by warning and exposing people to severely weakened doses of attitudinal challenges, cognitive resistance or “mental antibodies” are generated against future persuasion attempts (Compton & Pfau, 2005), partly by fortifying the structure of associative memory networks (Pfau et al., 2005). Although meta-analyses have shown that inoculation messages are effective (Banas & Rains, 2010), early inoculation research was mostly restricted to “cultural truisms”, i.e. beliefs so commonly shared across the social milieu that the notion of persuasive attacks against them appeared unlikely (McGuire, 1964). In the real world, however, people will often hold very different prior beliefs about a particular issue. Accordingly, McGuire’s restrictive use of the metaphor has been criticized (Pryor & Steinfatt, 1978) and ultimately led to a rethinking of the medical analogy (Wood, 2007). In fact, more recent studies have
demonstrated the efficacy of inoculation even when participants have differing prior attitudes, for example in the context of disinformation campaigns about climate change (Cook et al., 2017; van der Linden, Leiserowitz, et al., 2017). Accordingly, the consensus view is that “the analogy is more instructive than restrictive” (Compton, 2013, p. 233). Of course, from a theoretical point of view, we cannot speak of purely prophylactic inoculation in the context of most real-world settings, but just as medicine has advanced to distinguish between prophylactic and therapeutic vaccines, therapeutic inoculation approaches can still confer protective benefits even among those already “afflicted” by boosting immune responses in the desired direction (Compton, 2019). Yet, it remains unclear whether the same theoretical mechanisms that facilitate prophylactic inoculation (e.g. confidence in defending one’s beliefs) also boost the efficacy of therapeutic inoculation.
Moreover, current inoculation research suffers from two primary limitations: 1) scholarship has predominantly focused on conferring attitudinal resistance against specific issues, and 2) preemptive refutation has traditionally been done in a passive rather than active manner (Banas & Rains, 2010). These two issues substantially limit both the scalability and generalisability of the “vaccine” metaphor (Bonetto et al., 2018; Roozenbeek & van der Linden, 2019). Accordingly, recent research has focused on the possibility of a “broad-spectrum vaccine” against misinformation (Roozenbeek & van der Linden, 2018, 2019). The broad-spectrum approach requires two theoretical innovations: 1) shifting focus away from pre-emptively exposing participants to weakened examples of specific instances of (mis)information to pre-emptively exposing participants to weakened examples of the techniques that underlie the production of most misinformation, and 2) revisiting McGuire’s original prediction (McGuire & Papageorgis, 1961) that active inoculation (letting participants generate their own “antibodies”) would be more effective in conferring resistance to persuasion than when participants are provided with a defensive pre-treatment in a passive manner. In a novel paradigm pioneered by Roozenbeek and van der Linden (2019), participants enter a simulated social media environment (Twitter) where they are gradually exposed to weakened “doses” of misinformation strategies and actively encouraged to generate their own content. The intervention is a free social impact game called Bad News (Figure 1A), developed in collaboration with the Dutch media platform DROG (DROG, 2018), in which players learn about six common misinformation techniques (impersonating people online, using emotional language, group polarisation, spreading conspiracy theories, discrediting opponents, and trolling; Figure 1B).
The purpose of the game is to produce and disseminate disinformation in a controlled environment whilst gaining an online following and maintaining credibility. Players start out as an anonymous netizen and eventually rise to manage their own fake news empire. The theoretical motivation for the inclusion of these six strategies is explained in detail in Roozenbeek and van der Linden (2019) and covers many common disinformation scenarios, including false amplification and echo chambers. Moreover, although the game scenarios themselves are fictional, they are modelled after real-world events. In short, the gamified inoculation treatment incorporates an active and experiential component to resistance-building.
Figure 1: Landing screen of Bad News (Panel A) and simulated Twitter engine (Panel B).

The initial study by Roozenbeek and van der Linden (2019) relied on a self-selected online sample of approximately 15,000 participants in a pre-post (within) gameplay design. Although the study provided preliminary evidence that the game increases people’s ability to detect and resist a whole range of misinformation (in the form of deceptive Twitter posts), the study suffered from a number of important theoretical and methodological limitations. For example, although the original study did include various “real news” control items, it lacked a proper randomized control group. This is important because there could be a secular trend so that people downgrade their reliability ratings of the fake tweets (pre-post) regardless of what intervention they are assigned to. Second, because the testing happened within the game environment, the original study only included a limited number of fake news items (one survey item per misinformation technique). Third, on a theoretical level, the study only looked at reliability judgments and thus could not determine how confident or certain people actually were in their beliefs. This is important, because attitude certainty (a dimension of attitude strength) is generally regarded as the conviction that held attitudes are correct (Tormala & Petty, 2004) and functions as a critical mechanism in resisting persuasion attempts (Compton & Pfau, 2005). Accordingly, this study addresses three key shortcomings in the original research by 1) including a randomized control group, 2) adding a larger battery of items, and 3) evaluating whether the intervention also boosts confidence in reliability judgments.
Participants and procedure
This study employed a 2 (Bad News vs. Control) × 2 (pre vs. post) mixed design to test the efficacy of active (gamified) inoculation in conferring attitudinal resistance to misinformation. The independent variable consisted of either the treatment condition, in which participants played the Bad News game, or a control condition, in which participants were assigned to play Tetris (to control for gamification; Tetris specifically was chosen because it is in the public domain and requires little prior explanation before playing). Following Roozenbeek and van der Linden (2019), the dependent variable consisted of an assessment of the reliability of 18 misinformation headlines in the form of Twitter posts (please see Supplementary Figure S5). As the Bad News game covers six misinformation techniques, three items per technique were included.1 These Twitter posts were created to be realistic, but not real, both to avoid memory confounds (participants may have seen “real” fake news headlines before) and to be able to experimentally isolate the misinformation techniques. Taking into account the average inoculation effect reported in previous research (Roozenbeek & van der Linden, 2019), an a priori power analysis was conducted with G*Power using α = 0.05, f = 0.26 (d = 0.52) and power of 0.90 with two experimental conditions. The minimal sample size required for detecting the main effect was approximately 158. A total of 197 participants were recruited through the online crowdsourcing platform Prolific Academic, which has been reported to produce higher data quality than MTurk (Peer et al., 2017). Consenting participants (58% male, modal age bracket = 18–24, 20% higher educated, 61% liberal, 80% white2) completed the survey, were debriefed, and paid £2.08 in compensation. This study was approved by the Cambridge Psychology Research Ethics Committee.
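For readers who wish to check the arithmetic, the a priori sample-size calculation above can be approximated in a few lines. This is an illustrative sketch using the standard normal approximation for a two-group comparison of means, not the exact noncentral-F routine that G*Power uses, so it lands near, rather than exactly on, the reported N ≈ 158:

```python
from math import ceil
from statistics import NormalDist

def two_group_sample_size(d: float, alpha: float = 0.05, power: float = 0.90) -> int:
    """Approximate total N for comparing two independent group means
    (equivalent to the main effect in a one-way ANOVA with two
    conditions), via the normal approximation."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided criterion
    z_beta = NormalDist().inv_cdf(power)           # power requirement
    n_per_group = 2 * ((z_alpha + z_beta) / d) ** 2
    return 2 * ceil(n_per_group)

# d = 0.52 (i.e. f = 0.26), alpha = .05, power = .90, as in the paper
total_n = two_group_sample_size(0.52)
print(total_n)  # 156; G*Power's exact noncentral-F routine gives ~158
```

The small gap between 156 and 158 reflects G*Power's use of the t/noncentral-F distribution rather than the normal approximation.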
A plug-in was created so that the game could be embedded in Qualtrics and pre-post testing could take place outside of the game environment to further enhance ecological validity. Upon giving informed consent, participants were randomly presented with 18 fictitious Twitter posts (Figure S5) and, on a standard 7-point scale, reported how reliable they perceived each post to be and how confident they were in their judgements. Subsequently, participants were randomly assigned to a condition. In the inoculation condition, participants (n = 96) were asked to play the “Bad News” game for about 15 minutes. Participants were assigned a password for completion which they could only receive after completing the final level (badge). Participants (n = 102) in the control condition played Tetris for 15 minutes in the same manner. After treatment exposure, all participants were asked to complete the same set of outcome measures.
Outcome Measures
Perceived reliability
To assess participants’ perceived reliability, a single-item measure was presented alongside the 18 (6 × 3) fake Twitter posts (example polarization item: “New study shows that right-wing people lie more often than left-wing people”; see Figure S5). Participants reported the perceived reliability of each post on a 7-point Likert scale from not reliable at all (1) through neutral (4) to very reliable (7). Following Roozenbeek and van der Linden (2019), to form a general fake news scale of perceived reliability, all 18 fake news items were averaged. An
initial reliability analysis suggested good internal consistency (M = 3.17, SD = 0.85, α = 0.84) of the 18-item fake news scale. A subsequent exploratory principal component analysis (PCA) was also run on the fake news items. According to the Kaiser criterion, results indicated that the items clearly loaded on a single dimension with an eigenvalue of 3.15, accounting for 53% of the variance (please see Scree plot, Supplementary Figure S6). Thus, for ease of interpretation and to limit multiple testing, all 18 items were collapsed and treated as one overall measure of fake news judgments. Nonetheless, descriptive statistics for badge-level results are also presented in Supplementary Table 1.

1 In the original study by Roozenbeek and van der Linden (2019), only six items were included. We included the original items plus two new ones for each badge using the same approach.
2 Socio-demographics (except for ideology) were answered by 52% (n = 104) of the 197 participants.
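The internal-consistency statistic reported above (Cronbach’s α) can be reproduced from any item-response matrix with the usual formula α = k/(k − 1) × (1 − Σ item variances / variance of the total score). A minimal, stdlib-only sketch on toy data (the values below are invented for illustration and are not the study’s):

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha for a list of item-score columns
    (each column holds one item's scores across participants)."""
    k = len(items)
    item_vars = sum(pvariance(col) for col in items)
    totals = [sum(scores) for scores in zip(*items)]  # per-participant total
    total_var = pvariance(totals)
    return k / (k - 1) * (1 - item_vars / total_var)

# Toy data: 3 items rated by 5 participants (one list per item)
items = [
    [2, 3, 4, 2, 5],
    [1, 3, 4, 2, 4],
    [2, 4, 5, 3, 5],
]
alpha = cronbach_alpha(items)  # high alpha: the toy items move together
```

Averaging the 18 items into one scale score, as the authors do, is justified precisely when α is high and the items load on a single component.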
Attitudinal certainty
Similarly, a single-item measure was presented alongside each of the news items, asking participants to indicate how confident they were in their reliability assessment on a 7-point Likert scale, ranging from not at all confident (1) through neutral (4) to very confident (7). Scale reliability analysis on the averaged 18 attitude certainty items (6 × 3) indicated high internal consistency (M = 5.23, SD = 0.84, α = .89). Similarly, PCA results indicated that the items loaded on a single dimension with an eigenvalue of 3.88, accounting for 65% of the variance (Supplementary Figure S7; for badge-level results see Table S2).
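The dimensionality check used in both PCAs (the Kaiser criterion) amounts to counting the eigenvalues of the item correlation matrix that exceed 1. A sketch on simulated one-factor data; the sample size, loadings, and noise level below are invented for illustration, not taken from the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate 200 respondents on 6 items driven by one latent factor,
# mimicking a unidimensional scale (toy data).
n, k = 200, 6
factor = rng.normal(size=(n, 1))
scores = factor + 0.8 * rng.normal(size=(n, k))  # item = factor + noise

# PCA on the correlation matrix: eigenvalues, descending
corr = np.corrcoef(scores, rowvar=False)
eigenvalues = np.sort(np.linalg.eigvalsh(corr))[::-1]

# Kaiser criterion: retain components with eigenvalue > 1
n_components = int(np.sum(eigenvalues > 1))
explained = eigenvalues[0] / k  # variance share of the first component
```

With a genuine single factor, only the first eigenvalue clears 1 and it absorbs most of the variance, which is the pattern the scree plots in Figures S6–S7 display.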
Political ideology
Political ideology was measured on a standard self-placement scale, ranging from 1 = very conservative, through 4 = moderate, to 7 = very liberal. Although often more diverse than MTurk (Peer et al., 2017), the Prolific sample (M = 4.69, SD = 1.42) was fairly liberal, with 21% conservatives, 18% moderates, and 61% identifying as liberal.
Results
A one-way ANOVA was conducted to compare the effect of treatment condition (inoculation, control) on the difference in pre-and-post reliability scores of the fake news items. Results demonstrate a significant main effect of treatment condition on aggregated reliability judgements: F(1, 196) = 17.54, MSE = 0.36, p < .001, η2 = .082.3 Specifically, compared to the control condition, the shift in post-pre difference scores was significantly more negative in the inoculation condition (M = –0.09 vs. M = –0.45, Mdiff = –0.36, 95% CI [–0.52, –0.19], d = –0.60, Figure 2). A separate two-way ANOVA revealed no main effect of political ideology, F(2, 179) = 2.80, p = 0.06, nor an interaction with condition, F(2, 179) = 0.96, p = 0.38.4 In short, compared to their assessments on the pre-test, individuals demonstrated a larger decrease in perceived reliability of fake news items in the inoculation group versus the control condition. Similar patterns were observed at the badge level in the game (please see Supplementary Table 1), although there was some heterogeneity across badges, with average effect-sizes ranging from d = 0.14 (polarization) to d = 0.58 (discrediting).
3 A linear regression with post-test as the dependent variable, condition as a dummy, and pre-test as a covariate gives the same result. There was no significant difference at pre-test between the conditions (Minoculation = 3.14 vs. Mcontrol = 3.32, Mdiff = –0.185, 95% CI [–0.42, 0.05], p = 0.12, see Supplementary Table S1 and Figs S1–2).
4 Though conservatives (M = 3.56) were significantly more susceptible than liberals (M = 3.05) on the pre-test, t(147) = 3.22, d = 0.61,
p < 0.01, consistent with Roozenbeek and van der Linden (2019).
Figure 2: Median difference (post-pre) in reliability assessments of fake news items across treatment conditions with jitter (Panel A) and density plots of the data distributions (Panel B).
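With only two conditions, the one-way ANOVA on the (post − pre) difference scores reduces to a simple between/within mean-square ratio (equivalently, F = t²), and the reported d is a pooled-SD standardised mean difference. A self-contained sketch on toy difference scores (the values below are invented for illustration, not the study’s data):

```python
from statistics import mean, stdev

def one_way_anova_two_groups(g1, g2):
    """One-way ANOVA F statistic for two independent groups:
    between-group mean square over within-group mean square."""
    m1, m2 = mean(g1), mean(g2)
    grand = mean(g1 + g2)
    ss_between = len(g1) * (m1 - grand) ** 2 + len(g2) * (m2 - grand) ** 2
    ss_within = sum((x - m1) ** 2 for x in g1) + sum((x - m2) ** 2 for x in g2)
    df_within = len(g1) + len(g2) - 2
    return ss_between / (ss_within / df_within)  # df_between = 1

def cohens_d(g1, g2):
    """Cohen's d using the pooled standard deviation."""
    n1, n2 = len(g1), len(g2)
    pooled = (((n1 - 1) * stdev(g1) ** 2 + (n2 - 1) * stdev(g2) ** 2)
              / (n1 + n2 - 2)) ** 0.5
    return (mean(g1) - mean(g2)) / pooled

# Toy (post - pre) difference scores: inoculation shifts more negative
inoculation = [-0.6, -0.4, -0.5, -0.3, -0.7, -0.2]
control = [-0.1, 0.0, -0.2, 0.1, -0.1, 0.0]
F = one_way_anova_two_groups(inoculation, control)
d = cohens_d(inoculation, control)
```

For equal group sizes n, the two statistics are linked by F = d² · n/2, which offers a quick sanity check on reported F values and effect sizes.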
Furthermore, a one-way ANOVA also demonstrated a significant main effect of treatment condition on (post-pre) confidence scores (Figure 3), F(1, 196) = 13.49, MSE = 0.27, p < .001, η2 = .06. Mean difference comparisons across conditions indicate a significantly higher (positive) difference score in the inoculation group compared to the control condition (M = 0.22 vs. M = –0.06, Mdiff = 0.27, 95% CI [0.13, 0.42], d = 0.52).5 This suggests that compared to their assessments prior to treatment exposure, individuals demonstrated a larger increase in confidence in the inoculation versus the control condition. Once again, a two-way ANOVA revealed no main effect of political ideology, F(2, 179) = 1.22, p = 0.30, nor an interaction with condition, F(2, 179) = 0.14, p = 0.87. At the badge level (Supplementary Table 2), effect-sizes for increased confidence ranged from d = 0.23 (discrediting) to d = 0.49 (emotion). Importantly, the increase in confidence only occurred for those (71%) who broadly updated their reliability judgments in the right direction6 (Minoculation = 0.29 vs. Mcontrol = –0.02, Mdiff = 0.31, 95% CI [0.13, 0.49], t(126) = 3.37, p < 0.01). In contrast, no gain in confidence was found among those who either did not change or updated their judgments in the wrong direction (Minoculation = 0.03 vs. Mcontrol = –0.11, Mdiff = 0.14, 95% CI [–0.11, 0.39], t(68) = 1.13, p = 0.26).
Discussion and conclusion
This study successfully demonstrated the efficacy of a “broad-spectrum” inoculation against misinformation in the form of an online fake news game. Using a randomized design, multiple items, and measures of attitudinal certainty, we expand on the initial study by Roozenbeek and van der Linden (2019). Overall, we find clear evidence in support of the intervention. Whereas Roozenbeek and van der Linden (2019) reported an average effect-size of d = 0.52 for aggregated reliability judgments using a self-selected within-subject design, we find very similar effect-sizes in a randomized controlled design (d = 0.60). The range in effect-sizes observed on the badge level (d = 0.14 to d = 0.58) is also similar to what Roozenbeek and van der Linden (2019) reported (d = 0.16 to d = 0.36), and can be considered sizeable in the context of resistance to persuasion research (Banas & Rains, 2010; Walter & Murphy, 2018). In fact, Funder and Ozer (2019) recommend describing these effects as medium to large and practically meaningful, especially considering the refutational-different rather than refutational-same approach adopted here, i.e. in the game, participants were trained on different misleading headlines than they were tested on pre-and-post. Moreover, the fictitious nature of the items helps rule out potential memory confounds, and the lack of variation on the measures (pre-post) in the control group should decrease concerns about potential demand characteristics.
Importantly, consistent with Roozenbeek and van der Linden (2019), none of the main effects revealed an interaction with political ideology, suggesting that the intervention works as a “broad-spectrum” vaccine across the political spectrum. However, it is interesting that in both studies, the smallest effect is observed for the polarization badge. One potential explanation for the lower effect on polarization is confirmation bias: in the game, decisions can still be branched in an ideologically congenial manner. Given the worldview backfire effect (Lewandowsky et al., 2012), future research should evaluate to what extent inoculation is effective for ideologically congruent versus non-congruent fake news. Nonetheless, these results complement prior findings which suggest that susceptibility to fake news is the result of a lack of thinking rather than only partisan motivated reasoning (Pennycook & Rand, 2019).

5 A linear regression with post-test as the dependent variable, condition as a dummy, and pre-test as a covariate gives the same result. There was no significant difference in confidence judgments at pre-test between conditions (Minoculation = 5.25 vs. Mcontrol = 5.27, Mdiff = 0.02, 95% CI [–0.24, 0.20], p = 0.88, please see Supplementary Table S2 and Figures S3–4).
6 Meaning that fake headlines were deemed less reliable on the post-test compared to the pre-test (i.e. Mdiff < 0).

Figure 3: Median change scores (post-pre) of confidence in reliability judgments across treatment conditions with jitter (Panel A) and density plots of the data distributions (Panel B).
Lastly, the current study also significantly advances our understanding of the theoretical mechanisms on which the intervention acts. For example, while inoculated individuals improved in their reliability assessments of the fake news items, the average confidence they expressed in their judgements also increased significantly and substantially. Importantly, the intervention only significantly increased confidence amongst those who updated their judgments in the right direction (i.e. correctly judging manipulative items to be less reliable). These findings are supported by previous literature demonstrating the certainty-bolstering effects of inoculation treatments (Tormala & Petty, 2004) and may suggest that confidence plays a key role in both prophylactic and therapeutic inoculation approaches. Yet, more research is required to identify whether an increase in confidence pertains to the fake items themselves or rather to the ability to refute misinformation in general. For example, Tormala and Petty (2004) have argued that these mechanisms are likely to be intertwined, as individuals might be confident in their ability to refute counterarguments because they perceive their attitudes to be valid and therefore are both more willing and likely to defend their beliefs.
This study did suffer from a number of necessary limitations. First, we controlled for modality (given that both Bad News and Tetris are games), but lacked a condition that is cognitively comparable to the inoculation condition. It will be important for future research to evaluate to what extent “active” gamified inoculation is superior to “passive” approaches—including traditional fact-checking and other critical thinking interventions—especially in terms of eliciting a) motivation, b) the ability to help people discern reliable from fake news, and c) the rate at which the inoculation effect decays over time. Second, although we improved on the initial design by having participants evaluate simulated Twitter posts (pre and post) outside of the game environment, we were not able to determine if playing the Bad News game led to an increased ability to detect real news or to changes in online behaviour (e.g. whether players shared less fake news on social media than people who did not play the game). Third, the fact that a small minority of individuals appear to engage in contrary updating is worth noting and is a finding future work may want to investigate further (e.g. in terms of prior motivations). Fourth, we did not examine the duration of the inoculation effect over time, but we encourage future research to do so given that inoculation treatments are known to decay over time (Banas & Rains, 2010). Lastly, our Prolific sample was likely not representative of the U.K. population.
In conclusion, this study addressed the main shortcomings identified by Roozenbeek and van der Linden (2019) in their original evaluation of the Bad News game: the lack of a control group, a relatively small number of items to measure effectiveness, and the absence of attitudinal certainty measurements. We conclude that, compared to a control group, the generalized inoculation intervention not only successfully conferred resistance to online manipulation, but also boosted confidence in the ability to resist fake news and misinformation.
Data Accessibility Statement
The raw dataset necessary to reproduce the analyses reported in this paper can be retrieved from https://
Additional Files
The additional files for this article can be found as follows:
• Supplementary Table 1. Average reliability (pre-post) judgments overall and for each fake news
badge by experimental condition. DOI:
• Supplementary Table 2. Average confidence (pre-post) judgments overall and for each fake news
badge by experimental condition. DOI:
• Supplementary Figure 1. Mean reliability judgments by condition (pre-test). DOI: https://doi.
• Supplementary Figure 2. Mean reliability judgments by condition (post-test). DOI: https://doi.
• Supplementary Figure 3. Mean confidence judgments by condition (pre-test). DOI: https://doi.
• Supplementary Figure 4. Mean confidence judgments by condition (post-test). DOI: https://doi.
• Supplementary Figure 5. All 18 fake news items participants viewed pre-post by badge. DOI:
• Supplementary Figure 6. Scree plot for reliability judgments following PCA. DOI: https://doi.
• Supplementary Figure 7. Scree plot for confidence judgments following PCA. DOI: https://doi.
Ethics and Consent
Ethics for this study was approved by the Cambridge Psychology Research Ethics Committee.

Acknowledgements
We would like to thank Ruurd Oosterwoud, DROG and Gusmanson Design for their efforts in helping to create the Bad News game.
Funding Information
The authors thank the University of Cambridge and the Bill and Melinda Gates Foundation for funding
this research.
Competing Interests
The authors have no competing interests to declare.
Author Contributions
M.B. and J.R. designed the study, developed the items and measures, and carried out the study. M.B. conducted the data analysis and wrote the majority of the paper. J.R. developed the content of the Bad News game and wrote part of the paper. S.v.d.L. wrote part of the paper, conducted data analysis, co-developed the Bad News game, and supervised the development of the survey items and study design.
References
Arun, C. (2019). On WhatsApp, Rumours, and Lynchings. Economic & Political Weekly, 6, 30–35.
Banas, J. A., & Rains, S. A. (2010). A Meta-Analysis of Research on Inoculation Theory. Communication
Monographs, 77(3), 281–311. DOI:
Bonetto, E., Troïan, J., Varet, F., Monaco, G. L., & Girandola, F. (2018). Priming Resistance to Persuasion
decreases adherence to Conspiracy Theories. Social Influence, 13(3), 125–136. DOI:
Chan, M., Pui, S., Jones, C. R., Hall Jamieson, K., & Albarracín, D. (2017). Debunking: A Meta-Analysis
of the Psychological Efficacy of Messages Countering Misinformation. Psychological Science, 28(11),
1531–1546. DOI:
Compton, J. (2013). Inoculation theory. In J. P. Dillard, & L. Shen (Eds.), The Sage Handbook of
Persuasion: Developments in Theory and Practice (pp. 220–237). DOI:
Compton, J. (2019). Prophylactic versus therapeutic inoculation treatments for resistance to influence.
Communication Theory, qtz004. DOI:
Compton, J. A., & Pfau, M. (2005). Inoculation theory of resistance to influence at maturity: Recent progress
in theory development and application and suggestions for future research. Annals of the International
Communication Association, 29(1), 97–146. DOI:
Cook, J., Lewandowsky, S., & Ecker, U. K. (2017). Neutralizing misinformation through inoculation:
Exposing misleading argumentation techniques reduces their influence. PloS one, 12(5), e0175799.
DROG. (2018). A good way to fight bad news. Retrieved from www.aboutbad-
Funder, D. C., & Ozer, D. J. (2019). Evaluating Effect Size in Psychological Research: Sense and Nonsense. Advances in Methods and Practices in Psychological Science, 2(2), 156–168. DOI: https://doi.
Kucharski, A. (2016). Post-truth: Study epidemiology of fake news. Nature, 540(7634), 525. DOI: https://
Lazer, D. M., Baum, M. A., Benkler, Y., Berinsky, A. J., Greenhill, K. M., Menczer, F., & Schudson, M.
(2018). The science of fake news. Science, 359(6380), 1094–1096. DOI:
Lewandowsky, S., Ecker, U. K. H., & Cook, J. (2017). Beyond Misinformation: Understanding and Coping
with the “Post-Truth” Era. Journal of Applied Research in Memory and Cognition, 6(4), 353–369. DOI:
Lewandowsky, S., Ecker, U. K. H., Seifert, C. M., Schwarz, N., & Cook, J. (2012). Misinformation and Its
Correction: Continued Influence and Successful Debiasing. Psychological Science in the Public Interest,
13(3), 106–131. DOI:
McGuire, W. J. (1964). Inducing resistance against persuasion: Some contemporary approaches. Advances
in Experimental Social Psychology, 1, 191–229. DOI:
McGuire, W. J., & Papageorgis, D. (1961). Resistance to persuasion conferred by active and passive prior
refutation of the same and alternative counterarguments. Journal of Abnormal and Social Psychology, 63,
326–332.
Nyhan, B., Porter, E., Reifler, J., & Wood, T. J. (2019). Taking fact-checks literally but not seriously?
The effects of journalistic fact-checking on factual beliefs and candidate favorability. Political Behavior, 1–22.
Peer, E., Brandimarte, L., Samat, S., & Acquisti, A. (2017). Beyond the Turk: Alternative platforms for
crowdsourcing behavioral research. Journal of Experimental Social Psychology, 70, 153–163.
Pennycook, G., Cannon, T. D., & Rand, D. G. (2018). Prior exposure increases perceived accuracy of fake
news. Journal of Experimental Psychology: General, 147(12), 1865–1880.
Pennycook, G., & Rand, D. G. (2019). Lazy, not biased: Susceptibility to partisan fake news is better
explained by lack of reasoning than by motivated reasoning. Cognition, 188, 39–50.
Pfau, M., Ivanov, B., Houston, B., Haigh, M., Sims, J., Gilchrist, E., & Richert, N. (2005). Inoculation and mental processing: The instrumental role of associative networks in the process of resistance to counterattitudinal influence. Communication Monographs, 72(4), 414–441.
Pryor, B., & Steinfatt, T. M. (1978). The effects of initial belief level on inoculation theory and
its proposed mechanisms. Human Communication Research, 4(3), 217–230.
Roozenbeek, J., & van der Linden, S. (2018). The fake news game: actively inoculating against the risk of
misinformation. Journal of Risk Research, 22(5), 570–580.
Roozenbeek, J., & van der Linden, S. (2019). Fake news game confers psychological resistance against
online misinformation. Palgrave Communications, 5, 65.
Schmid, P., & Betsch, C. (2019). Effective strategies for rebutting science denialism in public discussions.
Nature Human Behaviour.
Tormala, Z. L., & Petty, R. E. (2004). Source Credibility and Attitude Certainty: A Metacognitive Analysis of Resistance to Persuasion. Journal of Consumer Psychology, 14(4), 427–442.
van der Linden, S., Leiserowitz, A., Rosenthal, S., & Maibach, E. (2017). Inoculating the Public against
Misinformation about Climate Change. Global Challenges, 1(2), 1600008.
van der Linden, S., Maibach, E., Cook, J., Leiserowitz, A., & Lewandowsky, S. (2017). Inoculating against
misinformation. Science, 358(6367), 1141–1142.
Walter, N., & Murphy, S. T. (2018). How to unring the bell: A meta-analytic approach to correction of misinformation. Communication Monographs, 85(3), 423–441.
Wood, M. L. (2007). Rethinking the inoculation analogy: Effects on subjects with differing preexisting
attitudes. Human Communication Research, 33(3), 357–378.
How to cite this article: Basol, M., Roozenbeek, J., and van der Linden, S. 2020 Good News about Bad News: Gamified Inoculation Boosts Confidence and Cognitive Immunity Against Fake News. Journal of Cognition, 3(1): 2, pp. 1–9.

Submitted: 05 August 2019 Accepted: 02 December 2019 Published: 10 January 2020
Copyright: © 2020 The Author(s). This is an open-access article distributed under the terms of the Creative Commons Attribution 4.0 International License (CC-BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Journal of Cognition is a peer-reviewed open access journal published by Ubiquity Press.
... The team of authors may also influence the effectiveness of psychological inoculation. Van der Linden's team [1] introduced psychological inoculation to the misinformation problem and has conducted many related studies with collaborators [24,26-28]. We used research team (van der Linden's team vs. other researchers) as a moderator to examine the reliability of the psychological inoculation effect. ...
... Major misinformation themes included health (k=15) and climate change (k=7), the mean participant age distribution was 18 to 48 years, and most studies were conducted in the United States (k=28). A total of 31 studies reported misinformation credibility assessment [7,13,14,16,22-24,26-28,39-48], 26 reported real information credibility assessment [1,7,14-16,22-24,27,41-43,49-54], 12 reported credibility discernment [7,23,24,27,41], 12 reported misinformation sharing intention [7,13,23,40,45,50], 11 reported real information sharing intention [7,23,51,55], and 8 studies reported sharing discernment [7,23]. ...
... Sensitivity analysis revealed effect sizes ranging from d=-0.37 (95% CI -0.51 to -0.23; P<.001) to d=-0.27 (95% CI -0.34 to -0.21; P<.001), demonstrating stability of the results [7,13,14,16,22-24,26-28,39-48]. SMD: standardized mean difference. ...
Full-text available
Background: The prevalence of misinformation poses a substantial threat to individuals’ daily lives, necessitating the deployment of effective remedial approaches. One promising strategy is psychological inoculation, which pre-emptively immunizes individuals against misinformation attacks. However, uncertainties remain regarding the extent to which psychological inoculation effectively enhances the capacity to differentiate between misinformation and real information. Objective: To reduce the potential risk of misinformation about digital health, this study aims to examine the effectiveness of psychological inoculation in countering misinformation with a focus on several factors, including misinformation credibility assessment, real information credibility assessment, credibility discernment, misinformation sharing intention, real information sharing intention, and sharing discernment. Methods: Following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, we conducted a meta-analysis by searching 4 databases (Web of Science, APA PsycINFO, ProQuest, and PubMed) for empirical studies based on inoculation theory and outcome measure–related misinformation published in the English language. Moderator analyses were used to examine the differences in intervention strategy, intervention type, theme, measurement time, team, and intervention design. Results: Based on 42 independent studies with 42,530 subjects, we found that psychological inoculation effectively reduces misinformation credibility assessment (d=–0.36, 95% CI –0.50 to –0.23; P
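The pooled standardized mean differences reported in this abstract come from inverse-variance meta-analytic weighting. A minimal sketch of the fixed-effect version of that computation (illustrative only: the study-level inputs below are hypothetical, and the review's actual model may be random-effects):

```python
import math

def pool_fixed_effect(effects, variances):
    """Inverse-variance (fixed-effect) pooling of standardized mean
    differences: each study is weighted by 1/variance."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * d for w, d in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    ci = (pooled - 1.96 * se, pooled + 1.96 * se)  # 95% confidence interval
    return pooled, ci

# Hypothetical study-level effects and variances (not the review's data):
d, (lo, hi) = pool_fixed_effect([-0.45, -0.30, -0.25], [0.02, 0.01, 0.015])
print(f"pooled d = {d:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```

The precision-weighted average explains why a meta-analytic confidence interval (e.g., the review's d=–0.36, 95% CI –0.50 to –0.23) is narrower than those of the individual studies.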
... This is becoming a critical skill as it is increasingly easy to produce, distribute, and weaponize misinformation with serious real-world consequences [3,93]. The idea of using games to approach this problem is not new [12,40,56,91], but results have been mixed at best, possibly because the predominant format of these games seems to be quiz-like, resembling more a lecture than a game [39]. We propose that providing the foundational skills necessary to become autonomous in learning and adapting to new information stimuli may be a better option, something that games like the Orwell series [126] have attempted by placing the player in the role of the investigator and constructor of truth. ...
... Moreover, some have been specifically designed to foster media literacy and help distinguish between genuine and fake news, such as Bad News or Factitious-Pandemic Edition [48]. But their main advantage lies in their ability to demonstrate how things work in a practical manner by engaging users in a vivid experience [24]. However, it should not be surprising that their effectiveness relies on the inherent possibilities of each game [38]. ...
Full-text available
Despite access to reliable information being essential for equal opportunities in our society, current school curricula only include some notions about media literacy in a limited context. Thus, it is necessary to create scenarios for reflection on and a well-founded analysis of misinformation. Video games may be an effective approach to foster these skills and can seamlessly integrate learning content into their design, enabling achieving multiple learning outcomes and building competencies that can transfer to real-life situations. We analyzed 24 video games about media literacy by studying their content, design, and characteristics that may affect their implementation in learning settings. Even though not all learning outcomes considered were equally addressed, the results show that media literacy video games currently on the market could be used as effective tools to achieve critical learning goals and may allow users to understand, practice, and implement skills to fight misinformation, regardless of their complexity in terms of game mechanics. However, we detected that certain characteristics of video games may affect their implementation in learning environments, such as their availability, estimated playing time, approach, or whether they include real or fictional worlds, variables that should be further considered by both developers and educators.
... One well-researched game is the "Bad News" game developed by Roozenbeek and van der Linden (TILT & Cambridge Social Sciences Decision-Making Lab). This game was found to significantly improve participants' misinformation identification techniques, as well as increase their confidence in their evaluations (Basol, Roozenbeek, & van der Linden, 2020). Unlike an intervention that focuses on a specific issue or topic, the Bad News game is an active, technique-based inoculation strategy that teaches about misinformation-spreading techniques without being limited to specific topics. ...
Full-text available
This narrative review examines what misinformation interventions—both before misconception (prebunking), and after misconception (debunking)—are effective, and how they can be applied to library information literacy instruction and outreach. To conduct this review, the researcher conducted carefully considered searches in several library science, education, psychology, and communication databases. The review revealed that there is considerable potential for librarians to combat misinformation using the interventions explored in the literature, both in their instruction and in their outreach/programming efforts. Some ideas for how this could be accomplished are explored.
... For instance, when dealing with racial hoaxes, it becomes possible to implement preventive pre-bunking interventions. Here, adolescents, as potential news consumers and future information professionals, are equipped with the tools to recognize media biases (Lutzke et al. 2019; Paul and Elder 2004; Basol et al. 2020; D'Errico et al. 2023) and understand their long-term effects. ...
Full-text available
This paper explores the possibility of preventing prejudice among adolescents by promoting the analytical processing of social media content emerging from racial misinformation. Specifically, to this end we propose an intervention that centers on recognizing stereotypical beliefs and other media biases about a group of people in misleading news. To better understand the variables that contribute to improving socio-analytical performance in the face of such misinformation, we investigated the influence of implicit associations as a tendency toward the automatic labeling of groups, as well as two dimensions of perceived self-efficacy in the face of misinformation, one active and one inhibitory. Our results demonstrate the presence of a negative link between affective prejudice and socio-analytical processing, and that this analytical performance toward misleading news is negatively related to the individual tendency toward implicit activation, and is also explained by the inhibitory factor of the perceived efficacy toward misinformation. The role of the active factor related to the perceived ability of fact-checking is not significant. This research suggests that education focused on the socio-analytical processing of misleading news in social media feeds can be an effective means of intervening in online affective prejudice among adolescents; the implications and limitations of our findings for future research in this area are discussed.
... In ethics education, serious games and other forms of experiential learning have been found to be among the most effective teaching approaches (Katsarov et al. 2021). Serious games have been developed to sensitize people to a wide range of ethical issues related to social media, including cybermobbing (Calvo-Morata et al. 2020), data protection and cybersecurity (Dewes et al. 2022; Ryan et al. 2020), misinformation and trolling (e.g., Basol et al. 2020), and political radicalization (e.g., Hidden Codes, Playing History 2022; Menendez-Ferreira et al. 2022). However, thus far, we have not been able to detect any serious games on the responsible use of social media in marketing and advertisement, one of the focal areas of our project (DI-SZENARIO), which focuses on designing and implementing game-based-learning curricula for students of business and technology. ...
Conference Paper
Full-text available
Many ethical problems and risks are connected to the use of social media for purposes of marketing. Serious games have been shown to be particularly effective in promoting ethical sensitivity in other domains. In this paper, we outline the concept of uFood, a serious moral game to sensitize students of business and technology to issues related to social-media marketing. We present the game's educational strategy against the background of findings from moral psychology and studies of game-based learning. A central question was whether to employ a "no-warning" strategy with or without immediate feedback on ethical dimensions of decision making. It is an open research question which of these strategies could be more promising. Therefore, we have decided to develop two versions of the game for experimental purposes.
... G*Power, a program that assists in determining the minimal sample size needed to test a hypothesis based on effect sizes from similar previous studies, was then used (Statistics Solutions, 2023). The study by Basol, Roozenbeek and van der Linden (2020) required 158 participants yet recruited 198, with 96 in the inoculation group and 102 in the control group. As in the previous study, all participants completed the pre-post survey, which found that the perceived reliability of the Twitter post decreased after playing the Bad News game, especially compared to the control group. ...
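The G*Power step described above solves for the sample size that achieves a desired statistical power given an expected effect size. A rough normal-approximation equivalent for a two-sample comparison (illustrative only: the input effect size d = 0.45 is an assumption, not a value stated in the snippet, and G*Power's t-distribution-based answer will be slightly larger):

```python
from math import ceil
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.80):
    """Approximate per-group n for a two-sided two-sample test
    detecting a standardized mean difference d (normal approximation)."""
    z = NormalDist().inv_cdf
    return ceil(2 * ((z(1 - alpha / 2) + z(power)) / d) ** 2)

n = n_per_group(0.45)  # assumed effect size, not from the source
print(f"{n} per group, {2 * n} total")
```

Under these assumed inputs the total comes out in the same range as the 158-participant minimum mentioned in the snippet.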
Full-text available
The rise of fake news in society and the speed at which it can spread on social media has generated concerns for democracy as it can potentially influence an individual and polarise society. However, the definition of fake news is problematic, as it is too narrow and only applies to fabricated news. Therefore, the terms misinformation and disinformation are used to encompass labels for fake news to online harm and hate speech. The main research aim was to conduct a case study, drawing on vocational students’ perspectives about misinformation encountered on social media to create a post-16 lesson to develop critical thinking, and literacy skills by incorporating Inoculation Theory. The research is justified because teachers are required to teach and develop skills to counteract misinformation and disinformation as the younger demographic are most likely to encounter and engage with misinformation on social media. Drawing on the previous research by Roozenbeek and van der Linden (2021), a qualitative embedded case study design combined with an element of quantitative data was conducted. Data was collected in three one-hour sessions each guided by one of the three research questions which was analysed thematically. A purposive sampling strategy was used to recruit a group of six participants with a mix of genders aged between 16-19 years from the Media Studies department of a Further Education institute in England (N = 6). Based on the case study findings, a post-16 lesson was created to develop critical thinking, literacy skills and incorporate Inoculation Theory to be able to identify social media misinformation. A key finding was every participant’s epistemology of misinformation was different. Additionally, a conducive research environment was found to be instrumental in discussing the participants’ opinions and beliefs about misinformation encountered on social media to develop critical thinking and literacy skills potentially mitigating polarisation in society. 
Therefore, an appropriate educational environment is essential for future lesson delivery about social media misinformation.
QAnon has emerged as the defining conspiracy group of our times, and its far-right conspiracies are extraordinary for their breadth and extremity. Bringing together scholars from psychology, sociology, communications, and political science, this cutting-edge volume uses social science theory to investigate aspects of QAnon. Following an introduction to the 'who, what, and why' of QAnon, Part I focuses on the psychological characteristics of QAnon followers and the group's methods for recruiting and maintaining these followers. Part II includes chapters at the intersection of QAnon and society, arguing that society has constructed QAnon as a threat and the social need to belong motivates its followers. Part III discusses the role of communication in promoting and limiting QAnon support, while Part IV concludes by considering the future of QAnon. The Social Science of QAnon is vital reading for scholars and students across the social sciences, and for legal and policy professionals.
Full-text available
The spread of online misinformation poses serious challenges to societies worldwide. In a novel attempt to address this issue, we designed a psychological intervention in the form of an online browser game. In the game, players take on the role of a fake news producer and learn to master six documented techniques commonly used in the production of misinformation: polarisation, invoking emotions, spreading conspiracy theories, trolling people online, deflecting blame, and impersonating fake accounts. The game draws on an inoculation metaphor, where preemptively exposing, warning, and familiarising people with the strategies used in the production of fake news helps confer cognitive immunity when exposed to real misinformation. We conducted a large-scale evaluation of the game with N = 15,000 participants in a pre-post gameplay design. We provide initial evidence that people's ability to spot and resist misinformation improves after gameplay, irrespective of education, age, political ideology, and cognitive style.
Full-text available
Science deniers question scientific milestones and spread misinformation, contradicting decades of scientific endeavour. Advocates for science need effective rebuttal strategies and are concerned about backfire effects in public debates. We conducted six experiments to assess how to mitigate the influence of a denier on the audience. An internal meta-analysis across all the experiments revealed that not responding to science deniers has a negative effect on attitudes towards behaviours favoured by science (for example, vaccination) and intentions to perform these behaviours. Providing the facts about the topic or uncovering the rhetorical techniques typical for denialism had positive effects. We found no evidence that complex combinations of topic and technique rebuttals are more effective than single strategies, nor that rebutting science denialism in public discussions backfires, not even in vulnerable groups (for example, US conservatives). As science deniers use the same rhetoric across domains, uncovering their rhetorical techniques is an effective and economic addition to the advocates’ toolbox.
Full-text available
There are two kinds of problems with rumour spread over WhatsApp: one is disinformation and the other is incitement to violence. Why the rumours leading to the lynchings are more appropriately treated as incitement to violence is explained here. The significance of WhatsApp in this context, and whether the changes made by WhatsApp in reaction to the public criticism and government pressure are likely to put a stop to the lynchings are also examined.
Full-text available
Are citizens willing to accept journalistic fact-checks of misleading claims from candidates they support and to update their attitudes about those candidates? Previous studies have reached conflicting conclusions about the effects of exposure to counter-attitudinal information. As fact-checking has become more prominent, it is therefore worth examining how respondents react to fact-checks of politicians—a question with important implications for understanding the effects of this journalistic format on elections. We present results from two experiments conducted during the 2016 campaign that test the effects of exposure to realistic journalistic fact-checks of claims made by Donald Trump during his convention speech and a general election debate. These messages improved the accuracy of respondents’ factual beliefs, even among his supporters, but had no measurable effect on attitudes toward Trump. These results suggest that journalistic fact-checks can reduce misperceptions but often have minimal effects on candidate evaluations or vote choice.
Full-text available
The 2016 U.S. presidential election brought considerable attention to the phenomenon of “fake news”: entirely fabricated and often partisan content that is presented as factual. Here we demonstrate one mechanism that contributes to the believability of fake news: fluency via prior exposure. Using actual fake-news headlines presented as they were seen on Facebook, we show that even a single exposure increases subsequent perceptions of accuracy, both within the same session and after a week. Moreover, this “illusory truth effect” for fake-news headlines occurs despite a low level of overall believability and even when the stories are labeled as contested by fact checkers or are inconsistent with the reader’s political ideology. These results suggest that social media platforms help to incubate belief in blatantly false news stories and that tagging such stories as disputed is not an effective solution to this problem. It is interesting, however, that we also found that prior exposure does not impact entirely implausible statements (e.g., “The earth is a perfect square”). These observations indicate that although extreme implausibility is a boundary condition of the illusory truth effect, only a small degree of potential plausibility is sufficient for repetition to increase perceived accuracy. As a consequence, the scope and impact of repetition on beliefs is greater than has been previously assumed.
Effect sizes are underappreciated and often misinterpreted—the most common mistakes being to describe them in ways that are uninformative (e.g., using arbitrary standards) or misleading (e.g., squaring effect-size rs). We propose that effect sizes can be usefully evaluated by comparing them with well-understood benchmarks or by considering them in terms of concrete consequences. In that light, we conclude that when reliably estimated (a critical consideration), an effect-size r of .05 indicates an effect that is very small for the explanation of single events but potentially consequential in the not-very-long run, an effect-size r of .10 indicates an effect that is still small at the level of single events but potentially more ultimately consequential, an effect-size r of .20 indicates a medium effect that is of some explanatory and practical use even in the short run and therefore even more important, and an effect-size r of .30 indicates a large effect that is potentially powerful in both the short and the long run. A very large effect size (r = .40 or greater) in the context of psychological research is likely to be a gross overestimate that will rarely be found in a large sample or in a replication. Our goal is to help advance the treatment of effect sizes so that rather than being numbers that are ignored, reported without interpretation, or interpreted superficially or incorrectly, they become aspects of research reports that can better inform the application and theoretical development of psychological research.
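Funder and Ozer's benchmarks are stated in terms of r, while several abstracts on this page report Cohen's d; the two scales are related by a standard conversion, sketched below under the equal-group-sizes assumption:

```python
import math

def r_to_d(r):
    """Convert effect-size r to Cohen's d (assumes equal group sizes)."""
    return 2 * r / math.sqrt(1 - r ** 2)

def d_to_r(d):
    """Inverse conversion from Cohen's d to r (same assumption)."""
    return d / math.sqrt(d ** 2 + 4)

# Funder & Ozer's r benchmarks expressed as d:
for r in (0.05, 0.10, 0.20, 0.30, 0.40):
    print(f"r = {r:.2f}  ->  d = {r_to_d(r):.2f}")
```

For example, a "large" r of .30 corresponds to roughly d = 0.63, which helps when comparing the benchmarks to the standardized mean differences reported elsewhere on this page.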
One of the most significant departures from conventional inoculation theory is its intentional application for individuals already “infected”—that is, inoculation not as a preemptive strategy to protect existing positions from future challenges, but instead, inoculation as a means to change a position (e.g., from negative to positive) and to protect the changed position against future challenges. The issue is important for persuasion scholarship in general, as theoretical boundary conditions help at each stage of persuasion research development, serving as a guide for literature review, analysis, synthesis, research design, interpretation, theory building, and so on. It is an important issue for inoculation theory and resistance to influence research, specifically, for it gets at the very heart—and name and foundation—of inoculation theory. This article offers a theoretical analysis of inoculation theory used as both prophylactic and therapeutic interventions and concludes with a set of recommendations for inoculation theory scholarship moving forward.
Why do people believe blatantly inaccurate news headlines ("fake news")? Do we use our reasoning abilities to convince ourselves that statements that align with our ideology are true, or does reasoning allow us to effectively differentiate fake from real regardless of political ideology? Here we test these competing accounts in two studies (total N = 3446 Mechanical Turk workers) by using the Cognitive Reflection Test (CRT) as a measure of the propensity to engage in analytical reasoning. We find that CRT performance is negatively correlated with the perceived accuracy of fake news, and positively correlated with the ability to discern fake news from real news - even for headlines that align with individuals' political ideology. Moreover, overall discernment was actually better for ideologically aligned headlines than for misaligned headlines. Finally, a headline-level analysis finds that CRT is negatively correlated with perceived accuracy of relatively implausible (primarily fake) headlines, and positively correlated with perceived accuracy of relatively plausible (primarily real) headlines. In contrast, the correlation between CRT and perceived accuracy is unrelated to how closely the headline aligns with the participant's ideology. Thus, we conclude that analytic thinking is used to assess the plausibility of headlines, regardless of whether the stories are consistent or inconsistent with one's political ideology. Our findings therefore suggest that susceptibility to fake news is driven more by lazy thinking than it is by partisan bias per se - a finding that opens potential avenues for fighting fake news.
The study reports on a meta-analysis of attempts to correct misinformation (k = 65). Results indicate that corrective messages have a moderate influence on belief in misinformation (r = .35); however, it is more difficult to correct for misinformation in the context of politics (r = .15) and marketing (r = .18) than health (r = .27). Correction of real-world misinformation is more challenging (r = .14), as opposed to constructed misinformation (r = .48). Rebuttals (r = .38) are more effective than forewarnings (r = .16), and appeals to coherence (r = .55) outperform fact-checking (r = .25), and appeals to credibility (r = .14).
Research in the field of Resistance to Persuasion (RP) has demonstrated that inoculating individuals with counter-arguments is effective for lowering their levels of adherence to conspiracist beliefs (CB). Yet, this strategy is limited because it requires specific arguments tailored against targeted conspiracist narratives. Therefore, we investigated whether priming Resistance to Persuasion would reduce individual adherence to CB among undergraduate student samples. A first study (N = 81) demonstrated that participants primed by filling in an RP scale had lower CB scores than control participants. This effect was directly replicated twice (N = 205 and N = 265) and confirmed by a mini meta-analysis (N = 519; d = .20). Practical and theoretical implications are then discussed.