Nature Human Behaviour | Volume 8 | June 2024 | 1044–1052
Review article https://doi.org/10.1038/s41562-024-01881-0
Toolbox of individual-level interventions
against online misinformation
Anastasia Kozyreva 1 , Philipp Lorenz-Spreen 1,29, Stefan M. Herzog 1,29,
Ullrich K. H. Ecker 2,29, Stephan Lewandowsky 3,4,29, Ralph Hertwig 1,29,
Ayesha Ali5, Joe Bak-Coleman 6, Sarit Barzilai 7, Melisa Basol8,
Adam J. Berinsky 9, Cornelia Betsch 10,11, John Cook12, Lisa K. Fazio 13,
Michael Geers 1,14 , Andrew M. Guess 15, Haifeng Huang 16,
Horacio Larreguy 17, Rakoen Maertens 18, Folco Panizza 19,
Gordon Pennycook20,21, David G. Rand 22, Steve Rathje 23, Jason Reifler 24,
Philipp Schmid 10,11,25, Mark Smith 26, Briony Swire-Thompson 27,
Paula Szewach 24,28, Sander van der Linden 8 & Sam Wineburg26
The spread of misinformation through media and social networks
threatens many aspects of society, including public health and the state
of democracies. One approach to mitigating the effect of misinformation focuses on individual-level interventions, equipping policymakers and the public with essential tools to curb the spread and influence of falsehoods.
Here we introduce a toolbox of individual-level interventions for reducing
harm from online misinformation. Comprising an up-to-date account
of interventions featured in 81 scientific papers from across the globe,
the toolbox provides both a conceptual overview of nine main types of
interventions, including their target, scope and examples, and a summary
of the empirical evidence supporting the interventions, including the
methods and experimental paradigms used to test them. The nine
types of interventions covered are accuracy prompts, debunking and
rebuttals, friction, inoculation, lateral reading and verification strategies,
media-literacy tips, social norms, source-credibility labels, and warning and
fact-checking labels.
Received: 1 February 2023; Accepted: 5 April 2024; Published online: 13 May 2024

A full list of affiliations appears at the end of the paper. e-mail: kozyreva@mpib-berlin.mpg.de

The proliferation of false and misleading information online is a complex and global phenomenon that is influenced by both large online platforms and individual and collective behaviour1,2. In its capacity to damage public health and the health of democracies, online misinformation poses a major policy problem3–6. Online platforms have addressed this problem by attempting to curtail the reach of misinformation (for example, by algorithmically downgrading false content in newsfeeds), by highlighting certain features of the content to users (for example, fact-checking and labelling) and by removing false content and suspending accounts that spread falsehoods7. Content moderation, however, is a contentious issue and a moral minefield, where the goal of preventing the spread of harmful misinformation can clash with freedom of expression8,9. A key policy concern is therefore how to reduce the spread and influence of misinformation while keeping harsh content-removal measures to a minimum.
In recent years, research on misinformation in the behavioural and social sciences has introduced a range of interventions designed to target users’ competences and behaviours in a variety of ways: by debunking false claims10, by boosting people’s competences (for example, digital media literacy11) and resilience against manipulation (for example, pre-emptive inoculation12,13), by implementing design choices that slow the process of sharing misinformation14, by directing attention to the importance of accuracy15 or by highlighting whether the information in question is trustworthy16. These interventions stem from different disciplines, including cognitive science17, political and social psychology18,19, and education research20,21. They also represent different categories of behavioural policy interventions (for example, nudges, boosts and refutation strategies; for overviews, see refs. 22–25). Across interventions, different research methodologies have been used to test their effectiveness, including having participants evaluate headlines in online experiments26, asking students to evaluate websites20 and running field experiments on social networks13,15,27,28.

Several literature reviews have addressed various aspects of misinformation interventions. Some focus on psychological drivers behind beliefs in falsehoods and how to correct them24,29, others distinguish between key classes of behavioural interventions22,23,25 and the entry points for those interventions in people’s engagement with misinformation30, and still others focus on measures targeting specific misinformation topics such as climate change denialism31 or false narratives about COVID-19 (ref. 32). Most recently, researchers have also started reviewing the state of evidence for misinformation interventions in the Global South33.
In light of this rapid progress, we believe now is the time to develop a common conceptual and methodological space in the field, to evaluate the state of the evidence and to provide a resource that inspires future research directions. Importantly, our aim is not to rate interventions or approaches but rather to produce a versatile digital toolbox of interventions against online misinformation that can be tailored according to the environment and target audience. Here we report the results of a review of behavioural and cognitive interventions to counter online misinformation, based on 81 scientific papers providing evidence from studies conducted around the world. We also review the four main experimental paradigms in research on misinformation interventions: the misinformation-correction paradigm, the headline-discernment paradigm, the technique-adoption paradigm and the emerging paradigm of field studies on social media. This paper is accompanied by a detailed online research and policy resource, the toolbox of interventions against online misinformation (https://interventionstoolbox.mpib-berlin.mpg.de and Fig. 1).

[Figure 1 presents the structure of the toolbox: a conceptual toolbox (nine intervention types defined along ten dimensions) and an evidence toolbox (81 scientific papers summarized and defined along ten dimensions plus study details), assembled by 30 misinformation researchers from 11 countries and 27 universities and research institutions using five inclusion criteria (definition and scope, problem addressed, non-redundancy, evidence and expert opinion), together with a world map of evidence across the nine intervention types.]
Fig. 1 | Structure of the toolbox and map of evidence. The upper part of the figure summarizes the structure and composition of the toolbox, which is available at https://interventionstoolbox.mpib-berlin.mpg.de. The world map of evidence shows the studies from the evidence toolbox by the country in which they were conducted and by intervention type. Circle size denotes the number of studies. For the interactive version of the map, see https://interventionstoolbox.mpib-berlin.mpg.de/toolbox_map.html. Map made with Natural Earth.

This Review represents a collaborative effort of an international group of experts. The main goal of this Review is to identify a collection of empirically validated cognitive and behavioural interventions against misinformation that target different behavioural and cognitive outcomes. The intervention types in the toolbox explicitly tackle the challenge of misinformation, encompassing disinformation, false and misleading information, fake news and related issues. Our toolbox does not evaluate the interventions’ potential to be implemented, nor does it analyse their comparative effectiveness34. Although it does not provide a systematic synthesis of the evidence or a meta-analysis of misinformation interventions, we aimed to ensure that our toolbox
covers all relevant interventions in the field of misinformation research
that satisfy our inclusion criteria (for the details of our approach and
inclusion criteria, see Supplementary Information).
Our Review includes evidence from published or about-to-be-published academic research. We could not include evidence from interventions that have been rolled out by and on social media platforms due to a lack of published evidence. Platforms undoubtedly conduct large-scale research on interventions such as friction treatments (for example, prompts on Twitter35), crowdsourced corrections (for example, community notes on X36), and warning and fact-checking labels, but this research and its findings are, regrettably, not available to academic researchers.
The toolbox
This toolbox focuses on two points of interest: a conceptual overview
of the interventions and an overview of the empirical evidence support-
ing the interventions, including the methods used to test them. Both
overviews are publicly available as an online supplement in the form
of two dynamic tables: a conceptual overview (https://interventionstoolbox.mpib-berlin.mpg.de/table_concept.html) and an evidence overview (https://interventionstoolbox.mpib-berlin.mpg.de/
table_evidence.html). The online supplement also contains selected
examples of interventions (https://interventionstoolbox.mpib-berlin.
mpg.de/table_examples.html) and a world map of evidence (https://
interventionstoolbox.mpib-berlin.mpg.de/toolbox_map.html).
Conceptual overview of interventions
The toolbox includes nine types of individual-level interventions, all
supported by peer-reviewed, published evidence: accuracy prompts,
debunking and rebuttals, friction, inoculation, lateral reading and veri-
fication strategies, media-literacy tips, social norms, source-credibility
labels, and warning and fact-checking labels. These nine types of inter-
ventions fall under three intervention categories: nudges, which tar-
get behaviours; boosts and educational interventions, which target
competences; and refutation strategies, which target beliefs. Note
that interventions may fall under more than one category. Table 1
provides a condensed overview of the intervention types, listed by
policy intervention category.
Nudging is a behavioural policy approach that uses principles of
human psychology to design choice architectures that steer people’s
decisions—ideally towards a greater individual or public good37,38.
Nudging interventions primarily target behaviour, such as sharing
behaviour on social media. For example, accuracy prompts (Table 1,
‘Accuracy prompts’) remind people of the importance of information
accuracy to encourage them to share fewer false headlines39. Other
nudges introduce friction into a decision-making process to slow
the process of sharing information—for instance, asking a person to
pause and think before sharing content on social media14 or to read
an article before sharing it40 (Table 1, ‘Friction’). Nudges that leverage
social norms—what other people believe, do or find acceptable—can
encourage people to adopt similar standards. For instance, telling users
that most other users do not share or act on certain misinformation can
lead them to react in a similar way41 (Table 1, ‘Social norms’).
Boosting is a behavioural policy approach that enlists human
cognition, the environment or both to help people to strengthen exist-
ing competences or develop new ones42 that are useful for coping
with a given policy problem, such as the proliferation of misinfor-
mation. Because people choose for themselves whether and how to
engage with a boost, these interventions are by necessity transpar-
ent to the target audience. In online environments, boosts and other
educational interventions aim to foster digital competences; for
instance, they may help people to improve their media literacy by
offering them simple tips11 (Table 1, ‘Media-literacy tips’) or help them
to acquire online fact-checking competences such as lateral reading,
which involves conducting additional independent online searches to
check the trustworthiness of a claim or source. Lateral reading and
other verification strategies (for example, image searching or tracing
the original context of the information) can be implemented as educa-
tional interventions in a school curriculum20,43 or as boosts via a short
online video, a simple pop-up
44
or an online game
45
(Table 1, ‘Lateral
reading and verification strategies’). Another type of boost, inocula-
tion, aims to foster people’s ability to recognize misleading tactics and
information. Inoculation can be topic-specific46, or it can highlight and
explain general argumentation strategies used to mislead audiences.
On the basis of inoculation theory47, inoculation can be implemented
via text, online games or short videos12,13. Inoculation is also a refutation
strategy because it can be used to pre-emptively refute false informa-
tion (Table 1, ‘Inoculation’).
The objective of refutation strategies is to debunk misinforma-
tion or to prebunk it—that is, to warn people about misinformation
before they encounter it. Their primary target is belief calibration10.
Refutation strategies aim to reduce false beliefs by providing factual
information alongside an explanation of why a piece of misinformation
is false or misleading (Table 1, ‘Debunking and rebuttals’). Debunking
tends to be fact-based but can also appeal to logic and critical think-
ing—for instance, by explaining a misleading argumentation strategy
or discrediting the source of the misinformation (for a review, see
ref. 24). Refutation strategies can also take the form of informa-
tional labels, including source-credibility labels and warning and
fact-checking labels. Such labels have already been rolled out by
some social media platforms and search engines (Table 1, ‘Warning
and fact-checking labels’ and ‘Source-credibility labels’). Although
refutation strategies reliably reduce misconceptions, applying them
reactively to specific pieces of misinformation requires fact-checking
at scale48. Applied pre-emptively via prebunking or inoculation (see the
previous paragraph), refutation strategies can build people’s resilience
against misinformation they have not yet encountered.
Overview of the evidence behind the interventions
The toolbox also provides a summary of the evidence behind the nine
types of interventions. This part of the toolbox is based on 81 scientific
papers and is available online as a searchable and expandable table at
https://interventionstoolbox.mpib-berlin.mpg.de/table_evidence.
html. The table includes several empirical papers for each interven-
tion, as well as an overview of each paper’s sample, experimental
paradigm, study design, outcome measures, main findings and lon-
gevity tests. Expanding the row associated with a paper reveals more
detailed information about methods and effect sizes, the full refer-
ence, a link to open data (if available) and the abstract. A separate sec-
tion of the toolbox maps these empirical papers to the countries in
which they were conducted (Fig. 1 and https://interventionstoolbox.
mpib-berlin.mpg.de/toolbox_map.html).
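For readers who want to work with the evidence overview programmatically, the sketch below shows one possible way to represent a toolbox entry as a structured record. The field names mirror the columns described above but are our own illustrative choice and hypothetical values; they are not an official schema or interface of the online toolbox.

```python
# Illustrative sketch only: a structured record mirroring the columns of the
# evidence overview (intervention type, sample, paradigm, design, outcomes,
# findings, longevity test). Field names and values are hypothetical, not an
# official schema of the toolbox website.
from dataclasses import dataclass, field

@dataclass
class EvidenceEntry:
    intervention_type: str          # one of the nine types in the toolbox
    reference: str                  # full citation of the paper
    country: str                    # where the study was conducted
    sample: str                     # for example, 'online sample, n = 1,000'
    paradigm: str                   # for example, 'headline discernment'
    study_design: str               # for example, 'between-subjects experiment'
    outcome_measures: list[str] = field(default_factory=list)
    main_findings: str = ""
    longevity_test: bool = False    # whether delayed follow-up effects were measured

entry = EvidenceEntry(
    intervention_type="Accuracy prompts",
    reference="Hypothetical example entry (not a real toolbox record)",
    country="US",
    sample="online sample, n = 1,000",
    paradigm="headline discernment",
    study_design="between-subjects experiment",
    outcome_measures=["sharing discernment"],
    main_findings="prompt increased sharing discernment relative to control",
    longevity_test=False,
)

# Entries stored this way can be filtered, for example by paradigm or country.
print(entry.intervention_type, entry.paradigm)
```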
Several observations about the current state of the literature
can be derived from this evidence overview. First, evidence from the
Global North, especially the USA, is overrepresented for many inter-
vention types in the toolbox. However, several intervention types have
been tested across the globe (for example, debunking49–52, accuracy prompts53,54 and media-literacy tips11,28,53,55). This generally reflects the
state of the field, as others have also pointed out the lack of evidence
from the Global South33—a situation that is thankfully changing rapidly.
Although the interventions target universal problems and behaviours,
they are sensitive to cultural differences. In cases where there is evi-
dence from multiple countries or even multiple probability samples
(for example, tests of media-literacy tips in India28 and Pakistan55),
interventions were not always equally effective across cultures and
populations and were often less effective for less educated rural popu-
lations (see also ref. 11). These findings point to a potentially complex
relationship between people’s cultural contexts and their competences
and behaviours vis-à-vis misinformation. Future studies are needed to
examine this potentially intricate relationship in more detail.
Table 1 | Overview of intervention types in the toolbox

Nudges

Accuracy prompts
Description: Accuracy prompts are used to shift people’s attention to the concept of accuracy.
Example: Asking people to evaluate the accuracy of a headline or showing people a video about the importance of sharing only accurate content.
Targeted outcome: Behaviour (thinking about accuracy before sharing information online).
Outcome variables: Sharing discernment.

Friction
Description: Friction makes relevant processes slower or more effortful by design.
Example: Asking people to pause and think before sharing content on social media. This could be as simple as a short prompt—for example, ‘Want to read this before sharing?’
Targeted outcome: Behaviour (pausing rather than acting on initial impulse).
Outcome variables: Sharing intentions.

Social norms
Description: Social norms leverage social information (peer influence) to encourage people not to believe, endorse or share misinformation.
Example: Emphasizing that most people of a given group disapprove of sharing or using false information (descriptive norm) and/or that such actions are generally considered wrong, inappropriate or harmful (injunctive norm).
Targeted outcome: Belief calibration and behaviour (following normative beliefs, for example, when sharing information online).
Outcome variables: Beliefs in misinformation; sharing intentions.

Boosts and educational interventions

Inoculation
Description: Inoculation is a pre-emptive intervention that exposes people to a weakened form of common misinformation and/or manipulation strategies to build up their ability to resist them.
Example: Teaching people about the strategy of using ‘fake experts’ (presenting unqualified people as credible) to increase their recognition of and resilience to this strategy.
Targeted outcome: Belief calibration and competence (detecting and resisting manipulative and false information).
Outcome variables: Accuracy/credibility discernment; manipulation-technique recognition.

Lateral reading and verification strategies
Description: Verification strategies for evaluating online information encompass a range of techniques and methods used to assess the credibility, accuracy and reliability of digital content. Lateral reading is a strategy used by professional fact-checkers that involves investigating the credibility of a website by searching for information about it on other sites. Other verification strategies include image searching and tracing the original context of the information.
Example: School-based interventions with instructional strategies such as teacher modelling and guided practice can be used to teach lateral reading. Pop-up graphics can also be used to prompt social media users to read laterally.
Targeted outcome: Competence (evaluating the credibility of online sources).
Outcome variables: Credibility assessment of websites; use of verification strategies (self-reported or tracked).

Media-literacy tips
Description: Media-literacy tips give people a list of strategies for identifying false and misleading information in their newsfeeds.
Example: Facebook offers tips to spot false news, including “be sceptical of headlines”, “look closely at the URL” and “investigate the source”.
Targeted outcome: Competence (media literacy and social media skills).
Outcome variables: Accuracy discernment; sharing discernment.

Refutation strategies

Debunking and rebuttals
Description: Debunking and rebuttals are strategies aimed at dispelling misconceptions and countering false beliefs. Debunking involves offering corrective information to address a specific misconception. Rebuttals, particularly in the context of science denialism, consist of presenting accurate facts related to a topic that has been inaccurately addressed (topic rebuttal) or exposing the rhetorical tactics often used to reject established scientific findings (technique rebuttal).
Example: Debunking can be implemented in four steps: (1) state the truth, (2) warn about imminent misinformation exposure, (3) specify the misinformation and explain why it is wrong, and (4) reinforce the truth by offering the correct explanation. Depending on the circumstances (for example, the availability of a pithy fact), starting with step 2 may be appropriate.
Targeted outcome: Belief calibration and competence (detecting and resisting manipulative and false information).
Outcome variables: Beliefs in misinformation; attitudes to relevant topics (for example, vaccination); behavioural intentions; continued influence of misinformation.

Warning and fact-checking labels
Description: Warning labels explicitly alert individuals to the possibility of being misled by a particular piece of information or its source. Fact-checking labels indicate the trustworthiness rating assigned to a piece of content by professional fact-checkers.
Example: Facebook adds the labels “False (Independent fact-checkers say this information has no basis in fact)” or “Partly false (Independent fact-checkers say this information has some factual inaccuracies)”.
Targeted outcome: Belief calibration and competence (detecting false or other types of problematic information).
Outcome variables: Accuracy judgements; sharing intentions.

Source-credibility labels
Description: Source-credibility labels show how a particular news source was rated by professional fact-checking organizations.
Example: NewsGuard labels indicate the trustworthiness of news and information websites with a reliability rating from 0 to 100, on the basis of nine journalistic criteria that assess basic practices of reliability and transparency.
Targeted outcome: Belief calibration and competence (detecting sources of false or untrustworthy information).
Outcome variables: Sharing intentions; accuracy judgements; information diet quality.

For the full version of the conceptual overview, see https://interventionstoolbox.mpib-berlin.mpg.de/table_concept.html; for the evidence overview, see https://interventionstoolbox.mpib-berlin.mpg.de/table_evidence.html; for examples of intervention implementations, see https://interventionstoolbox.mpib-berlin.mpg.de/table_examples.html.
Second, few studies have tested the long-term effects of inter-
ventions. Of those that have, many observed some decay in effective-
ness56–58. The mechanisms behind the longevity or lack thereof of
interventions’ effects are currently poorly understood.
Third, it is difficult to compare interventions due to variability
in how their effectiveness was studied. The core differences relate
to participants’ tasks—in particular, the paradigm used, including
the test stimuli (for example, news headlines, real-world claims or
websites), and the measured outcome variables (for example, belief
or credibility ratings and behavioural measures). Ongoing efforts
to systematically compare interventions in large-scale megastudies
would benefit from appropriate standards for paradigms, test stimuli
and outcome measures.
As a first step, we identify four main paradigms in research on mis-
information interventions: the misinformation-correction paradigm,
the headline-discernment paradigm, the technique-adoption paradigm
and field studies on social media, an important emerging paradigm.
Experimental paradigms in testing interventions
The misinformation-correction paradigm typically presents people
with corrections and measures the effect on their belief in relevant
misinformation claims, their claim-related inferential reasoning and
their attitudes towards associated issues. Key outcome variables in
this paradigm include belief and attitude ratings, as well as reliance
on misinformation when responding to inferential-reasoning ques-
tions. Stimuli can be real-world claims that can be fact-checked (for
example, urban myths, scientific claims and politicians’ statements)
or fictional reports containing information that is deemed to be false.
Variations of this paradigm have been used in studies on refutation
strategies, including debunking59, rebuttals of science denialism60,
inoculation, and warning and fact-checking labels. The main meas-
ure of the effectiveness of corrections in this paradigm is whether the
intervention eliminates or attenuates the continued influence of the
misinformation—that is, whether people still make inferences as if
that information were true despite remembering the correction. For
instance, someone may continue to infer a person’s guilt despite know-
ing that they have been cleared of charges61,62. This occurs primarily
when the misinformation affords a causal explanation; providing an
alternative factual explanation can therefore reduce the influence
of misinformation. Misinformation correction is one of the earliest
paradigms in misinformation research.
In the headline-discernment paradigm, participants evaluate
the plausibility, credibility or veracity of true and false headlines and
indicate whether they would be willing to share them (for example,
see ref. 26). Outcome variables include accuracy ratings and people’s
ability to discern true and false headlines, as well as measures of sharing
intention and sharing discernment (that is, the difference in sharing
true versus false headlines). Specifically, Guay et al.63 argued in favour
of an approach that “exposes participants to a mix of true and false con-
tent, and incorporates ratings of both into a measure of discernment”
(p. 1232). News headlines presented as stimuli can be real (commonly
taken from mainstream news or fake-news websites and fact-checking
sources) or fictional. Several interventions have been tested with vari-
ations of the headline-discernment paradigm, including accuracy
prompts, online media-literacy tips, friction, labels and inoculation.
Headline discernment is one of the most widely adopted paradigms in
research on misinformation interventions in the past decade.
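To make the paradigm’s key outcome concrete, the following minimal Python sketch uses hypothetical responses (not data from any study in the toolbox) to illustrate one common way of scoring discernment: the difference between mean responses to true items and mean responses to false items. Exact operationalizations vary across studies (see ref. 63).

```python
# Minimal sketch (hypothetical data): scoring accuracy and sharing discernment
# as the difference between responses to true and false headlines.
# Real studies differ in item sets, rating scales and modelling choices (see ref. 63).

from statistics import mean

# Each trial: whether the headline is true, the participant's accuracy rating
# (for example, a 1-6 scale) and a binary sharing decision (1 = would share).
trials = [
    {"is_true": True,  "accuracy_rating": 5, "would_share": 1},
    {"is_true": True,  "accuracy_rating": 4, "would_share": 0},
    {"is_true": False, "accuracy_rating": 2, "would_share": 1},
    {"is_true": False, "accuracy_rating": 1, "would_share": 0},
]

def discernment(trials, key):
    """Mean response to true items minus mean response to false items."""
    true_vals = [t[key] for t in trials if t["is_true"]]
    false_vals = [t[key] for t in trials if not t["is_true"]]
    return mean(true_vals) - mean(false_vals)

print("Accuracy discernment:", discernment(trials, "accuracy_rating"))  # 3.0
print("Sharing discernment:", discernment(trials, "would_share"))       # 0.0
```

A positive score indicates that participants rate or share true headlines more than false ones; interventions in this paradigm are typically evaluated by how much they shift such scores relative to a control group.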
The technique-adoption paradigm assesses whether participants
successfully learn the skills and strategies required to evaluate informa-
tion veracity. Its affinity with assessments of educational curricula is
particularly evident in studies of lateral-reading interventions in the
field (for example, refs. 20,43). Outcome variables include assessment
scores for demonstrating the skills learned during the intervention (for
example, identifying an information source and assessing its credi-
bility), and stimuli include online articles or entire websites. Studies
that test the effectiveness of lateral reading in online settings often
gauge success by measuring browsing behaviour (self-reported or
tracked; for example, see ref. 44). This measure renders lateral reading
difficult to compare to other interventions. The adoption of a specific
skill is also relevant for some inoculation-type interventions, where
a key measure of the intervention’s success is participants’ ability to
detect manipulation tactics (which is assumed to predict subsequent
resistance to misdirection). For instance, studies on video-based inocu-
lation often assess the ability to spot persuasion techniques (such as
detecting the use of emotional language to influence the audience13)
as a primary outcome measure.
In addition to these three established experimental paradigms,
a growing body of research tests laboratory-based misinformation
interventions in the field, often on social media64. Outcome variables
vary; they include the quality of people’s information diets, the qual-
ity of what people share on their newsfeeds and people’s accuracy
or sharing discernment. For example, a recent study65 used digital
trace data to study how the consumption of news from low-quality
sources would be affected by the NewsGuard browser add-on (which
provides source-credibility labels). Another deployed an accuracy
nudge on Twitter15: users who followed US news outlets rated as highly
untrustworthy by professional fact-checkers were sent a private mes-
sage asking them to rate the accuracy of a headline. This intervention
increased the quality of sources that they subsequently shared. A test
of two psychological inoculation videos shown as YouTube ads13 found
that the videos helped US users to identify a manipulation technique in
a subsequent test headline. Similarly, a five-day text-message educa-
tional course on emotion-based and reasoning-based misinformation
techniques successfully reduced misinformation-sharing intentions in
Kenya66. Field studies on social media thus represent a new and growing
paradigm in research on misinformation interventions.
A cumulative science of interventions against online
misinformation
The heterogeneity in methods and experimental paradigms across
interventions is noteworthy—and is, in our view, a strength. Heter-
ogeneity should eventually make it possible to qualitatively assess
the convergence (or lack thereof) of insights and implications across
methods over time. Moreover, using a variety of outcome measures is
in principle advantageous because misinformation appears in many
forms across a range of contexts. Testing the effectiveness of interven-
tions against different outcome measures can be useful for assessing
their scope and generalizability.
However, this heterogeneity also comes at the price of slowing
cumulative progress in the field67,68, as it can be difficult to directly
compare results across diverse intervention types and idiosyncratic
paradigms. Different studies not only use different paradigms but also
measure different outcome variables and focus on different stages of
engaging with misinformation (that is, selecting information sources,
choosing what information to consume or ignore, evaluating the accu-
racy of the information and/or the credibility of the source, or judging
whether and how to react to the information30). Furthermore, stud-
ies generally do not use the same kinds of stimuli—and in those that
do, the stimuli probably vary in their distributions of difficulty (for
example, in terms of accuracy discernment). All these discrepancies
greatly reduce the extent to which meaningful conclusions can be
drawn by directly comparing, say, standardized effect sizes between
all intervention types.
Certain comparisons can, however, be meaningful when the rel-
evant characteristics of the studies are aligned (for example, studies
on accuracy prompts and media-literacy tips can be reasonably com-
pared because they tend to use similar tasks and outcome variables;
these have even been tested within one study53). We advocate for three
complementary research practices to facilitate meaningful compari-
sons. First, we call for routine reporting of both raw and standardized
effect sizes69,70 to facilitate research synthesis68. Second, the field would
benefit from the development of ontologies of misinformation inter-
ventions71, which would make it possible to consolidate results from
the literature into databases. This would allow researchers to effi-
ciently identify and aggregate results and effect sizes of relevant sets
of studies on the basis of their design, participants and item features
(see ref. 72 for a database of 2,636 studies on human cooperation that
enables researchers to efficiently conduct meta-analyses). It can also
be worthwhile to create databases for individual participant data,
which allows for much richer re-analyses than standard meta-analyses
based on study-level effect sizes73. Third, larger groups of researchers74
should unite to conduct megastudies75,76 that compare many relevant
interventions within the same setting, thus allowing for more stringent
comparisons (for example, see ref. 77).
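As an illustration of the first practice, the sketch below uses entirely hypothetical numbers (not results from the studies reviewed here) to show how a raw mean difference and a standardized effect size (Cohen’s d) can be reported side by side, and how study-level standardized effects could later be pooled with inverse-variance weights once results are stored in a machine-readable database.

```python
# Minimal sketch with hypothetical numbers: reporting a raw mean difference
# alongside Cohen's d, and pooling study-level effects with fixed-effect
# inverse-variance weights. Not a substitute for a full meta-analysis.
from math import sqrt

def cohens_d(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    """Standardized mean difference using the pooled standard deviation."""
    pooled_sd = sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / (n_t + n_c - 2))
    return (mean_t - mean_c) / pooled_sd

def d_variance(d, n_t, n_c):
    """Approximate sampling variance of Cohen's d."""
    return (n_t + n_c) / (n_t * n_c) + d**2 / (2 * (n_t + n_c))

# One hypothetical study: sharing discernment in intervention vs control groups.
raw_difference = 0.12 - 0.05                      # raw effect, in scale units
d = cohens_d(0.12, 0.20, 500, 0.05, 0.20, 500)    # standardized effect (= 0.35)

# Pooling several hypothetical study-level effects (fixed-effect model).
studies = [(0.35, 200, 200), (0.20, 800, 800), (0.10, 1500, 1500)]  # (d, n_t, n_c)
weights = [1 / d_variance(di, nt, nc) for di, nt, nc in studies]
pooled_d = sum(w * di for w, (di, _, _) in zip(weights, studies)) / sum(weights)

print(f"Raw difference: {raw_difference:.2f}, Cohen's d: {d:.2f}, pooled d: {pooled_d:.2f}")
```

Reporting both quantities, together with sample sizes and variances, is what makes this kind of aggregation possible across heterogeneous studies.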
Implications and ways forward
This Review provides an overview of nine types of interventions aimed
at fighting online misinformation, as well as an overview of the asso-
ciated evidence. With this, we also provide an accessible resource to
the research community, policymakers and the public in the form of
an interactive online toolbox comprising two databases (https://interventionstoolbox.mpib-berlin.mpg.de).
We envision several uses for the toolbox. For researchers, it pro-
vides a starting point for meta-analytic studies, systematic reviews and
studies comparing the effectiveness of different interventions. It can
also inform efforts to standardize and coordinate methods, thereby
increasing the comparability of future results78. Furthermore, the tool-
box highlights important gaps in the available evidence (for example,
underrepresented populations and cultures and the lack of studies on
long-term effects) that should be addressed in future studies.
The toolbox is also a valuable resource for policymakers and the
public. It provides accessible, up-to-date scientific knowledge that can
inform policy discussions about misinformation countermeasures
and platform regulation. The toolbox can also be used as a resource
for educational programmes and for individuals who wish to practice
self-nudging79. The versatility and diversity of interventions in the
toolbox increase the likelihood of addressing the heterogeneous needs,
preferences and skills of different users.
Relatedly, where a single intervention may have only limited effects,
the toolbox helps researchers, policymakers, educators and the public
to combine interventions to address different aspects of a misinfor-
mation problem80,81. The interventions in our toolbox act on people’s
cognition and behaviour to reduce either their tendency to share mis-
information or the extent to which they are affected by it. However,
there is no simple link between reductions in sharing misinformation
at the individual level and the burden and spread of misinformation at
the platform level. Computational work suggests that small reductions
in sharing misinformation would produce outsized effects on total
posts80—an ideal scenario. Yet this work indicates that cognitive and
behavioural interventions alone are unlikely to fully address the prob-
lem of misinformation. Even in combination with traditional content
moderation (for example, account suspension and post removal), these
interventions may substantially reduce but not eliminate the spread of
misinformation80. User- and content-targeted approaches may be lim-
ited by the manner in which platform design (for example, algorithmic
sorting, recommendations, user interface and network topology) ampli-
fies misinformation82,83. The potential for design changes to reduce
misinformation is poorly understood due to a lack of regulated access
to platforms’ data on their ongoing interventions84. Access to this data
is necessary for testing types of interventions and design changes at
scale (for example, as recently demonstrated in ref. 85).
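The intuition behind this computational result can be illustrated with a deliberately simplified branching-process sketch; this is not the model used in ref. 80, and the numbers are hypothetical. When each post triggers close to one further share on average, the expected cascade size grows steeply, so even a modest reduction in per-post sharing shrinks the expected total number of posts disproportionately.

```python
# Deliberately simplified sketch (not the model from ref. 80): in a subcritical
# branching process where each post yields R expected reshares, the expected
# cascade size is 1 / (1 - R). A 10% drop in sharing can cut total posts far
# more than 10% when R is close to 1.

def expected_cascade_size(r):
    """Expected total posts per seed when each post yields r reshares on average (r < 1)."""
    assert r < 1, "only subcritical cascades have a finite expected size"
    return 1 / (1 - r)

baseline_r = 0.95                 # hypothetical: close to the critical threshold
reduced_r = baseline_r * 0.90     # a 10% reduction in per-post sharing

before = expected_cascade_size(baseline_r)   # 20.0 posts per seed
after = expected_cascade_size(reduced_r)     # ~6.9 posts per seed

print(f"Expected posts per seed: {before:.1f} -> {after:.1f} "
      f"({100 * (1 - after / before):.0f}% fewer total posts from a 10% sharing reduction)")
```

In this toy calculation, a 10% drop in average resharing near the critical point cuts expected total posts by roughly two-thirds; real platforms and network structures are far more complex, which is one reason regulated data access matters for testing such effects at scale.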
To understand the factors that influence the spread of misinfor-
mation, data access and collaborative efforts between researchers
and platforms are crucial. Individual-focused interventions can only
go so far in the face of complex global threats such as misinformation.
Whereas individual-level approaches aim at mitigating misinformation
by acting on individuals’ ability to recognize and not spread falsehoods,
system-level approaches can aim at making the entire online ecosystem
less conducive to the spread of misinformation—for instance, through
platform design, content moderation, communities of fact-checkers
and journalists, and high-level regulatory and policy interventions
(such as investing in public broadcasters and establishing regulatory
frameworks that promote a diverse media landscape). System-level
interventions may be particularly effective and long-lasting. Indeed,
they elicit the most confidence among misinformation researchers
and practitioners33.
In the meantime, the most urgent next steps for research on
individual-level tools are investigating their medium- and long-term
effects, exploring ways to scale them up (for example, via school cur-
ricula, apps, platform cooperations and pop-ups), building interven-
tions that reach people across educational backgrounds and creating
integrative interventions that empower people to reckon with different
types of content (for example, headlines and websites). Furthermore,
the toolbox of interventions against online misinformation will need to
continuously evolve, given the dynamic nature of information environ-
ments. Future research will therefore need not only to track the extent
to which interventions remain relevant and effective but also to refine
existing tools and develop new ones on the basis of systematic investigations into environmental and individual factors that may influence
trust in interventions and the likelihood of their adoption.
Data availability
All data are available at OSF (https://osf.io/ejyh6) and in the online supplement (https://interventionstoolbox.mpib-berlin.mpg.de).
Code availability
All code is available at OSF (https://osf.io/ejyh6).
References
1. Lazer, D. M. J. et al. The science of fake news. Science 359,
1094–1096 (2018).
2. Lewandowsky, S. et al. Technology and Democracy: Understanding
the Influence of Online Technologies on Political Behaviour and
Decision Making JRC Science for Policy Report (Publications
Office of the European Union, 2020).
3. Proposal for a Regulation of the European Parliament and of the
Council on a Single Market for Digital Services (Digital Services
Act) and Amending Directive 2000/31/EC (COM/2020/825 Final),
https://www.europarl.europa.eu/doceo/document/TA-9-2022-
0269_EN.html#title2 (European Parliament, 2020).
4. Lorenz-Spreen, P., Oswald, L., Lewandowsky, S. & Hertwig, R.
A systematic review of worldwide causal and correlational
evidence on digital media and democracy. Nat. Hum. Behav.
https://doi.org/10.1038/s41562-022-01460-1 (2022).
5. Kozyreva, A., Smillie, L. & Lewandowsky, S. Incorporating
psychological science into policy making. Eur. Psychol. 28,
206–224 (2023).
6. Lewandowsky, S. et al. Misinformation and the epistemic integrity
of democracy. Curr. Opin. Psychol. 54, 101711 (2023).
7. Rosen, G. Remove, Reduce, Inform: New Steps to Manage
Problematic Content, https://about.fb.com/news/2019/04/
remove-reduce-inform-new-steps (Meta, 2019).
8. Douek, E. Governing online speech: from ‘posts-as-trumps’ to
proportionality and probability. Columbia Law Rev. 121, 759–834
(2021).
9. Kozyreva, A. et al. Resolving content moderation dilemmas
between free speech and harmful misinformation. Proc. Natl
Acad. Sci. USA 120, 2210666120 (2023).
10. Lewandowsky, S. et al. The Debunking Handbook 2020. Databrary
https://doi.org/10.17910/b7.1182 (2020).
11. Guess, A. M. et al. A digital media literacy intervention increases
discernment between mainstream and false news in the United
States and India. Proc. Natl Acad. Sci. USA 117, 15536–15545
(2020).
12. Basol, M., Roozenbeek, J. & Linden, S. Good news about bad
news: gamified inoculation boosts confidence and cognitive
immunity against fake news. J. Cogn. https://doi.org/10.5334/
joc.91 (2020).
13. Roozenbeek, J., Linden, S., Goldberg, B., Rathje, S. &
Lewandowsky, S. Psychological inoculation improves resilience
against misinformation on social media. Sci. Adv. 8, 6254 (2022).
14. Fazio, L. Pausing to consider why a headline is true or false
can help reduce the sharing of false news. Harv. Kennedy Sch.
Misinformation Rev. https://doi.org/10.37016/mr-2020-009
(2020).
15. Pennycook, G. et al. Shifting attention to accuracy can reduce
misinformation online. Nature 592, 590–595 (2021).
16. Clayton, K. et al. Real solutions for fake news? Measuring the
effectiveness of general warnings and fact-check tags in reducing
belief in false stories on social media. Polit. Behav. 42, 1073–1095
(2020).
17. Pennycook, G. & Rand, D. G. The psychology of fake news. Trends
Cogn. Sci. 25, 388–402 (2021).
18. Brady, W. J., Wills, J. A., Jost, J. T., Tucker, J. A. & Van Bavel, J. J.
Emotion shapes the diffusion of moralized content in social
networks. Proc. Natl Acad. Sci. USA 114, 7313–7318 (2017).
19. Van Bavel, J. J. et al. Political psychology in the digital (mis)
information age: a model of news belief and sharing. Soc. Issues
Policy Rev. 15, 84–113 (2021).
20. Wineburg, S., Breakstone, J., McGrew, S., Smith, M. D. & Ortega, T.
Lateral reading on the open internet: a district-wide field study in
high school government classes. J. Educ. Psychol. 114, 893–909
(2022).
21. Osborne, J. et al. Science Education in an Age of Misinformation
(Stanford Univ., 2022).
22. Lorenz-Spreen, P., Lewandowsky, S., Sunstein, C. R. & Hertwig,
R. How behavioural sciences can promote truth, autonomy and
democratic discourse online. Nat. Hum. Behav. 4, 1102–1109
(2020).
23. Kozyreva, A., Lewandowsky, S. & Hertwig, R. Citizens versus the
internet: confronting digital challenges with cognitive tools.
Psychol. Sci. Public Interest 21, 103–156 (2020).
24. Ecker, U. K. H. et al. The psychological drivers of misinformation
belief and its resistance to correction. Nat. Rev. Psychol. 1, 13–29
(2022).
25. Roozenbeek, J., Culloty, E. & Suiter, J. Countering misinformation:
evidence, knowledge gaps, and implications of current
interventions. Eur. Psychol. 28, 189–205 (2023).
26. Pennycook, G., Binnendyk, J., Newton, C. & Rand, D. G.
A practical guide to doing behavioural research on fake news
and misinformation. Collabra Psychol. 7, 25293 (2020).
27. Wright, C. et al. Effects of brief exposure to misinformation
about e-cigarette harms on Twitter: a randomised controlled
experiment. BMJ Open 11, 045445 (2021).
28. Badrinathan, S. Educative interventions to combat
misinformation: evidence from a field experiment in India. Am.
Polit. Sci. Rev. 115, 1325–1341 (2021).
29. Ziemer, C.-T. & Rothmund, T. Psychological underpinnings of
misinformation countermeasures. J. Media Psychol. https://doi.
org/10.1027/1864-1105/a000407 (2024).
30. Geers, M. et al. The online misinformation engagement
framework. Curr. Opin. Psychol. 55, 101739 (2023).
31. Hornsey, M. J. & Lewandowsky, S. A toolkit for understanding and
addressing climate scepticism. Nat. Hum. Behav. 6, 1454–1464
(2022).
32. Fasce, A. et al. A taxonomy of anti-vaccination arguments from a
systematic literature review and text modelling. Nat. Hum. Behav.
7, 1462–1480 (2023).
33. Blair, R. A. et al. Interventions to counter misinformation: lessons
from the Global North and applications to the Global South.
Curr. Opin. Psychol. 55, 101732 (2024).
34. IJzerman, H. et al. Use caution when applying behavioural science
to policy. Nat. Hum. Behav. 4, 1092–1094 (2020).
35. Twitter Comms. More reading—people open articles 40% more
often after seeing the prompt. X, https://web.archive.org/
web/20220804154748/; https://twitter.com/twittercomms/
status/1309178716988354561 (2020).
36. About Community Notes on X, https://help.twitter.com/en/
using-x/community-notes (accessed 16 February 2024).
37. Thaler, R. H. & Sunstein, C. R. Nudge: Improving Decisions about
Health, Wealth, and Happiness (Yale Univ. Press, 2008).
38. Thaler, R. H. & Sunstein, C. R. Nudge: The Final Edition
(Yale Univ. Press, 2021).
39. Pennycook, G. & Rand, D. G. Accuracy prompts are a replicable
and generalizable approach for reducing the spread of
misinformation. Nat. Commun. 13, 2333 (2022).
40. X Support. Sharing an article can spark conversation, so you may
want to read it before you Tweet it. X, https://twitter.com/
twittersupport/status/1270783537667551233 (2020).
41. Andı, S. & Akesson, J. Nudging away false news: evidence from a
social norms experiment. Digit. J. 9, 106–125 (2020).
42. Hertwig, R. & Grüne-Yanoff, T. Nudging and boosting: steering or
empowering good decisions. Perspect. Psychol. Sci. 12, 973–986
(2017).
43. Brodsky, J. E. et al. Improving college students’ fact-checking
strategies through lateral reading instruction in a general
education civics course. Cogn. Res. Princ. Implic. 6, 23 (2021).
44. Panizza, F. et al. Lateral reading and monetary incentives to spot
disinformation about science. Sci. Rep. 12, 5678 (2022).
45. Barzilai, S. et al. Misinformation is contagious: middle school
students learn how to evaluate and share information responsibly
through a digital game. Comput. Educ. 202, 104832 (2023).
46. Tay, L. Q., Hurlstone, M. J., Kurz, T. & Ecker, U. K. H. A comparison
of prebunking and debunking interventions for implied versus
explicit misinformation. Br. J. Psychol. 113, 591–607 (2022).
47. Lewandowsky, S. & Linden, S. Countering misinformation and fake
news through inoculation and prebunking. Eur. Rev. Soc. Psychol.
32, 348–384 (2021).
48. Gottfried, J. A., Hardy, B. W., Winneg, K. M. & Jamieson, K. H. Did
fact checking matter in the 2012 presidential campaign? Am.
Behav. Sci. 57, 1558–1567 (2013).
49. Huang, H. A war of (mis)information: the political effects of
rumors and rumor rebuttals in an authoritarian country. Br. J. Polit.
Sci. 47, 283–311 (2017).
50. Porter, E. & Wood, T. J. The global effectiveness of fact-checking:
evidence from simultaneous experiments in Argentina, Nigeria,
South Africa, and the United Kingdom. Proc. Natl Acad. Sci. USA
118, 2104235118 (2021).
51. Porter, E., Velez, Y. & Wood, T. J. Correcting COVID-19 vaccine
misinformation in 10 countries. R. Soc. Open Sci. 10, 221097 (2023).
52. Badrinathan, S. & Chauchard, S. ‘I don’t think that’s true, bro!’
Social corrections of misinformation in India. Int. J. Press Polit.
https://doi.org/10.1177/19401612231158770 (2023).
53. Arechar, A. A. et al. Understanding and combatting
misinformation across 16 countries on six continents. Nat. Hum.
Behav. 7, 1502–1513 (2023).
54. Offer-Westort, M., Rosenzweig, L. R. & Athey, S. Battling the
coronavirus 'infodemic' among social media users in Kenya and
Nigeria. Nat. Hum. Behav., https://doi.org/10.1038/s41562-023-
01810-7 (2024).
55. Ali, A. & Qazi, I. A. Countering misinformation on social media
through educational interventions: evidence from a randomized
experiment in Pakistan. J. Dev. Econ. 163, 103108 (2023).
56. Maertens, R., Roozenbeek, J., Basol, M. & Linden, S. Long-term
effectiveness of inoculation against misinformation: three
longitudinal experiments. J. Exp. Psychol. Appl. 27, 1–16 (2021).
57. Grady, R. H., Ditto, P. H. & Loftus, E. F. Nevertheless, partisanship
persisted: fake news warnings help briefly, but bias returns with
time. Cogn. Res. Princ. Implic. 6, 52 (2021).
58. Paynter, J. et al. Evaluation of a template for countering
misinformation—real-world autism treatment myth debunking.
PLoS ONE 14, 0210746 (2019).
59. Ecker, U. K. H., Butler, L. H. & Hamby, A. You don’t have to tell a
story! A registered report testing the effectiveness of narrative
versus non-narrative misinformation corrections. Cogn. Res.
Princ. Implic. 5, 64 (2020).
60. Schmid, P. & Betsch, C. Effective strategies for rebutting science
denialism in public discussions. Nat. Hum. Behav. 3, 931–939
(2019).
61. Johnson, H. M. & Seifert, C. M. Sources of the continued influence
effect: when misinformation in memory affects later inferences.
J. Exp. Psychol. Learn. Mem. Cogn. 20, 1420–1436 (1994).
62. Lewandowsky, S., Stritzke, W. G. K., Oberauer, K. & Morales, M.
Memory for fact, fiction, and misinformation: the Iraq War 2003.
Psychol. Sci. 16, 190–195 (2005).
63. Guay, B., Berinsky, A. J., Pennycook, G. & Rand, D. How to think
about whether misinformation interventions work. Nat. Hum.
Behav. 7, 1231–1233 (2023).
64. Mosleh, M., Pennycook, G. & Rand, D. G. Field experiments on
social media. Curr. Dir. Psychol. Sci. 31, 69–75 (2021).
65. Aslett, K., Guess, A. M., Bonneau, R., Nagler, J. & Tucker, J. A. News
credibility labels have limited average effects on news
diet quality and fail to reduce misperceptions. Sci. Adv. 8,
eabl3844 (2022).
66. Carleton Athey, S., Cersosimo, M., Koutout, K. & Li, Z. Emotion-
versus Reasoning-Based Drivers of Misinformation Sharing:
A Field Experiment Using Text Message Courses in Kenya Stanford
University Graduate School of Business Research Paper
No. 4489759 (SSRN, 2023).
67. Almaatouq, A. et al. Beyond playing 20 questions with
nature: integrative experiment design in the social and
behavioral sciences. Behav. Brain Sci. https://doi.org/10.1017/
S0140525X22002874 (2022).
68. Cooper, H., Hedges, L. V. & Valentine, J. C. (eds) The Handbook of
Research Synthesis and Meta-analysis (Russell Sage Foundation,
2019).
69. Lakens, D. Calculating and reporting effect sizes to facilitate
cumulative science: a practical primer for t-tests and ANOVAs.
Front. Psychol. 4, 62627 (2013).
70. Pek, J. & Flora, D. B. Reporting effect sizes in original
psychological research: a discussion and tutorial. Psychol.
Methods 23, 208–225 (2018).
71. Sharp, C., Kaplan, R. M. & Strauman, T. J. The use of ontologies
to accelerate the behavioral sciences: promises and challenges.
Curr. Dir. Psychol. Sci. 32, 418–426 (2023).
72. Spadaro, G. et al. The Cooperation Databank: machine-readable
science accelerates research synthesis. Perspect. Psychol. Sci. 17,
1472–1489 (2022).
73. Cooper, H. & Patall, E. A. The relative benefits of meta-analysis
conducted with individual participant data versus aggregated
data. Psychol. Methods 14, 165–176 (2009).
74. Forscher, P. S. et al. The benefits, barriers, and risks of big-team
science. Perspect. Psychol. Sci. 18, 607–623 (2022).
75. Duckworth, A. L. & Milkman, K. L. A guide to megastudies. PNAS
Nexus 5, pgac214 (2022).
76. Hameiri, B. & Moore-Berg, S. L. Intervention tournaments: an
overview of concept, design, and implementation. Perspect.
Psychol. Sci. 17, 1525–1540 (2022).
77. Susmann, M., Fazio, L., Rand, D. G. & Lewandowsky, S. Mercury
Project Misinformation Intervention Comparison Study. OSF
https://doi.org/10.17605/OSF.IO/FE8C4 (2023).
78. Roozenbeek, J. et al. Susceptibility to misinformation is consistent
across question framings and response modes and better
explained by myside bias and partisanship than analytical
thinking. Judgm. Decis. Mak. 17, 547–573 (2022).
79. Reijula, S. & Hertwig, R. Self-nudging and the citizen choice
architect. Behav. Public Policy 6, 119–149 (2022).
80. Bak-Coleman, J. B. et al. Combining interventions to reduce the
spread of viral misinformation. Nat. Hum. Behav. 6, 1372–1380
(2022).
81. Bode, L. & Vraga, E. The Swiss cheese model for mitigating online
misinformation. Bull. At. Sci. 77, 129–133 (2021).
82. Milli, S., Carroll, M., Wang, Y., Pandey, S., Zhao, S. & Dragan, A.
Engagement, user satisfaction, and the amplification of divisive
content on social media. Knight First Amend. Inst. https://perma.cc/
YUB7-4HMY (2024).
83. Willaert, T. A computational analysis of Telegram’s narrative
affordances. PLoS ONE 18, e0293508 (2023).
84. Pasquetto, I. V. et al. Tackling misinformation: what researchers
could do with social media data. Harv. Kennedy Sch.
Misinformation Rev. https://doi.org/10.37016/mr-2020-49 (2020).
85. Guess, A. M. et al. How do social media feed algorithms affect
attitudes and behavior in an election campaign? Science 381,
398–404 (2023).
Acknowledgements
We thank S. Vrtovec, F. Stock and A. Horsley for research assistance and
D. Ain for editing the manuscript and the online appendix. We also thank
J. van Bavel, W. Brady, Z. Epstein, M. Leiser, L. Oswald, J. Roozenbeek and
A. Simchon for their contributions during the workshop ‘Behavioral
interventions for promoting truth and democratic discourse in online
environments’. The study was funded by a grant from the Volkswagen
Foundation to R.H., S.L. and S.M.H. (project ‘Reclaiming individual
autonomy and democratic discourse online: how to rebalance human
and algorithmic decision making’). A.K., P.L.-S., R.H., S.L. and S.M.H.
also acknowledge funding from the EU Horizon project no. 101094752
‘Social media for democracy (SoMe4Dem)’. S.L. was supported by a
Research Award from the Humboldt Foundation in Germany and by an
ERC Advanced Grant (no. 101020961 PRODEMINFO) while this research
was conducted. U.K.H.E. was supported by an Australian Research
Council Future Fellowship (no. FT190100708). H.L. acknowledges
funding from the French Agence Nationale de la Recherche under the
Investissement d'Avenir program ANR-17-EURE-0010.
Author contributions
Conceptualization: A.K., P.L.-S., S.M.H., S.L., U.K.H.E. and R.H.
Visualization: A.K. and S.M.H. Supervision: P.L.-S., S.M.H., S.L., U.K.H.E.
and R.H. Writing—original draft: A.K., P.L.-S., U.K.H.E., M.G. and J.B.-C.
Writing—review and editing: A.K., P.L.-S., S.M.H., S.L., U.K.H.E. and
R.H. Coordinating authors: A.K., P.L.-S., S.M.H., S.L., U.K.H.E. and R.H.
Contributing authors: A.A., J.B.-C., S.B., M.B., A.J.B., C.B., J.C., L.K.F.,
M.G., A.M.G., H.H., H.L., R.M., F.P., G.P., D.G.R., S.R., J.R., P. Schmid, M.S.,
B.S.-T., P. Szewach, S.v.d.L. and S.W.
Competing interests
For studies included in the evidence overview, G.P., D.G.R. and A.J.B.
received research funding and research support through gifts from
Google and Meta. A.M.G. and A.A. received an unrestricted research
grant from Meta. L.K.F. received research funding from Meta. S.v.d.L.,
S.R. and S.L. received research funding from Google Jigsaw. S.W. and
M.S. received research funding from Google.org. All other authors
declare no competing interests.
Additional information
Supplementary information The online version contains
supplementary material available at
https://doi.org/10.1038/s41562-024-01881-0.
Correspondence and requests for materials should be addressed to
Anastasia Kozyreva.
Peer review information Nature Human Behaviour thanks Madalina
Vlasceanu and Kevin Aslett for their contribution to the peer review of
this work.
Reprints and permissions information is available at
www.nature.com/reprints.
Publisher’s note Springer Nature remains neutral with regard
to jurisdictional claims in published maps and institutional
affiliations.
Springer Nature or its licensor (e.g. a society or other partner) holds
exclusive rights to this article under a publishing agreement with
the author(s) or other rightsholder(s); author self-archiving of the
accepted manuscript version of this article is solely governed by the
terms of such publishing agreement and applicable law.
© Springer Nature Limited 2024
1Center for Adaptive Rationality, Max Planck Institute for Human Development, Berlin, Germany. 2School of Psychological Science & Public Policy
Institute, University of Western Australia, Perth, Western Australia, Australia. 3School of Psychological Science, University of Bristol, Bristol, UK.
4Department of Psychology, University of Potsdam, Potsdam, Germany. 5Department of Economics, Lahore University of Management Sciences,
Lahore, Pakistan. 6Craig Newmark Center, School of Journalism, Columbia University, New York, NY, USA. 7Department of Learning and
Instructional Sciences, University of Haifa, Haifa, Israel. 8Department of Psychology, University of Cambridge, Cambridge, UK. 9Department of
Political Science, Massachusetts Institute of Technology, Cambridge, MA, USA. 10Institute for Planetary Health Behaviour, University of Erfurt,
Erfurt, Germany. 11Bernhard Nocht Institute for Tropical Medicine, Hamburg, Germany. 12Melbourne Centre for Behaviour Change, University of
Melbourne, Melbourne, Victoria, Australia. 13Department of Psychology and Human Development, Vanderbilt University, Nashville, TN, USA.
14Department of Psychology, Humboldt University of Berlin, Berlin, Germany. 15Department of Politics and School of Public and International
Affairs, Princeton University, Princeton, NJ, USA. 16Department of Political Science, Ohio State University, Columbus, OH, USA. 17Departments of
Economics and Political Science, Instituto Tecnológico Autónomo de México, Mexico City, Mexico. 18Department of Experimental Psychology,
University of Oxford, Oxford, UK. 19IMT School for Advanced Studies Lucca, Lucca, Italy. 20Department of Psychology, Cornell University, Ithaca,
NY, USA. 21Department of Psychology, University of Regina, Regina, Saskatchewan, Canada. 22Sloan School of Management, Massachusetts
Institute of Technology, Cambridge, MA, USA. 23Department of Psychology, New York University, New York, NY, USA. 24 Department of Politics,
University of Exeter, Exeter, UK. 25Centre for Language Studies, Radboud University Nijmegen, Nijmegen, the Netherlands. 26Graduate School
of Education, Stanford University, Stanford, CA, USA. 27Department of Political Science, Northeastern University, Boston, MA, USA. 28Barcelona
Supercomputing Center, Barcelona, Spain. 29These authors jointly supervised this work: Anastasia Kozyreva, Philipp Lorenz-Spreen,
Stefan M. Herzog, Ullrich K. H. Ecker, Stephan Lewandowsky, Ralph Hertwig. e-mail: kozyreva@mpib-berlin.mpg.de