Nature Human Behaviour | Volume 8 | June 2024 | 1044–1052
Review article | https://doi.org/10.1038/s41562-024-01881-0
Toolbox of individual-level interventions
against online misinformation
Anastasia Kozyreva 1 , Philipp Lorenz-Spreen 1,29, Stefan M. Herzog 1,29,
Ullrich K. H. Ecker 2,29, Stephan Lewandowsky 3,4,29, Ralph Hertwig 1,29,
Ayesha Ali5, Joe Bak-Coleman 6, Sarit Barzilai 7, Melisa Basol8,
Adam J. Berinsky 9, Cornelia Betsch 10,11, John Cook12, Lisa K. Fazio 13,
Michael Geers 1,14 , Andrew M. Guess 15, Haifeng Huang 16,
Horacio Larreguy 17, Rakoen Maertens 18, Folco Panizza 19,
Gordon Pennycook20,21, David G. Rand 22, Steve Rathje 23, Jason Reiler 24,
Philipp Schmid 10,11,25, Mark Smith 26, Briony Swire-Thompson 27,
Paula Szewach 24,28, Sander van der Linden 8 & Sam Wineburg26
The spread of misinformation through media and social networks threatens many aspects of society, including public health and the state of democracies. One approach to mitigating the effect of misinformation focuses on individual-level interventions, equipping policymakers and the public with essential tools to curb the spread and influence of falsehoods. Here we introduce a toolbox of individual-level interventions for reducing harm from online misinformation. Comprising an up-to-date account of interventions featured in 81 scientific papers from across the globe, the toolbox provides both a conceptual overview of nine main types of interventions, including their target, scope and examples, and a summary of the empirical evidence supporting the interventions, including the methods and experimental paradigms used to test them. The nine types of interventions covered are accuracy prompts, debunking and rebuttals, friction, inoculation, lateral reading and verification strategies, media-literacy tips, social norms, source-credibility labels, and warning and fact-checking labels.
The proliferation of false and misleading information online is a complex and global phenomenon that is influenced by both large online platforms and individual and collective behaviour1,2. In its capacity to damage public health and the health of democracies, online misinformation poses a major policy problem3–6. Online platforms have addressed this problem by attempting to curtail the reach of misinformation (for example, by algorithmically downgrading false content in newsfeeds), by highlighting certain features of the content to users (for example, fact-checking and labelling) and by removing false content and suspending accounts that spread falsehoods7. Content moderation, however, is a contentious issue and a moral minefield, where the goal of preventing the spread of harmful misinformation can clash with freedom of expression8,9. A key policy concern is therefore how to reduce the spread and influence of misinformation while keeping harsh content-removal measures to a minimum.
In recent years, research on misinformation in the behavioural
and social sciences has introduced a range of interventions designed
to target users’ competences and behaviours in a variety of ways: by
debunking false claims10, by boosting people’s competences (for
example, digital media literacy11) and resilience against manipulation
(for example, pre-emptive inoculation12,13), by implementing design
choices that slow the process of sharing misinformation14, by directing
attention to the importance of accuracy15 or by highlighting whether
the information in question is trustworthy16. These interventions stem from different disciplines, including cognitive science17, political and social psychology18,19, and education research20,21. They also represent different categories of behavioural policy interventions (for example, nudges, boosts and refutation strategies; for overviews, see refs. 22–25). Across interventions, different research methodologies have been used to test their effectiveness, including having participants evaluate headlines in online experiments26, asking students to evaluate websites20 and running field experiments on social networks13,15,27,28.

Several literature reviews have addressed various aspects of misinformation interventions. Some focus on psychological drivers behind beliefs in falsehoods and how to correct them24,29, others distinguish between key classes of behavioural interventions22,23,25 and the entry points for those interventions in people's engagement with misinformation30, and still others focus on measures targeting specific misinformation topics such as climate change denialism31 or false narratives about COVID-19 (ref. 32). Most recently, researchers have also started reviewing the state of evidence for misinformation interventions in the Global South33.

In light of this rapid progress, we believe now is the time to develop a common conceptual and methodological space in the field, to evaluate the state of the evidence and to provide a resource that inspires future research directions. Importantly, our aim is not to rate interventions or approaches but rather to produce a versatile digital toolbox
of interventions against online misinformation that can be tailored
according to the environment and target audience. Here we report
the results of a review of behavioural and cognitive interventions to
counter online misinformation, based on 81 scientific papers pro-
viding evidence from studies conducted around the world. We also
review the four main experimental paradigms in research on misinfor-
mation interventions: the misinformation-correction paradigm, the
headline-discernment paradigm, the technique-adoption paradigm
and the emerging paradigm of field studies on social media. This paper
is accompanied by a detailed online research and policy resource,
the toolbox of interventions against online misinformation (https://
interventionstoolbox.mpib-berlin.mpg.de and Fig. 1).
This Review represents a collaborative effort of an international
group of experts. The main goal of this Review is to identify a collec-
tion of empirically validated cognitive and behavioural interventions
against misinformation that target different behavioural and cognitive
outcomes. The intervention types in the toolbox explicitly tackle the
challenge of misinformation, encompassing disinformation, false and
misleading information, fake news and related issues. Our toolbox
does not evaluate the interventions’ potential to be implemented, nor
does it analyse their comparative effectiveness34. Although it does not provide a systematic synthesis of the evidence or a meta-analysis of misinformation interventions, we aimed to ensure that our toolbox covers all relevant interventions in the field of misinformation research that satisfy our inclusion criteria (for the details of our approach and inclusion criteria, see Supplementary Information).
Fig. 1 | Structure of the toolbox and map of evidence. The upper part of the figure summarizes the structure and composition of the toolbox, which is available at https://interventionstoolbox.mpib-berlin.mpg.de. The world map of evidence shows the studies from the evidence toolbox by the country in which they were conducted and by intervention type. Circle size denotes the number of studies. For the interactive version of the map, see https://interventionstoolbox.mpib-berlin.mpg.de/toolbox_map.html. Map made with Natural Earth. [Figure panels: the conceptual toolbox (nine intervention types defined along ten dimensions); the evidence toolbox (81 scientific papers summarized and defined along ten dimensions plus study details); experts (30 misinformation researchers from 11 countries and 27 universities and research institutions); criteria for inclusion (1. definition and scope, 2. problem addressed, 3. non-redundancy, 4. evidence, 5. expert opinion); and a world map of evidence showing the nine intervention types.]
Our Review includes evidence from published or about-to-be-
published academic research. We could not include evidence from
interventions that have been rolled out by and on social media plat-
forms due to a lack of published evidence. Platforms undoubtedly
conduct large-scale research on interventions such as friction treat-
ments (for example, prompts on Twitter35), crowdsourced corrections (for example, community notes on X36), and warning and fact-checking
labels, but this research and its findings are, regrettably, not available
to academic researchers.
The toolbox
This toolbox focuses on two points of interest: a conceptual overview
of the interventions and an overview of the empirical evidence support-
ing the interventions, including the methods used to test them. Both
overviews are publicly available as an online supplement in the form of two dynamic tables: a conceptual overview (https://interventionstoolbox.mpib-berlin.mpg.de/table_concept.html) and an evidence overview (https://interventionstoolbox.mpib-berlin.mpg.de/table_evidence.html). The online supplement also contains selected examples of interventions (https://interventionstoolbox.mpib-berlin.mpg.de/table_examples.html) and a world map of evidence (https://interventionstoolbox.mpib-berlin.mpg.de/toolbox_map.html).
Conceptual overview of interventions
The toolbox includes nine types of individual-level interventions, all
supported by peer-reviewed, published evidence: accuracy prompts,
debunking and rebuttals, friction, inoculation, lateral reading and veri-
fication strategies, media-literacy tips, social norms, source-credibility
labels, and warning and fact-checking labels. These nine types of inter-
ventions fall under three intervention categories: nudges, which tar-
get behaviours; boosts and educational interventions, which target
competences; and refutation strategies, which target beliefs. Note
that interventions may fall under more than one category. Table 1
provides a condensed overview of the intervention types, listed by
policy intervention category.
Nudging is a behavioural policy approach that uses principles of
human psychology to design choice architectures that steer people’s
decisions—ideally towards a greater individual or public good37,38.
Nudging interventions primarily target behaviour, such as sharing
behaviour on social media. For example, accuracy prompts (Table 1,
‘Accuracy prompts’) remind people of the importance of information
accuracy to encourage them to share fewer false headlines39. Other
nudges introduce friction into a decision-making process to slow
the process of sharing information—for instance, asking a person to
pause and think before sharing content on social media14 or to read
an article before sharing it40 (Table 1, ‘Friction’). Nudges that leverage
social norms—what other people believe, do or find acceptable—can
encourage people to adopt similar standards. For instance, telling users
that most other users do not share or act on certain misinformation can
lead them to react in a similar way41 (Table 1, ‘Social norms’).
Boosting is a behavioural policy approach that enlists human
cognition, the environment or both to help people to strengthen exist-
ing competences or develop new ones42 that are useful for coping
with a given policy problem, such as the proliferation of misinfor-
mation. Because people choose for themselves whether and how to
engage with a boost, these interventions are by necessity transpar-
ent to the target audience. In online environments, boosts and other
educational interventions aim to foster digital competences; for
instance, they may help people to improve their media literacy by
offering them simple tips11 (Table 1, ‘Media-literacy tips’) or help them
to acquire online fact-checking competences such as lateral reading,
which involves conducting additional independent online searches to
check the trustworthiness of a claim or source. Lateral reading and
other verification strategies (for example, image searching or tracing
the original context of the information) can be implemented as educa-
tional interventions in a school curriculum20,43 or as boosts via a short
online video, a simple pop-up44 or an online game45 (Table 1, ‘Lateral
reading and verification strategies’). Another type of boost, inocula-
tion, aims to foster people’s ability to recognize misleading tactics and
information. Inoculation can be topic-specific46, or it can highlight and
explain general argumentation strategies used to mislead audiences.
On the basis of inoculation theory47, inoculation can be implemented
via text, online games or short videos12,13. Inoculation is also a refutation
strategy because it can be used to pre-emptively refute false informa-
tion (Table 1, ‘Inoculation’).
The objective of refutation strategies is to debunk misinforma-
tion or to prebunk it—that is, to warn people about misinformation
before they encounter it. Their primary target is belief calibration10.
Refutation strategies aim to reduce false beliefs by providing factual
information alongside an explanation of why a piece of misinformation
is false or misleading (Table 1, ‘Debunking and rebuttals’). Debunking
tends to be fact-based but can also appeal to logic and critical think-
ing—for instance, by explaining a misleading argumentation strategy
or discrediting the source of the misinformation (for a review, see
ref. 24). Refutation strategies can also take the form of informa-
tional labels, including source-credibility labels and warning and
fact-checking labels. Such labels have already been rolled out by
some social media platforms and search engines (Table 1, ‘Warning
and fact-checking labels’ and ‘Source-credibility labels’). Although
refutation strategies reliably reduce misconceptions, applying them
reactively to specific pieces of misinformation requires fact-checking
at scale48. Applied pre-emptively via prebunking or inoculation (see the
previous paragraph), refutation strategies can build people’s resilience
against misinformation they have not yet encountered.
Overview of the evidence behind the interventions
The toolbox also provides a summary of the evidence behind the nine
types of interventions. This part of the toolbox is based on 81 scientific
papers and is available online as a searchable and expandable table at
https://interventionstoolbox.mpib-berlin.mpg.de/table_evidence.
html. The table includes several empirical papers for each interven-
tion, as well as an overview of each paper’s sample, experimental
paradigm, study design, outcome measures, main findings and lon-
gevity tests. Expanding the row associated with a paper reveals more
detailed information about methods and effect sizes, the full refer-
ence, a link to open data (if available) and the abstract. A separate sec-
tion of the toolbox maps these empirical papers to the countries in
which they were conducted (Fig. 1 and https://interventionstoolbox.mpib-berlin.mpg.de/toolbox_map.html).
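To illustrate how the study-level fields just described (sample, paradigm, design, outcome measures, findings, longevity tests and effect sizes) could be consolidated into machine-readable records, here is a minimal, hypothetical sketch in Python; the class and field names are illustrative assumptions, not the toolbox's actual schema.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class EvidenceRecord:
    """One hypothetical row of an evidence table, mirroring the fields
    described above (sample, paradigm, design, outcomes, findings,
    longevity tests, effect sizes, reference, open data)."""
    intervention_type: str                  # e.g., "accuracy prompts"
    reference: str                          # full citation of the paper
    country: str                            # where the study was conducted
    sample: str                             # e.g., "N = 1,000, online panel"
    paradigm: str                           # e.g., "headline discernment"
    design: str                             # e.g., "between-subjects experiment"
    outcome_measures: list[str] = field(default_factory=list)
    main_findings: str = ""
    longevity_test: Optional[str] = None    # e.g., "2-week follow-up"
    effect_size: Optional[float] = None     # standardized effect size, if reported
    open_data_url: Optional[str] = None

# Example record (fictitious values, for illustration only):
example = EvidenceRecord(
    intervention_type="accuracy prompts",
    reference="Author, A. et al. (Year). Journal.",
    country="USA",
    sample="N = 1,000, online panel",
    paradigm="headline discernment",
    design="between-subjects experiment",
    outcome_measures=["sharing discernment"],
    main_findings="Prompt increased sharing discernment relative to control.",
)
```

Records of this kind could later be filtered or aggregated by intervention type, paradigm or country, in the spirit of the databases discussed further below.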
Several observations about the current state of the literature
can be derived from this evidence overview. First, evidence from the
Global North, especially the USA, is overrepresented for many inter-
vention types in the toolbox. However, several intervention types have
been tested across the globe (for example, debunking49–52, accuracy prompts53,54 and media-literacy tips11,28,53,55). This generally reflects the
). This generally reflects the
state of the field, as others have also pointed out the lack of evidence
from the Global South33—a situation that is thankfully changing rapidly.
Although the interventions target universal problems and behaviours,
they are sensitive to cultural differences. In cases where there is evi-
dence from multiple countries or even multiple probability samples
(for example, tests of media-literacy tips in India28 and Pakistan55),
interventions were not always equally effective across cultures and
populations and were often less effective for less educated rural popu-
lations (see also ref. 11). These findings point to a potentially complex
relationship between people’s cultural contexts and their competences
and behaviours vis-à-vis misinformation. Future studies are needed to
examine this potentially intricate relationship in more detail.
Table 1 | Overview of intervention types in the toolbox

Nudges

Accuracy prompts
Description: Accuracy prompts are used to shift people’s attention to the concept of accuracy.
Example: Asking people to evaluate the accuracy of a headline or showing people a video about the importance of sharing only accurate content.
Targeted outcome: Behaviour: thinking about accuracy before sharing information online.
Outcome variables: Sharing discernment.

Friction
Description: Friction makes relevant processes slower or more effortful by design.
Example: Asking people to pause and think before sharing content on social media. This could be as simple as a short prompt—for example, ‘Want to read this before sharing?’
Targeted outcome: Behaviour: pausing rather than acting on initial impulse.
Outcome variables: Sharing intentions.

Social norms
Description: Social norms leverage social information (peer influence) to encourage people not to believe, endorse or share misinformation.
Example: Emphasizing that most people of a given group disapprove of sharing or using false information (descriptive norm) and/or that such actions are generally considered wrong, inappropriate or harmful (injunctive norm).
Targeted outcome: Belief calibration and behaviour: following normative beliefs, for example, when sharing information online.
Outcome variables: Beliefs in misinformation; sharing intentions.

Boosts and educational interventions

Inoculation
Description: Inoculation is a pre-emptive intervention that exposes people to a weakened form of common misinformation and/or manipulation strategies to build up their ability to resist them.
Example: Teaching people about the strategy of using ‘fake experts’ (presenting unqualified people as credible) to increase their recognition of and resilience to this strategy.
Targeted outcome: Belief calibration and competence: detecting and resisting manipulative and false information.
Outcome variables: Accuracy/credibility discernment; manipulation technique recognition.

Lateral reading and verification strategies
Description: Verification strategies for evaluating online information encompass a range of techniques and methods used to assess the credibility, accuracy and reliability of digital content. Lateral reading is a strategy used by professional fact-checkers that involves investigating the credibility of a website by searching for information about it on other sites. Other verification strategies include image searching and tracing the original context of the information.
Example: School-based interventions with instructional strategies such as teacher modelling and guided practice can be used to teach lateral reading. Pop-up graphics can also be used to prompt social media users to read laterally.
Targeted outcome: Competence: evaluating the credibility of online sources.
Outcome variables: Credibility assessment of websites; use of verification strategies (self-reported or tracked).

Media-literacy tips
Description: Media-literacy tips give people a list of strategies for identifying false and misleading information in their newsfeeds.
Example: Facebook offers tips to spot false news, including “be sceptical of headlines”, “look closely at the URL” and “investigate the source”.
Targeted outcome: Competence: media literacy and social media skills.
Outcome variables: Accuracy discernment; sharing discernment.

Refutation strategies

Debunking and rebuttals
Description: Debunking and rebuttals are strategies aimed at dispelling misconceptions and countering false beliefs. Debunking involves offering corrective information to address a specific misconception. Rebuttals, particularly in the context of science denialism, consist of presenting accurate facts related to a topic that has been inaccurately addressed (topic rebuttal) or exposing the rhetorical tactics often used to reject established scientific findings (technique rebuttal).
Example: Debunking can be implemented in four steps: (1) state the truth, (2) warn about imminent misinformation exposure, (3) specify the misinformation and explain why it is wrong, and (4) reinforce the truth by offering the correct explanation. Depending on the circumstances (for example, the availability of a pithy fact), starting with step 2 may be appropriate.
Targeted outcome: Belief calibration and competence: detecting and resisting manipulative and false information.
Outcome variables: Beliefs in misinformation; attitudes to relevant topics (for example, vaccination); behavioural intentions; continued influence of misinformation.

Warning and fact-checking labels
Description: Warning labels explicitly alert individuals to the possibility of being misled by a particular piece of information or its source. Fact-checking labels indicate the trustworthiness rating assigned to a piece of content by professional fact-checkers.
Example: Facebook adds the labels “False (Independent fact-checkers say this information has no basis in fact)” or “Partly false (Independent fact-checkers say this information has some factual inaccuracies)”.
Targeted outcome: Belief calibration and competence: detecting false or other types of problematic information.
Outcome variables: Accuracy judgements; sharing intentions.

Source-credibility labels
Description: Source-credibility labels show how a particular news source was rated by professional fact-checking organizations.
Example: NewsGuard labels indicate the trustworthiness of news and information websites with a reliability rating from 0 to 100, on the basis of nine journalistic criteria that assess basic practices of reliability and transparency.
Targeted outcome: Belief calibration and competence: detecting sources of false or untrustworthy information.
Outcome variables: Sharing intentions; accuracy judgements; information diet quality.

For the full version of the conceptual overview, see https://interventionstoolbox.mpib-berlin.mpg.de/table_concept.html; for the evidence overview, see https://interventionstoolbox.mpib-berlin.mpg.de/table_evidence.html; for examples of intervention implementations, see https://interventionstoolbox.mpib-berlin.mpg.de/table_examples.html.
Second, few studies have tested the long-term effects of inter-
ventions. Of those that have, many observed some decay in effective-
ness56–58. The mechanisms behind the longevity or lack thereof of
interventions’ effects are currently poorly understood.
Third, it is difficult to compare interventions due to variability
in how their effectiveness was studied. The core differences relate
to participants’ tasks—in particular, the paradigm used, including
the test stimuli (for example, news headlines, real-world claims or
websites), and the measured outcome variables (for example, belief
or credibility ratings and behavioural measures). Ongoing efforts
to systematically compare interventions in large-scale megastudies
would benefit from appropriate standards for paradigms, test stimuli
and outcome measures.
As a first step, we identify four main paradigms in research on mis-
information interventions: the misinformation-correction paradigm,
the headline-discernment paradigm, the technique-adoption paradigm
and field studies on social media, an important emerging paradigm.
Experimental paradigms in testing interventions
The misinformation-correction paradigm typically presents people
with corrections and measures the effect on their belief in relevant
misinformation claims, their claim-related inferential reasoning and
their attitudes towards associated issues. Key outcome variables in
this paradigm include belief and attitude ratings, as well as reliance
on misinformation when responding to inferential-reasoning ques-
tions. Stimuli can be real-world claims that can be fact-checked (for
example, urban myths, scientific claims and politicians’ statements)
or fictional reports containing information that is deemed to be false.
Variations of this paradigm have been used in studies on refutation
strategies, including debunking59, rebuttals of science denialism60,
inoculation, and warning and fact-checking labels. The main meas-
ure of the effectiveness of corrections in this paradigm is whether the
intervention eliminates or attenuates the continued influence of the
misinformation—that is, whether people still make inferences as if
that information were true despite remembering the correction. For
instance, someone may continue to infer a person’s guilt despite know-
ing that they have been cleared of charges61,62. This occurs primarily
when the misinformation affords a causal explanation; providing an
alternative factual explanation can therefore reduce the influence
of misinformation. Misinformation correction is one of the earliest
paradigms in misinformation research.
In the headline-discernment paradigm, participants evaluate
the plausibility, credibility or veracity of true and false headlines and
indicate whether they would be willing to share them (for example,
see ref. 26). Outcome variables include accuracy ratings and people’s
ability to discern true and false headlines, as well as measures of sharing
intention and sharing discernment (that is, the difference in sharing
true versus false headlines). Specifically, Guay et al.63 argued in favour
of an approach that “exposes participants to a mix of true and false con-
tent, and incorporates ratings of both into a measure of discernment”
(p. 1232). News headlines presented as stimuli can be real (commonly
taken from mainstream news or fake-news websites and fact-checking
sources) or fictional. Several interventions have been tested with vari-
ations of the headline-discernment paradigm, including accuracy
prompts, online media-literacy tips, friction, labels and inoculation.
Headline discernment is one of the most widely adopted paradigms in
research on misinformation interventions in the past decade.
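To make these outcome measures concrete, the following minimal sketch computes accuracy and sharing discernment as the difference between mean ratings for true and false headlines. It assumes a simple long-format table of per-headline ratings; the column names are illustrative, not a standard taken from the studies cited above.

```python
import pandas as pd

def discernment_scores(df: pd.DataFrame) -> pd.Series:
    """Compute accuracy and sharing discernment for one participant.

    Expects one row per rated headline with columns:
      - is_true: True for factual headlines, False for false ones
      - accuracy_rating: perceived accuracy (e.g., on a 1-6 scale)
      - share_intent: willingness to share (e.g., on a 1-6 scale)
    Discernment = mean rating for true headlines minus mean rating
    for false headlines (higher values indicate better discrimination).
    """
    true_items = df[df["is_true"]]
    false_items = df[~df["is_true"]]
    return pd.Series({
        "accuracy_discernment": true_items["accuracy_rating"].mean()
                                - false_items["accuracy_rating"].mean(),
        "sharing_discernment": true_items["share_intent"].mean()
                               - false_items["share_intent"].mean(),
    })

# Usage (assuming a DataFrame `ratings` with a participant_id column):
# scores = ratings.groupby("participant_id").apply(discernment_scores)
```

Per-participant scores computed this way can then be compared between intervention and control conditions.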
The technique-adoption paradigm assesses whether participants
successfully learn the skills and strategies required to evaluate informa-
tion veracity. Its affinity with assessments of educational curricula is
particularly evident in studies of lateral-reading interventions in the
field (for example, refs. 20,43). Outcome variables include assessment
scores for demonstrating the skills learned during the intervention (for
example, identifying an information source and assessing its credi-
bility), and stimuli include online articles or entire websites. Studies
that test the effectiveness of lateral reading in online settings often
gauge success by measuring browsing behaviour (self-reported or
tracked; for example, see ref. 44). This measure renders lateral reading
difficult to compare to other interventions. The adoption of a specific
skill is also relevant for some inoculation-type interventions, where
a key measure of the intervention’s success is participants’ ability to
detect manipulation tactics (which is assumed to predict subsequent
resistance to misdirection). For instance, studies on video-based inocu-
lation often assess the ability to spot persuasion techniques (such as
detecting the use of emotional language to influence the audience13)
as a primary outcome measure.
In addition to these three established experimental paradigms,
a growing body of research tests laboratory-based misinformation
interventions in the field, often on social media64. Outcome variables
vary; they include the quality of people’s information diets, the qual-
ity of what people share on their newsfeeds and people’s accuracy
or sharing discernment. For example, a recent study65 used digital
trace data to study how the consumption of news from low-quality
sources would be affected by the NewsGuard browser add-on (which
provides source-credibility labels). Another deployed an accuracy
nudge on Twitter15: users who followed US news outlets rated as highly
untrustworthy by professional fact-checkers were sent a private mes-
sage asking them to rate the accuracy of a headline. This intervention
increased the quality of sources that they subsequently shared. A test
of two psychological inoculation videos shown as YouTube ads13 found
that the videos helped US users to identify a manipulation technique in
a subsequent test headline. Similarly, a five-day text-message educa-
tional course on emotion-based and reasoning-based misinformation
techniques successfully reduced misinformation-sharing intentions in
Kenya66. Field studies on social media thus represent a new and growing
paradigm in research on misinformation interventions.
A cumulative science of interventions against online
misinformation
The heterogeneity in methods and experimental paradigms across
interventions is noteworthy—and is, in our view, a strength. Heter-
ogeneity should eventually make it possible to qualitatively assess
the convergence (or lack thereof) of insights and implications across
methods over time. Moreover, using a variety of outcome measures is
in principle advantageous because misinformation appears in many
forms across a range of contexts. Testing the effectiveness of interven-
tions against different outcome measures can be useful for assessing
their scope and generalizability.
However, this heterogeneity also comes at the price of slowing
cumulative progress in the field67,68, as it can be difficult to directly
compare results across diverse intervention types and idiosyncratic
paradigms. Different studies not only use different paradigms but also
measure different outcome variables and focus on different stages of
engaging with misinformation (that is, selecting information sources,
choosing what information to consume or ignore, evaluating the accu-
racy of the information and/or the credibility of the source, or judging
whether and how to react to the information30). Furthermore, stud-
ies generally do not use the same kinds of stimuli—and in those that
do, the stimuli probably vary in their distributions of difficulty (for
example, in terms of accuracy discernment). All these discrepancies
greatly reduce the extent to which meaningful conclusions can be
drawn by directly comparing, say, standardized effect sizes between
all intervention types.
Certain comparisons can, however, be meaningful when the rel-
evant characteristics of the studies are aligned (for example, studies
on accuracy prompts and media-literacy tips can be reasonably com-
pared because they tend to use similar tasks and outcome variables;
these have even been tested within one study53). We advocate for three
complementary research practices to facilitate meaningful compari-
sons. First, we call for routine reporting of both raw and standardized
effect sizes69,70 to facilitate research synthesis68. Second, the field would
. Second, the field would
benefit from the development of ontologies of misinformation inter-
ventions71, which would make it possible to consolidate results from
the literature into databases. This would allow researchers to effi-
ciently identify and aggregate results and effect sizes of relevant sets
of studies on the basis of their design, participants and item features
(see ref. 72 for a database of 2,636 studies on human cooperation that
enables researchers to efficiently conduct meta-analyses). It can also
be worthwhile to create databases for individual participant data,
which allows for much richer re-analyses than standard meta-analyses
based on study-level effect sizes73. Third, larger groups of researchers74
should unite to conduct megastudies75,76 that compare many relevant
interventions within the same setting, thus allowing for more stringent
comparisons (for example, see ref. 77).
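As a minimal illustration of the first practice, the sketch below reports a raw mean difference alongside a standardized effect size (Cohen's d with a pooled standard deviation) for two groups of, say, sharing-discernment scores; the data here are simulated and purely illustrative.

```python
import numpy as np

def raw_and_standardized_effect(treatment: np.ndarray, control: np.ndarray) -> dict:
    """Raw mean difference and Cohen's d (pooled SD) between two groups,
    for example sharing-discernment scores under an intervention vs control."""
    raw_diff = treatment.mean() - control.mean()
    n_t, n_c = len(treatment), len(control)
    pooled_sd = np.sqrt(((n_t - 1) * treatment.var(ddof=1)
                         + (n_c - 1) * control.var(ddof=1)) / (n_t + n_c - 2))
    return {"raw_mean_difference": raw_diff, "cohens_d": raw_diff / pooled_sd}

# Example with simulated scores (for illustration only):
rng = np.random.default_rng(0)
treatment = rng.normal(0.8, 1.0, 500)   # hypothetical discernment scores
control = rng.normal(0.5, 1.0, 500)
print(raw_and_standardized_effect(treatment, control))
```

Reporting both quantities keeps the effect interpretable on the original response scale while still allowing synthesis across studies that use different scales.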
Implications and ways forward
This Review provides an overview of nine types of interventions aimed
at fighting online misinformation, as well as an overview of the asso-
ciated evidence. With this, we also provide an accessible resource to
the research community, policymakers and the public in the form of
an interactive online toolbox comprising two databases (https://interventionstoolbox.mpib-berlin.mpg.de).
We envision several uses for the toolbox. For researchers, it pro-
vides a starting point for meta-analytic studies, systematic reviews and
studies comparing the effectiveness of different interventions. It can
also inform efforts to standardize and coordinate methods, thereby
increasing the comparability of future results78. Furthermore, the tool-
box highlights important gaps in the available evidence (for example,
underrepresented populations and cultures and the lack of studies on
long-term effects) that should be addressed in future studies.
The toolbox is also a valuable resource for policymakers and the
public. It provides accessible, up-to-date scientific knowledge that can
inform policy discussions about misinformation countermeasures
and platform regulation. The toolbox can also be used as a resource
for educational programmes and for individuals who wish to practice
self-nudging79. The versatility and diversity of interventions in the
toolbox increase the likelihood of addressing the heterogeneous needs,
preferences and skills of different users.
Relatedly, where a single intervention may have only limited effects,
the toolbox helps researchers, policymakers, educators and the public
to combine interventions to address different aspects of a misinfor-
mation problem80,81. The interventions in our toolbox act on people’s
cognition and behaviour to reduce either their tendency to share mis-
information or the extent to which they are affected by it. However,
there is no simple link between reductions in sharing misinformation
at the individual level and the burden and spread of misinformation at
the platform level. Computational work suggests that small reductions
in sharing misinformation would produce outsized effects on total
posts80—an ideal scenario. Yet this work indicates that cognitive and
behavioural interventions alone are unlikely to fully address the prob-
lem of misinformation. Even in combination with traditional content
moderation (for example, account suspension and post removal), these
interventions may substantially reduce but not eliminate the spread of
misinformation80. User- and content-targeted approaches may be lim-
ited by the manner in which platform design (for example, algorithmic
sorting, recommendations, user interface and network topology) ampli-
fies misinformation82,83. The potential for design changes to reduce
misinformation is poorly understood due to a lack of regulated access
to platforms’ data on their ongoing interventions84. Access to this data
is necessary for testing types of interventions and design changes at
scale (for example, as recently demonstrated in ref. 85).
To understand the factors that influence the spread of misinfor-
mation, data access and collaborative efforts between researchers
and platforms are crucial. Individual-focused interventions can only
go so far in the face of complex global threats such as misinformation.
Whereas individual-level approaches aim at mitigating misinformation
by acting on individuals’ ability to recognize and not spread falsehoods,
system-level approaches can aim at making the entire online ecosystem
less conducive to the spread of misinformation—for instance, through
platform design, content moderation, communities of fact-checkers
and journalists, and high-level regulatory and policy interventions
(such as investing in public broadcasters and establishing regulatory
frameworks that promote a diverse media landscape). System-level
interventions may be particularly effective and long-lasting. Indeed,
they elicit the most confidence among misinformation researchers
and practitioners33.
In the meantime, the most urgent next steps for research on
individual-level tools are investigating their medium- and long-term
effects, exploring ways to scale them up (for example, via school cur-
ricula, apps, platform cooperations and pop-ups), building interven-
tions that reach people across educational backgrounds and creating
integrative interventions that empower people to reckon with different
types of content (for example, headlines and websites). Furthermore,
the toolbox of interventions against online misinformation will need to
continuously evolve, given the dynamic nature of information environ-
ments. Future research will therefore need not only to track the extent
to which interventions remain relevant and effective but also to refine
existing tools and develop new ones on the basis of systematic investigations into environmental and individual factors that may influence
trust in interventions and the likelihood of their adoption.
Data availability
All data are available at OSF (https://osf.io/ejyh6) and in the online supplement (https://interventionstoolbox.mpib-berlin.mpg.de).
Code availability
All code is available at OSF (https://osf.io/ejyh6).
References
1. Lazer, D. M. J. et al. The science of fake news. Science 359,
1094–1096 (2018).
2. Lewandowsky, S. et al. Technology and Democracy: Understanding
the Inluence of Online Technologies on Political Behaviour and
Decision Making JRC Science for Policy Report (Publications
Oice of the European Union, 2020).
3. Proposal for a Regulation of the European Parliament and of the
Council on a Single Market for Digital Services (Digital Services
Act) and Amending Directive 2000/31/EC (COM/2020/825 Final),
https://www.europarl.europa.eu/doceo/document/TA-9-2022-
0269_EN.html#title2 (European Parliament, 2020).
4. Lorenz-Spreen, P., Oswald, L., Lewandowsky, S. & Hertwig, R.
A systematic review of worldwide causal and correlational
evidence on digital media and democracy. Nat. Hum. Behav.
https://doi.org/10.1038/s41562-022-01460-1 (2022).
5. Kozyreva, A., Smillie, L. & Lewandowsky, S. Incorporating
psychological science into policy making. Eur. Psychol. 28,
206–224 (2023).
6. Lewandowsky, S. et al. Misinformation and the epistemic integrity
of democracy. Curr. Opin. Psychol. 54, 101711 (2023).
7. Rosen, G. Remove, Reduce, Inform: New Steps to Manage
Problematic Content, https://about.b.com/news/2019/04/
remove-reduce-inform-new-steps (Meta, 2019).
8. Douek, E. Governing online speech: from ‘posts-as-trumps’ to
proportionality and probability. Columbia Law Rev. 121, 759–834
(2021).
9. Kozyreva, A. et al. Resolving content moderation dilemmas
between free speech and harmful misinformation. Proc. Natl
Acad. Sci. USA 120, 2210666120 (2023).
10. Lewandowsky, S. et al. The Debunking Handbook 2020. Databrary
https://doi.org/10.17910/b7.1182 (2020).
11. Guess, A. M. et al. A digital media literacy intervention increases
discernment between mainstream and false news in the United
States and India. Proc. Natl Acad. Sci. USA 117, 15536–15545
(2020).
12. Basol, M., Roozenbeek, J. & Linden, S. Good news about bad
news: gamiied inoculation boosts conidence and cognitive
immunity against fake news. J. Cogn. https://doi.org/10.5334/
joc.91 (2020).
13. Roozenbeek, J., Linden, S., Goldberg, B., Rathje, S. &
Lewandowsky, S. Psychological inoculation improves resilience
against misinformation on social media. Sci. Adv. 8, 6254 (2022).
14. Fazio, L. Pausing to consider why a headline is true or false
can help reduce the sharing of false news. Harv. Kennedy Sch.
Misinformation Rev. https://doi.org/10.37016/mr-2020-009
(2020).
15. Pennycook, G. et al. Shifting attention to accuracy can reduce
misinformation online. Nature 592, 590–595 (2021).
16. Clayton, K. et al. Real solutions for fake news? Measuring the
eectiveness of general warnings and fact-check tags in reducing
belief in false stories on social media. Polit. Behav. 42, 1073–1095
(2020).
17. Pennycook, G. & Rand, D. G. The psychology of fake news. Trends
Cogn. Sci. 25, 388–402 (2021).
18. Brady, W. J., Wills, J. A., Jost, J. T., Tucker, J. A. & Van Bavel, J. J.
Emotion shapes the diusion of moralized content in social
networks. Proc. Natl Acad. Sci. USA 114, 7313–7318 (2017).
19. Van Bavel, J. J. et al. Political psychology in the digital (mis)
information age: a model of news belief and sharing. Soc. Issues
Policy Rev. 15, 84–113 (2021).
20. Wineburg, S., Breakstone, J., McGrew, S., Smith, M. D. & Ortega, T.
Lateral reading on the open internet: a district-wide ield study in
high school government classes. J. Educ. Psychol. 114, 893–909
(2022).
21. Osborne, J. et al. Science Education in an Age of Misinformation
(Stanford Univ., 2022).
22. Lorenz-Spreen, P., Lewandowsky, S., Sunstein, C. R. & Hertwig,
R. How behavioural sciences can promote truth, autonomy and
democratic discourse online. Nat. Hum. Behav. 4, 1102–1109
(2020).
23. Kozyreva, A., Lewandowsky, S. & Hertwig, R. Citizens versus the
internet: confronting digital challenges with cognitive tools.
Psychol. Sci. Public Interest 21, 103–156 (2020).
24. Ecker, U. K. H. et al. The psychological drivers of misinformation
belief and its resistance to correction. Nat. Rev. Psychol. 1, 13–29
(2022).
25. Roozenbeek, J., Culloty, E. & Suiter, J. Countering misinformation:
evidence, knowledge gaps, and implications of current
interventions. Eur. Psychol. 28, 189–205 (2023).
26. Pennycook, G., Binnendyk, J., Newton, C. & Rand, D. G.
A practical guide to doing behavioural research on fake news
and misinformation. Collabra Psychol. 7, 25293 (2020).
27. Wright, C. et al. Eects of brief exposure to misinformation
about e-cigarette harms on Twitter: a randomised controlled
experiment. BMJ Open 11, 045445 (2021).
28. Badrinathan, S. Educative interventions to combat
misinformation: evidence from a ield experiment in India. Am.
Polit. Sci. Rev. 115, 1325–1341 (2021).
29. Ziemer, C.-T. & Rothmund, T. Psychological underpinnings of
misinformation countermeasures. J. Media Psychol. https://doi.
org/10.1027/1864-1105/a000407 (2024).
30. Geers, M. et al. The online misinformation engagement
framework. Curr. Opin. Psychol. 55, 101739 (2023).
31. Hornsey, M. J. & Lewandowsky, S. A toolkit for understanding and
addressing climate scepticism. Nat. Hum. Behav. 6, 1454–1464
(2022).
32. Fasce, A. et al. A taxonomy of anti-vaccination arguments from a
systematic literature review and text modelling. Nat. Hum. Behav.
7, 1462–1480 (2023).
33. Blair, R. A. et al. Interventions to counter misinformation: lessons
from the Global North and applications to the Global South.
Curr. Opin. Psychol. 55, 101732 (2024).
34. IJzerman, H. et al. Use caution when applying behavioural science
to policy. Nat. Hum. Behav. 4, 1092–1094 (2020).
35. Twitter Comms. More reading—people open articles 40% more
often after seeing the prompt. X, https://web.archive.org/
web/20220804154748/; https://twitter.com/twittercomms/
status/1309178716988354561 (2020).
36. About Community Notes on X, https://help.twitter.com/en/
using-x/community-notes (accessed 16 February 2024).
37. Thaler, R. H. & Sunstein, C. R. Nudge: Improving Decisions about
Health, Wealth, and Happiness (Yale Univ. Press, 2008).
38. Thaler, R. H. & Sunstein, C. R. Nudge: The Final Edition
(Yale Univ. Press, 2021).
39. Pennycook, G. & Rand, D. G. Accuracy prompts are a replicable
and generalizable approach for reducing the spread of
misinformation. Nat. Commun. 13, 2333 (2022).
40. X Support. Sharing an article can spark conversation, so you may
want to read it before you Tweet it. X, https://twitter.com/
twittersupport/status/1270783537667551233 (2020).
41. Andı, S. & Akesson, J. Nudging away false news: evidence from a
social norms experiment. Digit. J. 9, 106–125 (2020).
42. Hertwig, R. & Grüne-Yano, T. Nudging and boosting: steering or
empowering good decisions. Perspect. Psychol. Sci. 12, 973–986
(2017).
43. Brodsky, J. E. et al. Improving college students’ fact-checking
strategies through lateral reading instruction in a general
education civics course. Cogn. Res. Princ. Implic. 6, 23 (2021).
44. Panizza, F. et al. Lateral reading and monetary incentives to spot
disinformation about science. Sci. Rep. 12, 5678 (2022).
45. Barzilai, S. et al. Misinformation is contagious: middle school
students learn how to evaluate and share information responsibly
through a digital game. Comput. Educ. 202, 104832 (2023).
46. Tay, L. Q., Hurlstone, M. J., Kurz, T. & Ecker, U. K. H. A comparison
of prebunking and debunking interventions for implied versus
explicit misinformation. Br. J. Psychol. 113, 591–607 (2022).
47. Lewandowsky, S. & Linden, S. Countering misinformation and fake
news through inoculation and prebunking. Eur. Rev. Soc. Psychol.
32, 348–384 (2021).
48. Gottfried, J. A., Hardy, B. W., Winneg, K. M. & Jamieson, K. H. Did
fact checking matter in the 2012 presidential campaign? Am.
Behav. Sci. 57, 1558–1567 (2013).
49. Huang, H. A war of (mis)information: the political eects of
rumors and rumor rebuttals in an authoritarian country. Br. J. Polit.
Sci. 47, 283–311 (2017).
50. Porter, E. & Wood, T. J. The global eectiveness of fact-checking:
evidence from simultaneous experiments in Argentina, Nigeria,
South Africa, and the United Kingdom. Proc. Natl Acad. Sci. USA
118, 2104235118 (2021).
51. Porter, E., Velez, Y. & Wood, T. J. Correcting COVID-19 vaccine
misinformation in 10 countries. R. Soc. Open Sci. 10, 221097 (2023).
52. Badrinathan, S. & Chauchard, S. ‘I don’t think that’s true, bro!’
Social corrections of misinformation in India. Int. J. Press Polit.
https://doi.org/10.1177/19401612231158770 (2023).
53. Arechar, A. A. et al. Understanding and combatting
misinformation across 16 countries on six continents. Nat. Hum.
Behav. 7, 1502–1513 (2023).
54. Oer-Westort, M., Rosenzweig, L. R. & Athey, S. Battling the
coronavirus 'infodemic' among social media users in Kenya and
Nigeria. Nat. Hum. Behav., https://doi.org/10.1038/s41562-023-
01810-7 (2024).
55. Ali, A. & Qazi, I. A. Countering misinformation on social media
through educational interventions: evidence from a randomized
experiment in Pakistan. J. Dev. Econ. 163, 103108 (2023).
56. Maertens, R., Roozenbeek, J., Basol, M. & Linden, S. Long-term
eectiveness of inoculation against misinformation: three
longitudinal experiments. J. Exp. Psychol. Appl. 27, 1–16 (2021).
57. Grady, R. H., Ditto, P. H. & Loftus, E. F. Nevertheless, partisanship
persisted: fake news warnings help briely, but bias returns with
time. Cogn. Res. Princ. Implic. 6, 52 (2021).
58. Paynter, J. et al. Evaluation of a template for countering
misinformation—real-world autism treatment myth debunking.
PLoS ONE 14, 0210746 (2019).
59. Ecker, U. K. H., Butler, L. H. & Hamby, A. You don’t have to tell a
story! A registered report testing the eectiveness of narrative
versus non-narrative misinformation corrections. Cogn. Res.
Princ. Implic. 5, 64 (2020).
60. Schmid, P. & Betsch, C. Eective strategies for rebutting science
denialism in public discussions. Nat. Hum. Behav. 3, 931–939
(2019).
61. Johnson, H. M. & Seifert, C. M. Sources of the continued inluence
eect: when misinformation in memory aects later inferences.
J. Exp. Psychol. Learn. Mem. Cogn. 20, 1420–1436 (1994).
62. Lewandowsky, S., Stritzke, W. G. K., Oberauer, K. & Morales, M.
Memory for fact, iction, and misinformation: the Iraq War 2003.
Psychol. Sci. 16, 190–195 (2005).
63. Guay, B., Berinsky, A. J., Pennycook, G. & Rand, D. How to think
about whether misinformation interventions work. Nat. Hum.
Behav. 7, 1231–1233 (2023).
64. Mosleh, M., Pennycook, G. & Rand, D. G. Field experiments on
social media. Curr. Dir. Psychol. Sci. 31, 69–75 (2021).
65. Aslett, K., Guess, A. M., Bonneau, R., Nagler, J. & Tucker, J. A. News
credibility labels have limited average eects on news
diet quality and fail to reduce misperceptions. Sci. Adv. 8,
eabl3844 (2022).
66. Carleton Athey, S., Cersosimo, M., Koutout, K. & Li, Z. Emotion-
versus Reasoning-Based Drivers of Misinformation Sharing:
A Field Experiment Using Text Message Courses in Kenya Stanford
University Graduate School of Business Research Paper
No. 4489759 (SSRN, 2023).
67. Almaatouq, A. et al. Beyond playing 20 questions with
nature: integrative experiment design in the social and
behavioral sciences. Behav. Brain Sci. https://doi.org/10.1017/
S0140525X22002874 (2022).
68. Cooper, H., Hedges, L. V. & Valentine, J. C. (eds) The Handbook of
Research Synthesis and Meta-analysis (Russell Sage Foundation,
2019).
69. Lakens, D. Calculating and reporting eect sizes to facilitate
cumulative science: a practical primer for t-tests and ANOVAs.
Front. Psychol. 4, 62627 (2013).
70. Pek, J. & Flora, D. B. Reporting eect sizes in original
psychological research: a discussion and tutorial. Psychol.
Methods 23, 208–225 (2018).
71. Sharp, C., Kaplan, R. M. & Strauman, T. J. The use of ontologies
to accelerate the behavioral sciences: promises and challenges.
Curr. Dir. Psychol. Sci. 32, 418–426 (2023).
72. Spadaro, G. et al. The Cooperation Databank: machine-readable
science accelerates research synthesis. Perspect. Psychol. Sci. 17,
1472–1489 (2022).
73. Cooper, H. & Patall, E. A. The relative beneits of meta-analysis
conducted with individual participant data versus aggregated
data. Psychol. Methods 14, 165–176 (2009).
74. Forscher, P. S. et al. The beneits, barriers, and risks of big-team
science. Perspect. Psychol. Sci. 18, 607–623 (2022).
75. Duckworth, A. L. & Milkman, K. L. A guide to megastudies. PNAS
Nexus 5, pgac214 (2022).
76. Hameiri, B. & Moore-Berg, S. L. Intervention tournaments: an
overview of concept, design, and implementation. Perspect.
Psychol. Sci. 17, 1525–1540 (2022).
77. Susmann, M., Fazio, L., Rand, D. G. & Lewandowsky, S. Mercury
Project Misinformation Intervention Comparison Study. OSF
https://doi.org/10.17605/OSF.IO/FE8C4 (2023).
78. Roozenbeek, J. et al. Susceptibility to misinformation is consistent
across question framings and response modes and better
explained by myside bias and partisanship than analytical
thinking. Judgm. Decis. Mak. 17, 547–573 (2022).
79. Reijula, S. & Hertwig, R. Self-nudging and the citizen choice
architect. Behav. Public Policy 6, 119–149 (2022).
80. Bak-Coleman, J. B. et al. Combining interventions to reduce the
spread of viral misinformation. Nat. Hum. Behav. 6, 1372–1380
(2022).
81. Bode, L. & Vraga, E. The Swiss cheese model for mitigating online
misinformation. Bull. At. Sci. 77, 129–133 (2021).
82. Milli, S., Carroll, M., Wang, Y., Pandey, S., Zhao, S. & Dragan, A.
Engagement, user satisfaction, and the ampliication of divisive
content on social media. Knight First Amend. Inst. https://perma.cc/
YUB7-4HMY (2024).
83. Willaert, T. A computational analysis of Telegram’s narrative
aordances. PLoS ONE 18, e0293508 (2023).
84. Pasquetto, I. V. et al. Tackling misinformation: what researchers
could do with social media data. Harv. Kennedy Sch.
Misinformation Rev. https://doi.org/10.37016/mr-2020-49 (2020).
85. Guess, A. M. et al. How do social media feed algorithms aect
attitudes and behavior in an election campaign? Science 381,
398–404 (2023).
Acknowledgements
We thank S. Vrtovec, F. Stock and A. Horsley for research assistance and
D. Ain for editing the manuscript and the online appendix. We also thank
J. van Bavel, W. Brady, Z. Epstein, M. Leiser, L. Oswald, J. Roozenbeek and
A. Simchon for their contributions during the workshop ‘Behavioral
interventions for promoting truth and democratic discourse in online
environments’. The study was funded by a grant from the Volkswagen
Foundation to R.H., S.L. and S.M.H. (project ‘Reclaiming individual
autonomy and democratic discourse online: how to rebalance human
and algorithmic decision making’). A.K., P.L.-S., R.H., S.L. and S.M.H.
also acknowledge funding from the EU Horizon project no. 101094752
‘Social media for democracy (SoMe4Dem)’. S.L. was supported by a
Research Award from the Humboldt Foundation in Germany and by an
ERC Advanced Grant (no. 101020961 PRODEMINFO) while this research
was conducted. U.K.H.E. was supported by an Australian Research
Council Future Fellowship (no. FT190100708). H.L. acknowledges
funding from the French Agence Nationale de la Recherche under the
Investissement d'Avenir program ANR-17-EURE-0010.
Author contributions
Conceptualization: A.K., P.L.-S., S.M.H., S.L., U.K.H.E. and R.H.
Visualization: A.K. and S.M.H. Supervision: P.L.-S., S.M.H., S.L., U.K.H.E.
and R.H. Writing—original draft: A.K., P.L.-S., U.K.H.E., M.G. and J.B.-C.
Writing—review and editing: A.K., P.L.-S., S.M.H., S.L., U.K.H.E. and
R.H. Coordinating authors: A.K., P.L.-S., S.M.H., S.L., U.K.H.E. and R.H.
Contributing authors: A.A., J.B.-C., S.B., M.B., A.J.B., C.B., J.C., L.K.F.,
M.G., A.M.G., H.H., H.L., R.M., F.P., G.P., D.G.R., S.R., J.R., P. Schmid, M.S.,
B.S.-T., P. Szewach, S.v.d.L. and S.W.
Competing interests
For studies included in the evidence overview, G.P., D.G.R. and A.J.B.
received research funding and research support through gifts from
Google and Meta. A.M.G. and A.A. received an unrestricted research
grant from Meta. L.K.F. received research funding from Meta. S.v.d.L.,
S.R. and S.L. received research funding from Google Jigsaw. S.W. and
M.S. received research funding from Google.org. All other authors
declare no competing interests.
Additional information
Supplementary information The online version contains
supplementary material available at
https://doi.org/10.1038/s41562-024-01881-0.
Correspondence and requests for materials should be addressed to
Anastasia Kozyreva.
Peer review information Nature Human Behaviour thanks Madalina
Vlasceanu and Kevin Aslett for their contribution to the peer review of
this work.
Reprints and permissions information is available at
www.nature.com/reprints.
Publisher’s note Springer Nature remains neutral with regard
to jurisdictional claims in published maps and institutional
ailiations.
Springer Nature or its licensor (e.g. a society or other partner) holds
exclusive rights to this article under a publishing agreement with
the author(s) or other rightsholder(s); author self-archiving of the
accepted manuscript version of this article is solely governed by the
terms of such publishing agreement and applicable law.
© Springer Nature Limited 2024
1Center for Adaptive Rationality, Max Planck Institute for Human Development, Berlin, Germany. 2School of Psychological Science & Public Policy
Institute, University of Western Australia, Perth, Western Australia, Australia. 3School of Psychological Science, University of Bristol, Bristol, UK.
4Department of Psychology, University of Potsdam, Potsdam, Germany. 5Department of Economics, Lahore University of Management Sciences,
Lahore, Pakistan. 6Craig Newmark Center, School of Journalism, Columbia University, New York, NY, USA. 7Department of Learning and
Instructional Sciences, University of Haifa, Haifa, Israel. 8Department of Psychology, University of Cambridge, Cambridge, UK. 9Department of
Political Science, Massachusetts Institute of Technology, Cambridge, MA, USA. 10Institute for Planetary Health Behaviour, University of Erfurt,
Erfurt, Germany. 11Bernhard Nocht Institute for Tropical Medicine, Hamburg, Germany. 12Melbourne Centre for Behaviour Change, University of
Melbourne, Melbourne, Victoria, Australia. 13Department of Psychology and Human Development, Vanderbilt University, Nashville, TN, USA.
14Department of Psychology, Humboldt University of Berlin, Berlin, Germany. 15Department of Politics and School of Public and International
Aairs, Princeton University, Princeton, NJ, USA. 16Department of Political Science, Ohio State University, Columbus, OH, USA. 17Departments of
Economics and Political Science, Instituto Tecnológico Autónomo de México, Mexico City, Mexico. 18Department of Experimental Psychology,
University of Oxford, Oxford, UK. 19IMT School for Advanced Studies Lucca, Lucca, Italy. 20Department of Psychology, Cornell University, Ithaca,
NY, USA. 21Department of Psychology, University of Regina, Regina, Saskatchewan, Canada. 22Sloan School of Management, Massachusetts
Institute of Technology, Cambridge, MA, USA. 23Department of Psychology, New York University, New York, NY, USA. 24Department of Politics,
University of Exeter, Exeter, UK. 25Centre for Language Studies, Radboud University Nijmegen, Nijmegen, the Netherlands. 26Graduate School
of Education, Stanford University, Stanford, CA, USA. 27Department of Political Science, Northeastern University, Boston, MA, USA. 28Barcelona
Supercomputing Center, Barcelona, Spain. 29These authors jointly supervised this work: Anastasia Kozyreva, Philipp Lorenz-Spreen,
Stefan M. Herzog, Ullrich K. H. Ecker, Stephan Lewandowsky, Ralph Hertwig. e-mail: kozyreva@mpib-berlin.mpg.de