(Why) Is Misinformation a Problem?

Zoë Adams1, Magda Osman2,3,4, Christos Bechlivanidis5, and Björn Meder6,7

1Department of Linguistics, School of Languages, Linguistics and Film, Queen Mary University London; 2Centre for Science and Policy, University of Cambridge; 3Judge Business School, University of Cambridge; 4Leeds Business School, University of Leeds; 5Department of Experimental Psychology, University College London; 6Department of Psychology, Health and Medical University, Potsdam, Germany; and 7Max Planck Research Group iSearch, Max Planck Institute for Human Development, Berlin, Germany

Perspectives on Psychological Science, 1–28. © The Author(s) 2023. DOI: 10.1177/17456916221141344. Article reuse guidelines: sagepub.com/journals-permissions. www.psychologicalscience.org/PPS

Corresponding Author: Magda Osman, University of Cambridge, Centre for Science and Policy. Email: m.osman@jbs.cam.ac.uk

Abstract

In the last decade there has been a proliferation of research on misinformation. One important aspect of this work that receives less attention than it should is exactly why misinformation is a problem. To adequately address this question, we must first look to its speculated causes and effects. We examined different disciplines (computer science, economics, history, information science, journalism, law, media, politics, philosophy, psychology, sociology) that investigate misinformation. The consensus view points to advancements in information technology (e.g., the Internet, social media) as a main cause of the proliferation and increasing impact of misinformation, with a variety of illustrations of the effects. We critically analyzed both issues. As to the effects, misbehaviors are not yet reliably demonstrated empirically to be the outcome of misinformation; correlation as causation may have a hand in that perception. As to the cause, advancements in information technologies enable, as well as reveal, multitudes of interactions that represent significant deviations from ground truths through people's new way of knowing (intersubjectivity). This, we argue, is illusionary when understood in light of historical epistemology. Both doubts we raise are used to consider the cost to established norms of liberal democracy that come from efforts to target the problem of misinformation.

Keywords: misinformation and disinformation, intersubjectivity, correlation versus causation, free speech
The aim of this review is to answer the question, (Why)
is misinformation a problem? We begin the main review
with a discussion of definitions of “misinformation”
because this, in part, motivated our pursuit to answer
this question. Incorporating evidence from many disci-
plines helps us to examine the speculated effects and
causes of misinformation, which give some indication
of why it might be a problem. Answers in the literature
reveal that advancements in information technology are
the commonly suspected primary cause of misinforma-
tion. However, the reviewed literature shows consider-
able divergence regarding the assumed outcomes of
misinformation. This may not be surprising given the
breadth of disciplines involved; researchers in different
fields observe effects from different perspectives. The
fact that so many effects of misinformation are reported
is not a concern as long as the direct causal link
between misinformation and the aberrant behaviors it
generates is clear. We emphasize that the evidence
provided by studies investigating this relationship is
weak. This exposes two issues: one that is empirical,
as to the effects of misinformation, and one that is
conceptual, as to the cause of the problem of misinfor-
mation. We argue that the latter issue has been over-
simplified. Uniting the two issues, we propose that the
alarm regarding the speculated relationship between
misinformation and aberrant societal behaviors appears
to be rooted in the increased opportunities through
advancements in information technology for people to
“go it alone”—that is, establish their own ways of knowing that increase deviations from ground truths.
We therefore propose our own conceptual lens from
which to understand the cause of concern that misin-
formation poses. It acknowledges modern information
technology (e.g., Internet, social media) but goes
beyond it to understand the roots of knowledge through
historical epistemology—the study of (primarily scien-
tific) knowledge, specifically concepts (e.g., objectivity,
belief), objects (e.g., statistics, DNA), and the develop-
ment of science (Feest & Sturm, 2011). The current situ-
ation is triggering alarm because it appears that the
process of knowing is undergoing a transition whereby
objectivity is second to intersubjectivity. Intersubjectiv-
ity, as we define it, is a coordination effort by two or
more people to interpret entities in the world through
social interaction. Owing to progress in information
technologies, intersubjectivity is seen as being in oppo-
sition to the way ground truths are established by tra-
ditional institutions. However, the two processes are not
necessarily at odds because, for instance, the scientific
endeavor is not itself devoid of intersubjective mecha-
nisms. Second, there is no clear evidence that intersub-
jectivity as a means for establishing truth is on the rise
outside of the fact that the Internet facilitates and
exposes more clearly the interactions people are having.
Critically, the causal connection between beliefs estab-
lished in an intersubjective manner and aberrant behav-
ior is also not established. In our concluding section,
we discuss whether the efforts to reduce misinformation
are proportionate to the actual or rather the perceived
scale of the problem of misinformation. We also propose
possible methodological avenues for future exploration
that might help to better expose causal relations between
misinformation and behavior.
Defining Misinformation and Other
Associated Terms
What is misinformation? Several definitions of “misinfor-
mation” refer to it as “false,” “inaccurate,” or “incorrect”
information (e.g., Qazvinian etal., 2011; Tan etal., 2015;
Van der Meer & Jin, 2019) and as the antonym to infor-
mation. By contrast, disinformation is false information
that is also described as being intentionally shared (e.g.,
Levi, 2018). Information shared for malicious ends—to
cause harm to an individual, organization, or country
(Wardle, 2020)—is malinformation and can be either
true (e.g., as with doxing, when private information is
publicly shared) or false. A close cousin of the term
disinformation is fake news (Anderau, 2021; Zhou &
Zafarani, 2021), a term made popular by former U.S. President Donald Trump and one that serves as a catchall for a variety of examples (Tandoc et al., 2018). Similar to disinformation, “fake news” is
defined as information presented as news that is inten-
tional and verifiably false (Allcott & Gentzkow, 2017;
Anderau, 2021). This is different from satire, parody, and
propaganda. However, rumor, often discussed alongside
hearsay, gossip, or word of mouth, is perhaps the oldest
relative in the misinformation family, with research dat-
ing back decades (for discussion, see Allport & Postman’s
The Psychology of Rumor, 1947). A companion of human-
kind for millennia, a rumor is commonly defined as
“unverified and instrumentally relevant information state-
ments in circulation” (DiFonzo & Bordia, 2007, p. 31).
Yet these terms are used differently (e.g., Habgood-
Coote, 2019; Karlova & Lee, 2011; Scheufele & Krause,
2019). For example, Wu et al. (2019) employed “misin-
formation” as an umbrella term to include all false or
inaccurate information (unintentionally spread misin-
formation, intentionally spread misinformation, fake
news, urban legends, rumor). This is arguably why
Krause et al. (2022) concluded that misinformation has
become a catchall term with little meaning. It is perhaps
not surprising that these definitions are also contentious
and that debates continue among scholars. For exam-
ple, a point of discussion is the demarcation between
information and misinformation, with some scholars
arguing that this is a false dichotomy (e.g., Ferreira
etal., 2020; Haiden & Althuis, 2018; Marwick, 2018;
Tandoc etal., 2018). The problem with drawing a line
between the two is that it ignores what it means to
ascertain the truth. Krause et al. (2020) illustrated this
in their real-world example of COVID-19, in which the
efficacy of measures such as masks was initially
unknown. Osman et al. (2022) demonstrated this with
reference to the origins of the COVID-19 virus and the
problems with prematurely labeling what was conspir-
acy and what was a viable scientific hypothesis. A con-
tinuum approach is an alternative to reductionist
classifications into true or false information (e.g.,
Hameleers etal., 2021; Tandoc etal., 2018; Vraga &
Bode, 2020), but the problem of how exactly truth is
or ought to be established remains.
Some define truth by what it is not, rather than what
it is. Paskin (2018) argued that fake news has no fac-
tual basis, which implies that truth is equated simply
to facts. Other scholars make reference to ideas such
as accuracy and objectivity (Tandoc etal., 2018) as
well as to evidence and expert opinion (Nyhan &
Reifler, 2010). Regarding what is false, many scholars
define “disinformation” as intentional sharing of false
information (e.g., Fallis, 2015; Hernon, 1995; Shin
etal., 2018; Søe, 2017), but the problem is how to
determine intent (e.g., Karlova & Lee, 2011; Shu etal.,
2017). Given how fundamental the conceptual prob-
lems are, some have proposed that it is too early to
investigate misinformation (Avital etal., 2020; Habgood-
Coote, 2019).
For the purposes of this review, a commonly agreed-
upon definition is not necessary (or likely possible).
Instead, what we have done here is graphically represent
various conceptions of information, misinformation, and
other related phenomena (Fig. 1) with reference to some
examples. We also distill what we view as a shared
essential property of most of the definitions we have
discussed here: information that departs from established
ground truths. Three things of note: First, we make no
assertion about whether the information is unintention-
ally or deliberately designed to depart from ground truth;
we note only that it does. Second, criteria for determin-
ing ground truth are evidentially problematic given the
conceptual obstacles already mentioned, so we do not
attempt this. Rather, we return to discussing the issues
around this later in the review. Third, in our view the
essential property described is dynamic—what we refer
to as dynamic lensing of information. This is necessary to reflect that, just as lensing is an optical property that can distort and magnify light, the status of information interpreted through various means (lenses) is liable to shift over time, diverging from, or converging toward,
ground truth (for an example, see Fig. 1d).
Fig. 1. Different ways of conceptualizing and contextualizing information and misinformation. In (a), we show dichotomous distinctions between information and misinformation using commonly discussed criteria from the literature (e.g., Levi, 2018; Qazvinian et al., 2011; Tan et al., 2015; Van der Meer & Jin, 2019). In (b) we show Hameleers, van der Meer, and Vliegenthart's (2021) continuum “space of (un)truthfulness” that characterizes mis- or disinformation by degrees of truth and falsehood. In (c) we illustrate Tandoc et al.'s (2018) two-dimensional space, which is used to map what the authors refer to as examples of fake news, with additional examples of terms (in yellow) that we have attempted to map onto the described dimensions. In (d), Vraga and Bode's (2020) expertise and evidence are used as criteria for contextualizing misinformation, with two examples to illustrate claims that were controversial at one time but over time became settled (either because they were verifiably true or verifiably false): Galileo's 1632 challenge of the geocentric astronomical model in favor of the heliocentric model (the Catholic Church did not officially pardon him for heresy until 1992), and Charles Dawson's (1913) discovery of the “missing link” from jaw and tooth remains in Piltdown (England) that were shown by Weiner, Oakley, and Clark (1953) to have been fabricated to simulate a fossilized skull. “Piltdown Man” was deemed a hoax by the scientific community.
The “Problem” of Misinformation
A review of the kind presented here has not yet been
conducted, and so we present this as a starting point
for future interdisciplinary reviews and meta-analyses
of the causes and effects of misinformation that could
extend or challenge the claims we are making here. To
address the title question, we concentrated on theoreti-
cal and empirical articles that explicitly reference the
effects of misinformation. We have tried to be compre-
hensive (if not exhaustive) by drawing on and synthesiz-
ing research across a wide range of disciplines (computer
science, economics, history, information science, jour-
nalism, law, media, politics, philosophy, psychology,
sociology). Our strategy involved searching for articles
on Google Scholar containing the terms “misinforma-
tion” and “fake news.” However, many of these articles
contained related terms such as “disinformation,” “rumor,” “posttruth,” “hoax,” “satire,” and “parody.”
Because the effects of “misinformation” and related
terms were rarely referred to as a problem explicitly, we
initially did not add “problem” or its synonyms to our
search terms, and instead selected the most cited arti-
cles. To capture any specific references to the problem
of misinformation, however, we examined in the second
step both “misinformation” (and related terms) and
“problem” (and related terms, such as “crisis,” “trouble,”
“obstacle,” “dilemma,” and “challenge”).
The time frame chosen depended on the term. Inter-
est in “fake news” boomed after 2016. Articles with
more than 50 citations served as a starting point (albeit
an arbitrary one) for our search between 2016 and 2022
(from a total of 25,300 search results); we allowed some
exceptions, such as the highly cited Conroy et al. (2015)
article on automatic detection of fake news. Research
on the term “misinformation” traditionally focused on
memory (e.g., Ayers & Reder, 1998; Frost, 2000) with
applications to legal settings, specifically eyewitness
studies (Wright etal., 2000). There are instances of
misinformation in the more general sense, as applied to
the news and the Internet, as early as Hernon’s (1995)
exploratory study. To make things manageable, we lim-
ited our search by examining the most cited articles
between 2000 and 2022 (from a total of 116,000 search
results). Most studies made reference to the effects of
misinformation or fake news in their introduction as
motivation for their research, or in the conclusion to
explain why their work has important implications. For
others, the aim of the research is to precisely examine
the effects of misinformation. In total, we carefully
inspected 149 articles, either because they made refer-
ences to the causes or to the effects of misinformation.
We observed that scholars broadly characterize the
effects of misinformation in two ways: societal-level
effects, which we group into four domains (media, poli-
tics, science, economics), and individual-level effects,
which are psychologically informed (cognitive, behav-
ioral). We reserve critical appraisals for the literature
on the psychological effects of misinformation because
they are fundamental to, and have direct implications
for, societal-level effects.
Societal-level effects of misinformation
We will begin with those articles in which the effects
of misinformation can be classified as general topical
areas that impact society: media, politics, science, and
economics.
Media. Determining the impact of misinformation for
media, particularly news media, depends on whether
readers can reliably distinguish between true and false
news and between biased (left- or right-leaning) and
objective content. Although some empirical studies show
that readers can distinguish between true and false news
(Barthel etal., 2016; Burkhardt, 2017; De Beer & Matthee,
2021; Posetti & Matthews, 2018; Shu etal., 2018; Waldman,
2018), Bryanov and Vziatysheva’s (2021) review indicates
that overall the evidence is mixed. Luo et al. (2022)
showed that news classified as misinformation can garner
increased attention, gauged by the number of social
media “likes” a post receives. This does not in turn imply
that the false content is judged to be credible; in fact,
there is a tendency to disbelieve both fake and true mes-
sages (Luo etal., 2022). Possible spillover effects such as
this one have been used to explain disengagement with
news media in general and distrust in traditional news
institutions (Altay et al., 2020; Axt et al., 2020; Fisher
et al., 2021; Greifeneder et al., 2021; Habgood-Coote,
2019; Levi, 2018; Lewis, 2019; Robertson & Mourão, 2020;
Shao etal., 2018; Shu etal., 2018; Steensen, 2019; Tandoc
etal., 2021; Tornberg, 2018; Van Heekeren, 2019; Wald-
man, 2018; Wasserman, 2020). For example, Axios, an
American news website, reported that between 2021 and
2022 there was a 50% drop in social-media interactions
with news articles, an 18% drop in unique visits to the five
top news sites, and a 19% drop in cable news prime-time
viewing (Rothschild & Fischer, 2022). Survey work, such
as the Edelman Trust Barometer (2019) and Gallup’s Con-
fidence in Institutions survey (2018), has shown declining
trust in news media and journalists. Consistent with this,
Wagner and Boczkowski’s (2019) in-depth interviews
indicated negative perceptions of the quality of current
news and distrust of news circulated on social media.
Duffy et al. (2019) suggested a positive outcome may be
that mistrusting news on social media will drive the pub-
lic back to traditional news sources. Along similar lines,
Wasserman (2020) claimed that misinformation provides
traditional news institutions with an opportunity to
rebuild their position as authoritative by emphasizing
verification of claims communicated to the public.
In short, declining trust in media, doubtful source cred-
ibility, and the blurring of the dichotomy between false
and true news are suspected effects of misinformation
that worry some (Kaul, 2012; Posetti & Matthews, 2018;
Steensen, 2019). A remedy for these effects is a more
nuanced appreciation of truth and a greater sense of how
claims are communicated in ways that allow for healthy
skepticism (Godler, 2020).
Politics. The overarching negative effect of misinforma-
tion comes from the argument that without an accurately
informed public, democracy cannot function (Kuklinski
et al., 2000), although there is some discussion about
whether this falls squarely back on the shoulders of news
media.1 Much of the evidence base is designed to show
how beliefs in misinformed claims impact the evaluation
of and support for particular policies (e.g., Allcott &
Gentzkow, 2017; Benkler etal., 2018; Dan et al., 2021;
Flynn etal., 2021; Fowler & Margolis, 2013; Garrett etal.,
2013; Greifeneder etal., 2020; Levi, 2018; Lewandowsky
& van der Linden, 2021; Marwick etal., 2022; Metzger
etal., 2021; Monti etal., 2019; Shao etal., 2018; Waldman,
2018). However, there is as yet no consensus on how to
precisely measure misinformation to determine its direct
effects on democratic processes (e.g., election voting,
public discourse on policies; Watts etal., 2021).
One of the strongest claimed effects of misinformation
is that it leads voters toward advocating policies that are
counter to their own interests, for example, the 2016 U.S.
election and the Brexit vote in the United Kingdom
(Bastos & Mercea, 2017; Cooke, 2017; Humprecht etal.,
2020; Levi, 2018; Monti etal., 2019; A. S. Ross & Rivers,
2018; Shu et al., 2018; Wagner & Boczkowski, 2019;
Weidner et al., 2019). Silverman & Alexander (2016)
found that the 20 top fake election-news stories gener-
ated more engagement on Facebook than the 20 top
election stories from 19 major news outlets combined.
Participants in Wagner and Boczkowski’s (2019) study
indicated how the reporting of the 2016 U.S. election
reduced their trust in media because of false news stories
associated with both political parties. Political polariza-
tion is also affected because false stories further divisions
between parties as well as reinforce support for one’s
own party (e.g., Axt etal., 2020; European Commission,
2018; Ribeiro etal., 2017; Sunstein, 2017; Vargo etal.,
2017; Waldman, 2018). However, evidence showing a
direct causal relationship between viewing fake news
and switching positions—and in turn changing the out-
come of elections and referenda—is lacking, and con-
sequently these claims have also been questioned
(Allcott & Gentzkow, 2017; Grinberg etal., 2019; Guess
etal., 2019). For the same reasons, drawing connections
between misinformation and political polarization has
been difficult, and similar challenges emerge for a vari-
ety of other alternative explanations that have been
proposed (Canen etal., 2021; Hebenstreit, 2022).
Although there currently is a strong interest in the
effects of misinformation on political issues, debates
surrounding the effects of misinformation are certainly
not new. There are documented examples as far back
as ancient Rome of how misinformation as rumor can
impact the reputations of political figures. As a result,
misinformation has been used as a form of social con-
trol to produce political outcomes, albeit with varying
success (e.g., Chirovici, 2014; Guastella, 2017). For
example, when Nicolae Ceaușescu led Romania in the 1960s and 1970s, positive rumors were used to help establish support, but negative rumors—for example, that Ceaușescu had periodic blood transfusions from children—later had the opposite effect. Attempts by mass media to censor such negative rumors failed, and as resistance intensified, the government’s response was harsh: a rumored genocide of 60,000 civilians in Timișoara. The executions of Ceaușescu and his wife in 1989 were based on charges of genocide, among other things (Chirovici, 2014).
As well as rumor, there are several examples of state
propaganda (e.g., Pomerantsev, 2015; Snyder, 2021),
alternatively referred to as disinformation campaigns.
These are also designed as a form of social control.
This runs counter to the “Millian” market of ideas, based
on Mill’s (1859) work proposing that only in a free
market of ideas are we able to arrive at the truth
(Cohen-Almagor, 1997). In fact, these examples suggest
that deliberate as well as inadvertent processes disrupt
the possibility of the best ideas surfacing to the point
of consensus (Cantril, 1938). State propaganda and rumors attempt to suppress public discourse and have been linked to negative impacts on the populace, particularly the instigation of violence and influence on voting behavior (e.g., Posetti & Matthews, 2018).
Thus far, it is not clear what the remedies are for
addressing the effects of misinformation in the domain
of politics. What is clear is that there are strong claims
regarding the effects (e.g., polarization, disengagement
with democratic processes, hostility to political figures,
and violence) that underscore why misinformation is
perceived as a threat to democracy.
Science. Misinformation in science communication (Kahan,
2017) and around science policymaking (Farrell et al.,
2019) is purported to have an effect on public under-
standing of science (Lewandowsky etal., 2017). Two top-
ics that are frequently brought up in connection with the
negative effects of misinformation are health and anthro-
pogenic climate change.
COVID-19 has been at the epicenter of many misin-
formation studies examining how it has impacted atti-
tudes, beliefs, intentions, and behavior (Ecker etal.,
2022; Kouzy et al., 2020; Pennycook et al., 2020;
Roozenbeek etal., 2020; Vraga etal., 2020). For instance,
Tasnim et al. (2020) referred to reported chloroquine
overdoses in Nigeria following claims on the news that
it effectively treats the virus (chloroquine is a pharma-
cological treatment for malaria). Misinformation was
also claimed to have resulted in hoarding and panic
buying (G. Chua etal., 2021) as well as avoiding non-
pharmacological measures (e.g., handwashing, social
distancing; Shiina etal., 2020).
Researchers have also considered the effect of
misinformation on the Zika (Ghenai & Mejova, 2017; Valecha et al., 2020) and Ebola viruses (Jin et al., 2014).
Negative health behaviors include vaccine hesitancy,
which has been most notably related to associations
between the measles, mumps and rubella (MMR) vac-
cine and autism (Dixon etal., 2015; Flynn etal., 2021;
Kahan, 2017; Kirkpatrick, 2020; Lewandowsky etal.,
2012; Pluviano etal., 2022). The actual effects that are
measured vary from self-reported behaviors in response
to inaccurate health claims to views on trust in the
news media, politics, and general damage to democ-
racy. Studies examining whether changes are possible
suggest that belief updating can be achieved when
contrary information is presented in textual form
(Desai & Reimers, 2018) or through an interactive game
(Roozenbeek & van der Linden, 2019).
The effects of misinformation also extend to the
topic of anthropogenic climate change. It has been
proposed that misinformation causes differences of
opinion on the urgency of the issue (Bolsen & Shapiro,
2016; Cook, 2019; Farrell, 2019; McCright & Dunlap,
2011). Misinformation is also used to explain the stall-
ing of political action. This is either because of a lack
of public consensus on the issue (claimed to be
informed by misinformation), or because of resistance
to addressing it, again because of apparent false claims
(Benegal & Scruggs, 2018; Conway & Oreskes, 2012;
Cook etal., 2018; Flynn etal., 2021; Lewandowsky
etal., 2017; Maertens etal., 2020; van der Linden etal.,
2017; Y. Zhou & Shen, 2021). Moreover, the effects of
misinformation do not have an equal impact on recipi-
ents because anthropogenic climate change has been
perceived to be connected with particular ideological
and political positions (Cook etal., 2018; Elasser &
Dunlap, 2013; Farrell et al., 2019; Kormann, 2018;
McCright et al., 2016).
If misinformation is indeed the sole contributor to
these effects, rather than one of several factors, then fears
around the impact of misinformation on generating false
beliefs and motivating aberrant behaviors are justified
given the potential impact on human well-being. Some
of the empirical findings presented here have also
included interventions designed to address the effects of
misinformation, primarily centered on refocusing the way
people encounter false claims and improving their ability
to scrutinize claims to determine their truth status.
Economics. For some researchers, the economic cost of
misinformation is considered significant enough to war-
rant analysis (e.g., Howell etal., 2013). Financial implica-
tions are also considered, such as, for example, how
misinformation can disrupt the stability of markets (Kogan
etal., 2021; Levi, 2018; Petratos, 2021), as well as the cost
of attempting to debunk misinformation (Southwell &
Thorson, 2015), and the policing of misinformation
online in order to develop measures to limit public expo-
sure (Gradón, 2020). For example, Canada has spent CAD$7 million to increase public awareness of misinformation (Funke, 2021), in part to stem its economic impact. Burkhardt (2017) focused on consumer behavior, on how brands inadvertently propagate misinformation, and on how doing so can present opportunities to increase profits. Because the main goal of advertising is to capture consumers’ attention, Burkhardt proposed that advertisements appearing alongside a piece of information, whether true or false, can impact purchasing behavior. Given the appeal of gos-
sip and scandal, as illustrated by media outlets such as the
National Enquirer or Access Hollywood on TV, advertisers
can thus profit off sensationalized and potentially misin-
formed claims (e.g., Han etal., 2022). Another fundamen-
tal concern is that advertisements themselves contain
misinformation (Baker, 2018; Braun & Loftus, 1998; Glae-
ser & Ujhelyi, 2010; Hattori & Higashida, 2014; Rao, 2022;
Zeng etal., 2020, 2021).
There are also real-world examples of the economic
effects of misinformation, such as how brands fall vic-
tim to unsubstantiated claims. Berthon et al. (2018)
discuss how Pepsi’s stock fell by about 4% because a
story went viral about Pepsi’s CEO, Indra Nooyi, alleg-
edly telling Trump supporters to “take their business
elsewhere.” This story has been cited by other market-
ing researchers to emphasize the adverse effects of
misinformation (e.g., Talwar etal., 2019) on reputation
management—an industry estimated to be worth $9.5
billion alone (Cavazos, 2019), excluding indirect costs
for increasing trust and transparency. Another popular
example, which aligns with Levi’s (2018) observation
on market stability, is a tweet broadcast by the Associ-
ated Press in 2013. It claimed President Obama had
been injured in an explosion, which reportedly caused
the Dow Jones stock market index to drop 140 points
in 6 min (e.g., Liu & Wu, 2020; Zhou & Zafarani, 2021).
Further examples can be found in Cavazos’s (2019)
report for software company Cheq, which concluded
that misinformation is a $78 billion problem.
In combination, these examples often tie in with
effects reported elsewhere: For instance, increased repu-
tation management is a response to the fragile trust that
we noted in relation to the media. Thus, misinformation
has been claimed to impact market behavior, consumer
behavior, and brand reputation, which in turn has eco-
nomic and financial effects on business. When advertis-
ing sits alongside misinformation in news stories, or
when misinformation is embedded in advertisements,
both are claimed to facilitate profits, but it is also pos-
sible that such a juxtaposition limits profits because of
the reputational damage to businesses.
Individual-level effects
of misinformation
Establishing the fundamental effects of the way misinfor-
mation is processed psychologically and establishing its
influence on behavior have core implications for research
proposing how the general effects are expressed in soci-
ety (e.g., engagement with scientific concepts, democratic
processes, trust in news media, and economic factors).
Therefore, we critically consider the evidential support
for the effects of misinformation on cognition and behav-
ior. We review these effects separately also because we
think that a tacit inference underlying much of the current
debate seems to go like this: 1) Experimental evidence
shows that misinformation affects beliefs in various ways.
2) While there are few experimental studies examining
the direct behavioral consequences of beliefs influenced
by misinformation, we generally “know” that beliefs affect
behavior. Hence, we can conclude that misinformation is a cause of aberrant behavior. We
are of the opinion that this line of reasoning is an over-
simplification that does not do justice to the complexity
of the problem and its serious implications for policymak-
ing. The presumed causal chain is also at odds with
research on the more complicated relations between
beliefs and behavior and different cognitive factors (e.g.,
Ajzen, 1991), which we discuss below in more detail.
Cognitive effects of misinformation. A large propor-
tion of research on misinformation has been dedicated to
examining the effects on cognition. One example is the
difficulty in revising beliefs when false claims are retracted
(debunking or continued influence effect; Chan et al.,
2017; Desai et al., 2020; Desai & Reimers, 2018; Ecker
et al., 2011; Garrett et al., 2013; Lewandowsky et al.,
2012; Newport, 2015; Nyhan & Reifler, 2010; Southwell &
Thorson, 2015; Walter & Murphy, 2018). Findings show
that presenting counterexamples (e.g., via causal expla-
nations) corrects false beliefs (e.g., Desai & Reimers,
2018; Guess & Coppock, 2018; Wood & Porter, 2018),
irrespective of group differences (e.g., age, gender, edu-
cation level, political affiliation; Roozenbeek & van der
Linden, 2019). Another option is to “inoculate” people
from misinformation (Lewandowsky & van der Linden,
2021; Roozenbeek & van der Linden, 2019). This involves
warning people in advance that they might be exposed
to misinformation and giving them a “weak dose,” allow-
ing people to produce their own cognitive “antibodies.”
However, for this and other debunking efforts, there is
also evidence of backfiring (increased skepticism to all
claims that are presented, or increased acceptance and
sharing of misinformation, as well as reduced scrutiny
and correction; e.g., Courchesne et al., 2021; Nyhan &
Reifler, 2010; Trevors & Duffy, 2020). In short, interven-
tion attempts to reduce belief in misinformed claims, as
defined by the researchers, can have unintended per-
verse effects that spill over in all manner of directions.
This likely indicates problems to do with the interven-
tions themselves as well as the nature of the claims that
are the subject of interventions.
Misinformation is claimed to have a competitive
advantage over accurate information in the attention
economy because it is not constrained by truth (Acerbi,
2019; Hills, 2019), so it is framed in sensationalist ways
to maximally capture attention (Acerbi, 2019). Hills
(2019) emphasized that the unprecedented quantities
of information available today require increased cogni-
tive selection, which in turn can lead to adverse out-
comes. Because people’s information acquisition is
constrained by their selection processes, they tend to
seek out information that is consistent with existing
beliefs—negative, social, and predictive. Selecting and
sharing information in such a manner in turn can lead
to adverse effects. For example, preferentially seeking
out and sharing negative information can lead to the
social amplification of risks, and belief-consistent selec-
tion of information can lead to polarization. These pro-
cesses in turn shape the information ecosystem, leading,
for instance, to a proliferation of misinformation.
In addition, frequent exposure to misinformation is
claimed to hinder people’s general ability to distinguish
between true and false information (Barthel etal., 2016;
Burkhardt, 2017; Grant, 2004; Newman etal., 2019; Shu
etal., 2018; Tandoc etal., 2018). However, method-
ological concerns have been raised because of over-
interpretation of questionnaire responses as indicators
of stable beliefs informed by misinformation, as well
as the likelihood of pseudo-opinions (Bishop etal.,
1980)—especially as responses can reflect bad guesses
by participants in response to unfamiliar content (e.g.,
Pasek etal., 2015).
Truth, lies, and objectivity. Studies of deception and
lie detection provide important insights into the relation-
ship between truth and misinformation, as well as peo-
ple’s ability to detect the difference. Meta-analyses show
that people’s accuracy in lie detection is barely above
chance level (Bond & DePaulo, 2006; Hartwig & Bond,
2011, 2014). Even if people can use the appropriate
behavioral markers to make judgments about what is
true, objective relations between deception and behavior
tend to be weak. In other words, catching liars is hard
because the validity of behavioral cues to lying is so low.
Not only are people bad at telling the difference
between truth and lies in others, they also have a dis-
torted sense of their own immunity to bias and false-
hoods, referred to as the objectivity illusion (Robinson
etal., 1995; L. Ross, 2018). Essentially the illusion is
expressed in such a way that if “I” take a particular
stance on a topic (including beliefs, preferences,
choices), I will view this position as one that is objec-
tive. I can then appeal to objectivity as a persuasive
mechanism to convince others of my position, along
with supporting evidence that is supposedly unbiased.
If disagreement with me ensues, and my proposed posi-
tion is rejected, the rationalization is that the other side
is both unreasonable and irrational. This is a powerful
expression of a bias that is centered on justifying a
position with reference to objectivity without accepting
that it may also be liable to bias. Pronin et al. (2002) called this the “bias blind spot.” Regardless of political
affiliation (e.g., Schwalbe etal., 2020), and even one’s
profession (e.g., scientists expert in reasoning from
evidence; Ceci & Williams, 2018), no one is immune
from the objectivity illusion.
Attributing the effects of misinformation to distortions of cognition comes with a problem. The fundamental issue
is how to establish a normative rule to define criteria
that distinguish reliably between truth and falsehood,
to determine in turn whether or not people can ade-
quately distinguish between the two. But in a world in
which no obvious diagnostic truth criteria exist, the
clichés of how truth and deception or objectivity and
bias can be discriminated are by magnitudes stronger
than any normative rule.
Behavioral effects of misinformation. The most
commonly referenced behavioral effects pertain to health
behaviors in response to false claims (e.g., antivaccine
movements, speculated vaccine-autism link, genetically
modified mosquitos and the Zika virus, COVID-19;
Bode & Vraga, 2017; Bronstein etal., 2021; Galanis etal.,
2021; Gangarosa et al., 1998; Greene & Murphy, 2021;
Joslyn etal., 2021; Kadenko etal., 2021; Loomba etal.,
2021; Muğaloğlu et al., 2022; van der Linden et al., 2020;
Van Prooijen etal., 2021; Xiao & Wong, 2020). The same
association has also been made between misinformation
associated with anthropogenic climate change and resis-
tance to adopting proenvironmental behaviors (Gimpel
etal., 2020; Soutter etal., 2020). The effects of misinfor-
mation on behavior extend to the rise in far-right plat-
forms (Z. Chen etal., 2021), religious extremism impacting
voting behavior (Das & Schroeder, 2021), disengagement
in political voting (Drucker & Barreras, 2005; Finetti etal.,
2020; Galeotti, 2020), intended voting behavior (Pantazi
et al., 2021), and advertising aligned with fake news
reports leading to increased consumer spending (Di
Domenico etal., 2021; Di Domenico & Visentin, 2020). A
further study examined the unconscious influences of
misinformation (specifically fake news) in a priming
study, demonstrating direct effects on the speed of tap-
ping responses (Bastick, 2021).
Another behavioral effect of misinformation is more
interpersonal and centers on information sharing. First,
there are those studies that examine the extent of shar-
ing behavior. Some researchers propose that this is
highly prevalent: Chadwick et al. (2018) found that
67.7% of respondents admitted sharing problematic
news on social media during the general election cam-
paign in the United Kingdom in June 2017. However,
others argue against claims that sharing is a cause for
concern (Altay etal., 2020; Grinberg etal., 2019; Guess
etal., 2019; Nelson & Taneja, 2018). Grinberg et al.
(2019) found that just 0.1% of Twitter users shared 80%
of misinformation during the 2016 U.S. election,
whereas Allen et al. (2020) estimated that misinforma-
tion comprises only 0.15% of Americans’ daily media
diet. Such estimates should be taken with a grain of
salt, because of the problematic basis on which these
estimates are made, but they do nevertheless imply that
for many citizens the signal-to-noise ratio is fairly high.
But why would people not share misinformation?
Altay et al. (2020) showed that people do not share
misinformation because it hurts their reputation and that
they would do so only if they were paid. This aligns with
the work of Duffy et al. (2019): They found that respon-
dents expressed regret when they shared news that later
turned out to be misinformation. In explaining sharing
behavior, studies suggest there are message-based char-
acteristics, such as its ability to spark discussion (X. Chen
etal., 2015), emotional appeal (Berger & Milkman, 2012;
Valenzuela etal., 2019), and thematic relevance to the
recipient (Berger & Milkman, 2012). Researchers have
also argued for social reasons to share misinformation,
that is, to entertain or please others (Chadwick etal.,
2018; X. Chen etal., 2015; Duffy etal., 2019; Pennycook
etal., 2021c), to express oneself (X. Chen etal., 2015),
to inform or help others (Duffy etal., 2019; Herrero-Diz
etal., 2020), to signal group membership (Osmundsen
etal., 2021), to achieve social validation (Waruwu etal.,
2021), and to address a fear of missing out (Talwar etal.,
2019). Studies show that these motivations can lead
people to pay less attention to the accuracy of informa-
tion because other factors play more of a salient role in
the sharing process beyond the content itself (Pennycook
etal., 2021). This reaffirms the fundamental misinforma-
tion/disinformation distinction: Sharing misinformation
involves unintentional deception driven by interactional
motivations, whereas disinformation stems from inten-
tional deception. Research indicates that there are several
motivations for sharing misinformation beyond the goal
of deliberately spreading false information to influence
others.
Last, there are individual differences that increase
the likelihood of sharing behavior and therefore lead
to adverse effects of misinformation. For instance, those
who believe that knowledge is stable and easy to
acquire (i.e., epistemologically naive) are more likely
to share online health rumors than those who believe
knowledge is fluid and hard to acquire (i.e., epistemo-
logically robust; Chua & Banerjee 2017). Another factor
that impacts one’s likelihood of sharing misinformation
is the need to instill chaos, which arises from social
marginalization and an antisocial disposition (Arcenaux
etal., 2021). From an ideological and age perspective,
conservatives are more likely to share misinformation
than liberals, and older generations are more likely to
share misinformation than younger age groups (Grin-
berg etal., 2019; Guess etal., 2019).
Beliefs, attitudes, intentions, and behavior. The logic
behind studies drawing a causal connection between
misinformation and behavior is that misinformation is
pivotal to motivating negative behaviors. In other words,
if claims that were factually inaccurate had not been
encountered, then the harmful behaviors would not have
occurred. There are two ways in which this presumed
causal relationship between misinformation and behavior
can be theorized. Either misinformation introduces new
false beliefs and attitudes, and these in turn motivate a
particular aberrant behavior, or misinformation reinforces
preexisting false beliefs and attitudes and strengthens
them enough to motivate a particular aberrant behavior
(Imhoff etal., 2022; Pennycook & Rand, 2021b; Van Bavel
etal., 2021).
Both of these hypotheses rest on a reliable relation-
ship between beliefs and attitudes and behavior. Since
Fishbein and Ajzen’s (1975) seminal work, psycholo-
gists have been interested in belief and attitude forma-
tion and with showing how it is associated with
intention and behavior. According to Ajzen’s theory of
planned behavior (Ajzen, 1991, 2012, 2020), the prin-
ciple of compatibility (Ajzen, 1988) requires an explicit
definition of the behavior, the target, the context in
which the behavior appears, and the time frame. From
this, it is then possible to apply an analysis that deter-
mines how each factor (behavior, target, context, time
frame) contributes to the target of interest. If this
approach is applied to the problem of misinformation,
we can examine its influence on a behavior of interest.
For example, after people encounter some misinforma-
tion on social media regarding climate change, when
it comes to food consumption (behavior), we could
predict that more meat is eaten (target), in a lunchtime
canteen (context), and observed within a few days of
encountering the misinformation (time frame). The
determinants of the intention to act in accordance with
the consumptive behavior involve beliefs and attitudes,
which in this case are negatively valenced. One can
then derive a belief index through the application of
an expectancy-value model to calculate the strength of
the belief (e.g., climate-change denial) multiplied by
the subjective evaluation (e.g., negative attitudes to
eating sustainably) and the outcome (e.g., not eating
sustainably in a lunchtime canteen setting). Critically,
there is a requirement to show how misinformation is
instrumental in generating the beliefs and attitudes that
can then lead to misbehaviors.
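To make the expectancy-value logic concrete, here is a minimal sketch of the standard Fishbein-Ajzen formulation; the climate-change and canteen illustrations are ours, and the belief strengths, evaluations, and weights are free parameters that would have to be estimated in any actual study, not values drawn from the literature reviewed here.

```latex
% Expectancy-value attitude index (after Fishbein & Ajzen, 1975; Ajzen, 1991).
% b_i : strength of belief i (e.g., "human-caused climate change is exaggerated")
% e_i : subjective evaluation of the outcome tied to belief i
%       (e.g., how positively or negatively "eating meat in the canteen" is judged)
A_B = \sum_{i=1}^{n} b_i \, e_i

% Behavioral intention (BI) is then modeled as a weighted function of the
% attitude (A_B), the subjective norm (SN), and perceived behavioral control (PBC):
BI = w_1 A_B + w_2\, SN + w_3\, PBC
```

On this account, showing that misinformation causes a misbehavior requires showing that exposure shifts the b_i (and hence A_B and BI) enough to change the behavior actually observed, not merely the stated intention.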
With the exception of the unconscious priming study
(Bastick, 2021), none of the cited work examining the
association between misinformation (and fake news)
and behavior shows a causal link between the two.2
None of the evidence as yet has been able to reveal the
kind of relationships needed to reliably establish the
cause of behavior via changes in belief. Why are we
making this strong critique? Much of the empirical work
relies on self-reports of intentions or on judged willing-
ness or judged resistance to behave in particular ways,
or else demonstrates correlations between the circula-
tion of misinformation and aberrant behaviors. In other
words, they are subjective judgments about behavior,
not actual direct indicators of behavior. There are some
recent meta-analyses that have examined the impact of
misinformation on sharing intentions (Pennycook &
Rand, 2021b), people’s beliefs (Walter & Murphy, 2018),
and people’s worldviews (Walter & Tukachinsky, 2020),
as well as the impact of fact-checking on political
beliefs (Walter etal., 2020) and people’s misunderstand-
ing and behavioral intentions regarding health misin-
formation (Walter etal., 2021). Again, as yet, none of
this work has been able to draw a direct connection
between misinformation and specific measurable
behavioral effects, aside from intentions to and judg-
ments about willingness to behave in a particular way.
More generally, several meta-analytic studies exam-
ined the relationship between different types of beliefs,
attitudes, and intentions on behavior (e.g., Glasman &
Albarracín, 2006; Kim & Hunter, 1993; Kraus, 1995;
Sheeran etal., 2016; Webb & Sheeran, 2006; Zebregs
etal., 2015). On the whole, the reported effects of belief
on behavior suggest that there is a relationship, but
many authors note that their analyses are limited by the
fact that they measured behavioral intentions, not
behavior itself (Gimpel etal., 2020; Kim & Hunter, 1993;
Xiao & Wong, 2020). In addition, there are weak rela-
tionships between beliefs and intentions (Zebregs etal.,
2015), and when intentions and behaviors are examined
the effects can also be weak, with many other moderat-
ing intervening factors (e.g., personality, incentives,
goals, persuasiveness of communication) explaining this
weakness (Soutter etal., 2020; Webb & Sheeran, 2006).
The observed weak relations are consistent with the
results of research on more applied issues. For instance,
in health and risk communication it is widely accepted
that the mere provision of accurate information is typi-
cally not sufficient to induce behavioral change—raising
the question of why perceiving false information should
be sufficient to induce aberrant behavior.
We return to the issue regarding evidence for the
association between misinformation and aberrant
behaviors in the concluding section, and we will
address how combinations of experimental methods
could be used to better locate potential directional rela-
tionships between misinformation and behavior.
Causes of the Problem of Misinformation
The digital age seems rife with misinformation, which
in turn is alleged to lead to several profound societal
and individual problems. A comprehensive approach
to understanding the causes of these reported effects
is therefore required. Why exactly is misinformation a
problem, given all these apparent effects?
One of the most common explanations of the causes
of misinformation in its various forms relates to the
advances in technologies that produce and distribute
information. The information ecosystem (i.e., the tech-
nological infrastructure that enables the flow of infor-
mation across individuals and groups) is assumed to
be driving the problem of misinformation, because it is the critical means by which all people now source information, and it has itself been contaminated by
misinformation (Pennycook & Rand, 2021b; Shin etal.,
2018; Shu etal., 2016). There is nothing like the digital
landscape for quick and wide dissemination of misin-
formation (Celliers & Hattingh, 2020; Lazer etal., 2018;
Moravec etal., 2018; Tambiusco etal., 2015), and it is
said to have transformed consumers into producers of
information (Ciampaglia etal., 2015; Greifeneder etal.,
2020; Kaul, 2012; Marwick, 2018), and misinformation
(Bufacchi, 2020; Levi, 2018). Others emphasize that the
sheer volume of information that is now available
encourages sharing behavior through online networks
(Bessi et al., 2015) and leads to biased information-
selection processes with potentially adverse conse-
quences (Hills, 2019).
Also seen as facilitating the proliferation of misinfor-
mation are technological tools such as recommender
systems (Fernandez & Bellogín, 2020, 2021), Web plat-
forms (e.g., Han etal., 2022), and social media (Allcott
etal., 2019; Chowdhury etal., 2021; Durodolu & Ibenne,
2020; Pennycook & Rand, 2021a, 2021b; Valenzuela,
Halpern, etal., 2019). Also, social-media environments
allow swarms of bots to disseminate or obscure infor-
mation (Bradshaw & Howard, 2017). Moreover, these
can also be used to generate Sybil attacks (Asadian &
Javadi, 2018), when a single entity emulates the behav-
iors of multiple users and attempts to create problems
both for other users and the network itself. In some
cases, sybils are used to make individuals appear influential, giving the impression that their opinions are highly
endorsed by other social-media users, when in fact this
is artificially generated (Ferrara etal., 2016). Thus, not
only is the content artificially generated, but features
used to judge its reliability, popularity, and interest from
others are also artificially manipulated.
The digitization of media also enables producers of
(mis)information to access sophisticated and extremely
convincing tools of digital forgery, such as Deepfakes,
the digital alteration of an image or video that convinc-
ingly replaces the voice, the face, and the body of one
individual for another (Levi, 2018; Steensen, 2019;
Whyte, 2020). Even accurate news of actual events can
be distorted as successive users add their own
contexts and interpretations (Peck, 2020). Opaque algo-
rithmic curation that takes humans out of the loop aims
to maximize consumption and interactions, possibly
causing viral appeal to take precedence over truthful-
ness (Rader & Gray, 2015). However, technology also offers ways to tackle misinformation: Bode and Vraga (2017) showed that comments on social media are as effective as algorithms in correcting misperceptions, and technology has also contributed to the development of tools to track misinformation (Shao et al., 2016).
The media landscape has certainly been fundamen-
tally changed by the Internet and social media, but
historians have also argued that although gossip is a
permanent and widespread feature of social exchanges
online, its transmission and diffusion still retain the
same core characteristics as in pre-Internet, pre-telecommunications, pre-printing-press eras (Darnton, 2009; Guas-
tella, 2017; Shibutani, 1966). For instance, Guastella
(2017) argued that information has always been dif-
fused through open-ended, chainlike transmission. This
hinders the ability to verify any single item of informa-
tion, because the source responsible is by its very
nature obscured. In other words, communication is
frequently disorderly and untraceable, whether the
issue at hand is a rumor in antiquity or in the present
day. Cultural historian Darnton (2009) offered a differ-
ent perspective but drew similar conclusions regarding
the role of technology in misinformation. He claimed
that we are not witnessing a change in the information
landscape but a continuation of the long-standing insta-
bility of texts. Information is no more unreliable today
than in the past, because news sources have never
corresponded to actual events.
According to these views, although new channels for
misinformation have emerged, the conditions in which
it is created and spread have not changed as much as
one might think. In fact, viewing misinformation
through a historical as well as a philosophical lens not
only sheds light on shifts regarding the medium of
information transmission but also informs how to view
the problem of misinformation in the main. We now
turn our attention to this research that explores why
misinformation may not be cause for alarm and the
scholars from different disciplines who share this
perspective.
Misinformation: unsounding the alarm?
Misinformation is not a new phenomenon (Allcott &
Gentzkow, 2017; De Beer & Matthee, 2021; Kopp etal.,
2018; Scheufele & Krause, 2019; Waldman, 2018). For
instance, fake news was recorded and disseminated
through early forms of writing on clay, stone, and papy-
rus so that leaders could maintain power and control
(Burkhardt, 2017).
Altay et al. (2021) deconstruct several alarmist miscon-
ceptions about the problem of misinformation. One is
that misinformation engulfs the Internet, specifically
social media, and as a result falsehoods spread faster than
the truth. For example, according to a BuzzFeed report
(2016), the top 20 fake news stories on Facebook resulted
in nine million instances of engagement (likes, com-
ments, shares) between August and November 2016.
However, if all 1.5 billion Internet users engaged with one piece of content a week over that same period, these 9 million instances would represent only 0.042% of all engagements (Watts & Rothschild, 2017). Also, fact-checking and science-
based evidence were found to be retweeted more than
false information during the pandemic (Pulido et al.,
2020). If, as some have claimed, misinformation has
always existed as an inherent feature of human society
(Acerbi, 2019; Allcott & Gentzkow, 2017; Nyhan, 2020;
Pettegree, 2014), then the current focus on its presence
online may reflect methodological convenience (i.e., it
can be measured more easily; Tufekci, 2014), overlooking
misinformation on television, in newspapers, and on the radio.
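The arithmetic behind the 0.042% figure can be checked directly; the roughly 14-week window and the one-engagement-per-user-per-week rate are assumptions made for illustration:

```python
# Back-of-envelope check of the engagement share discussed above; the 14-week
# window (August to early November 2016) and the rate of one engagement per
# user per week are illustrative assumptions.
internet_users = 1.5e9
engagements_per_user_per_week = 1
weeks = 14

total_engagements = internet_users * engagements_per_user_per_week * weeks
fake_news_engagements = 9e6  # top 20 fake stories on Facebook (BuzzFeed, 2016)

share = fake_news_engagements / total_engagements
print(f"{share:.3%} of all engagements")  # prints 0.043%, close to the quoted 0.042%
```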
Another misconception is conflating the volume of
content engagement with content belief. Reasons for
engaging with misinformation are numerous, from expressing sarcasm (Metzger et al., 2021) to informing others (Duffy et al., 2019); thus, inferring acceptance from consumption can exaggerate the negative effects of misinformation (Wagner & Boczkowski, 2019). Consumption of information is informed by prior beliefs (Guess et al., 2019, 2021), which suggests that it is not
strictly instrumental in the generation of new false
beliefs. People are not necessarily misinformed but may simply be uninformed, an important distinction (Scheufele & Krause, 2019), and it is at this point that pseudo-opinions can emerge (Bishop et al., 1980). Luskin and Bullock (2011) observed that 90% of surveys lack a "don't know" response option, which increases the likelihood of an incorrect answer by 9%.
These misperceptions suggest that concerns around
misinformation exceed what can be inferred from the
available evidence, and that the causal link between
misinformation and behaviors may be exaggerated
(Scheufele etal., 2021). This is also why, as Krause
et al. (2022) argued, we are not in an “infodemic.” This
term, frequently used during the COVID-19 pandemic,
refers to an alleged surplus of false information (World
Health Organization, 2022). Our complex information
ecology presents a challenge for disentangling the rela-
tionship between misinformation and behavior, and
science itself grapples with the volatility of the evidence
base as it develops (Osman et al., 2022); determining
what constitutes misinformation is therefore frequently
akin to shifting sand.
From a more historical perspective, Scheufele et al.
(2021) showed that the worry regarding the current
circulation of misinformation neglects examples predating the digitization of information. The emer-
gence of communication studies in the United States in
the 1920s is perceived to be the result of concerns over
the aberrant influence of the media (Wilson, 2015). New
media technologies were seen as responsible for the
growing disconnect between what people believed and
the real world in that era (Lippmann, 1922). Panics also
arose in reaction to the arrival of telegraphy in the early
19th century (Van Heekeren, 2019).3 Earlier still, when the invention of the printing press in 1440 granted open access to knowledge, concerns from the Catholic
Church resulted in the 1560 publication of the Index
of Forbidden Books. In all such attempts to sound the
alarm, and then to address the alarm around technologies offering the populace greater access to information, failure ensued, because ideas continue to
circulate even once speech is restricted (Berkowitz,
2021). Only when we consider misinformation through
a historical lens can we learn that the current situation
is arguably preferable, although admittedly still prob-
lematic, compared with previous information eras (Van
Heekeren, 2019), because, as explained earlier, tradi-
tional media can learn from the past and focus on veri-
fication and fact-checking as a way to reassert their
authority. Furthermore, digital media has devised means
of increasing the likelihood of self-correction (e.g.,
debunking and fact-checking websites).
The discussion in this section presents obstacles for
the argument that technological advances lead to mis-
information which, in turn, has a range of negative
ramifications. Nonetheless, if this position is pursued,
one might ask: How, then, can we make sense of why misinformation is perceived to be a problem? We
address this question with reference to historical epis-
temology, which is concerned with the process of
knowing. We argue that knowledge is currently guided
by the concept of objectivity and its associated objects,
such as statistics and DNA, but that this is a recent
development; knowledge has previously been guided
by other concepts, such as subjectivity relying on rheto-
ric. Thus, how people arrive at ground truth is not as
consistent as one might assume. This suggests that the
worries around misinformation are rooted in a per-
ceived shifting away from norms of objectivity and
empirical facts.
It is helpful to begin with the emergence of rhetorike, the art of public speaking (Sloane, 2001). As Aristotle's
Art of Rhetoric dictates, rhetoric was believed to commu-
nicate, rather than discover, truth through ethos (trustwor-
thiness), logos (logic), and pathos (emotion). Thus, the
primary vehicle for both ancient philosophy and rhetoric
was orality which, according to Guastella (2017), meant
that information was engulfed by the disorderly and unsta-
ble flow of time. Chirovici’s (2014) work on rumor
throughout the ages also revealed people’s tendency to
favor stirring the listener’s imagination over veracity.
This is exemplified in Grant’s (2004) research, which
showed how historical writing in antiquity contained
numerous instances of misinformation. For example,
Roman historian Titus Livy was known for embellishing
his accounts to such an extent that scholars debated
whether he should be regarded as a novelist or a his-
torian. According to Grant, Titus Livy was motivated to
build his depictions of events on blatantly fictitious
legends. The upshot of the oral tradition was a series
of historical accounts that were incomplete, untrust-
worthy, or entirely false.
During the Middle Ages, the process of establishing
knowledge transformed with the emergence of the
modern fact (e.g., Shapiro, 2000; Poovey, 1998; Wootton,
2015). Before the 13th century, written records regard-
ing one’s financial affairs were kept secret and stored
away in locked chests with other important documents
such as heirlooms, prayers, and IOUs (Poovey, 1998).
However, Poovey (1998) observed that double-entry
bookkeeping by merchants was a catalyst for the devel-
opment of a new “epistemological unit,” otherwise
known as the modern fact. This had two repercussions:
(a) Private information became a vehicle for public
knowledge, and (b) the status of numbers was elevated
such that they were now positively associated with
accuracy and precision, rather than negatively associ-
ated with supernaturalism and symbolism. Wootton
(2015) also referred to the critical role of double-entry
bookkeeping in the mathematization of the world. The
value of numerical descriptions produced new stan-
dards of evidence that focused on accurately measuring
natural phenomena and precisely characterizing objects
for practical use (Winchester, 2018).
The arrival of the printing press solidified a shift in
the meaning of knowing (Wootton, 2015): Facts occu-
pied the new cornerstone of knowledge and became
synonymous with truth. The assumption that facts have
always been integral to knowledge is evidenced by
their use as a reference point for ground truth among
those attempting to manage and detect misinformation
(e.g., Hui etal., 2018; Rashkin etal., 2017; Shao etal.,
2016; Tambuscio et al., 2015). Once facts became
embedded in public discourse, the role of rhetoric
faded. This is best reflected in the Royal Society’s new
motto in 1663 (nullius in verba, meaning “take nobody’s
word for it") (Wootton, 2015). As Bender and Wellbery (1990) succinctly observed, "Rhetoric drowned in a sea of ink." By the 18th century, knowl-
edge stemmed from an objective relationship between
an individual and the natural world in the form of
measurements and facts. Crucially, knowledge could
now be widely communicated to an increasingly literate
public.
This historical and philosophical research on misin-
formation, as well as studies by researchers from other
disciplines (e.g., Altay et al., 2021; Berkowitz, 2021; Krause et al., 2022; Van Heekeren, 2019), is distinct from the work reviewed earlier. This alternative position regards misinformation as neither new nor especially
concerning; it is treated as part and parcel of a variety
of linguistic and communication styles that impact
every aspect of the way people encounter information.
So if we take the position that the problem is not novel
and the way we use information to establish ground
truth is never entirely stable, then what else could be
causing the current concern over misinformation? The
work of those reviewed in this section (particularly
Altay etal., 2021; Krause etal., 2022) lays the founda-
tions for us to propose why we believe misinformation
is perceived to be a problem.
Intersubjectivity
The heart of our argument. The critical issues of misinformation can be condensed as follows: Academic literature, traditional news media (e.g., BBC, Reuters, CNN), and public policy bodies (e.g., the European Commission, the World Health Organization, the World Economic Forum) are sounding the alarm because the digital information ecosystem allows everyone to generate and distribute misinformation. Technological advances
have not only increased access to better quality informa-
tion but also created a corrupt ecosystem that enables a
greater supply of poor information—which in turn has
even greater potential for negative impact on behavior
offline.
We pose the following argument: Advances in information technology over the past few decades have increased the transparency of interactions between people. Simply observing people interact with each other online is not an issue in and of itself. However, by exposing these interactions, current information systems pave the way for a simple and erroneous inference: The vast volume of interactions must also mean there are more
opportunities for people to deviate from objectivity,
because they can more easily coordinate beliefs about
the world with one another (intersubjectively). In other
words, there is a perception that intersubjectivity is
emerging as a new way of knowing, and this threatens
established (though historically fairly young) norms of
objectivity. This leads to a tension between ground
truths that are established through traditional institu-
tions and agents that are typically considered authori-
ties presiding over ground truths, and the processes
(e.g., via social media) used to establish knowledge
outside of these institutions. The problem of misinfor-
mation as conceptualized by those sounding the alarm
is therefore the perceived misalignment between lay-
people’s coordination efforts for interpreting entities
and mechanisms for objectively establishing these
entities.
We argue that the currently available evidence does
not support a clear link between beliefs generated or
reinforced through misinformation and aberrant behav-
ior. The dynamics of belief formation are far more com-
plicated. In addition, even if intersubjectivity does
replace objectivity as the primary way of knowing,
research demonstrates that our relationship with epis-
temic concepts (e.g., objectivity, belief, probability) and
objects (e.g., metrology, statistics, rhetoric) has been
dynamic throughout history. Longing for a pre-Internet era suggests that we could recover from such current shifts in knowing by returning to some point in time when misinformation was less of a problem, but when was that? Last, some coordination within the scientific community is required to determine what evidence meets the criteria of objectivity, because this does not happen on its own, outside of group consensus. In the next section we expand on
what we mean by intersubjectivity and how it can be
a useful conceptual device for understanding why there
is such alarm associated with misinformation.
Tensions between intersubjectivity and objectivity.
We define intersubjectivity as a coordination effort
between two or more people to interpret entities in the
world (ideas, events, people, observations) through social
interaction. Our definition of intersubjectivity is informed by
prior definitions proposed in philosophy (Schuetz, 1942),
economics (Kaufmann, 1934), psychology (W. James,
1908), psychoanalysis (Bernfeld, 1941), psycholinguistics
(Rommetveit, 1979), and rhetoric (Brummett, 1976). The
core idea of intersubjectivity, according to Brummett (1976),
is that meaning arises from our interactions. Specifically, he
argued that there is no objective reality, in the sense that sensations, perceptions, beliefs, and experiences are in themselves meaningless.
Rather, it is our experiences with other people that imbue
these sensations, perceptions, beliefs, and other constructs
of knowing with meaning—namely emotional valence and
moral judgments.
How does this contrast with objectivity? According
to Reiss and Sprenger (2020), observations are objective
if they are (a) based on publicly observable phenom-
ena, (b) free from bias, and (c) accurate representations
of the world. There is not only consensus on how these
observations should be made and interpreted, but this
information and the ways by which it is obtained are
also made visible for scrutiny (e.g., in the form of aca-
demic publications). This signals that the information
has met the necessary standards while simultaneously
reinforcing them. Crucially, if these observations do
meet the criteria, they are regarded as contributing to
knowledge. Objectivity converts observations into facts,
which have become synonymous with ground truth,
and this functions to reduce uncertainty about the
world. Indeed, the fact-as-truth sense developed from
the fact-as-occurrence sense (Poovey, 1998).
Intersubjectivity as a candidate for why misinfor-
mation is a big problem. So, why is intersubjectivity
supposedly replacing objectivity, and why is this a cause
for concern with respect to misinformation? Regarding
the first question, social media is inherently about con-
necting people, so the number of visible interactions is
potentially vast. Regarding the second question, the con-
cern is that intersubjectivity is free from the formal rules
of generating objective truths. In short, it looks as though
more people are going it alone in establishing knowl-
edge, and this has now been better exposed through
advanced information technologies. Anyone can now
participate because the affordances encourage intersub-
jective knowledge production and sharing. On this view, intersubjectivity hampers the influence of facts in helping people
accurately understand the world, leading to an uncon-
trollable breeding ground for misinformation, which then
produces more aberrant behaviors.
In addition, intersubjectivity can appear to be a good
candidate for generating misinformation, and there is
research suggesting that social relationships and inter-
actions, rather than objective methods, are at the fore-
ground of knowledge. Kahan (2017) drew on Sherman
and Cohen’s (2002) identity-protective cognition, which
argues that culture is both cognitively and normatively
prior to fact. He uses this theory to explain his findings
that people are more likely to hold misconceptions if
those misconceptions are consistent with their values.
Similarly, Oyserman and Dawson (2020) argued that
during the United Kingdom referendum in 2016, people
used simple identity-based reasoning rather than com-
plex information-based reasoning to inform their voting
decision. According to Margolin’s (2021) theory of
informative fictions, there are two types of informa-
tion in any exchange: property information (object-
focused) and character information (agent-focused).
Misinformation therefore occurs when people priori-
tize character information despite the negative impact
on property information. These ideas are also reflected
in Pennycook et al.’s (2021c) study, which found that
although people care about the accuracy of the infor-
mation that they share with others, signaling political
affiliation is more important because of the social-
media context.
Perceived intersubjectivity as a candidate for why
misinformation is a big problem. We each have a
vast network of social ties, each with its own strengths
(strong/weak) and valences (positive/negative). This
complicates which claims are believed and for how long,
for two reasons. First, our relationships to others are
not static but are susceptible to transformation during an
interaction. Second, if a claim is circulating widely,
whether it is in a community or on a news website, then
it becomes available for negotiation across a multitude of
interactions with individuals. There are two conse-
quences: First, an individual’s likelihood of believing a
claim is an aggregate of their interactions; and second,
the interpretation of an entity (e.g., observation of the
world) is constantly in flux. These consequences make
intersubjectivity a candidate for explaining why there is
so much alarm about the proliferation and consequences
of misinformation. On the other hand, intersubjectiv-
ity also helps to expose why this perception is potentially
illusory.
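As a toy formalization of this idea (our own illustration, not an established model), an individual's likelihood of believing a claim can be treated as an aggregate of the tie-weighted, valenced interactions they have had about it, so that each new interaction shifts the aggregate and interpretations remain in flux:

```python
import math

# Toy formalization (our illustration): each interaction about a claim carries a
# tie strength (0-1) and a valence (+1 endorsing, -1 disputing), and belief
# likelihood is a logistic function of their weighted sum plus a prior.
def belief_probability(interactions, prior_logit=0.0):
    evidence = prior_logit + sum(strength * valence for strength, valence in interactions)
    return 1.0 / (1.0 + math.exp(-evidence))

# A claim endorsed by two strong ties and disputed by three weak ones.
interactions = [(0.9, +1), (0.8, +1), (0.2, -1), (0.3, -1), (0.1, -1)]
print(round(belief_probability(interactions), 2))  # about 0.75; shifts with each new interaction
```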
Knowledge can emerge from an interaction, but an
interaction or sharing of false information does not
equate to evidence that knowledge has been negatively
impacted in a fundamental way. Conflating the diffusion
of information with its adoption is problematic because
engagement, through likes or shares, does not auto-
matically mean belief (Altay et al., 2021). The billions
of interactions we see online (Boyd & Crawford, 2012),
whether they contain misinformation or not, may seem
concerning but do not necessarily reflect all agents’
inner state of mind (Bryanov & Vziatysheva, 2021;
Fletcher & Nielsen, 2019; Wagner & Boczkowski, 2019).
Intersubjectivity does not subscribe to the transpar-
ency of objectivity. Although scientific methods are
explicitly designed to distinguish between competing
hypotheses and, ideally, between truth and falsehood,
intersubjectivity lacks this normative appeal. However,
this does not mean that social-negotiation processes of
generation and transmission cannot have their own
corrective mechanisms, and they may draw attention
to consequential, and potentially false, claims that
require scrutiny. An example of this is the Reddit forum
ChangeMyView, where individuals post their argument
about a claim and ask users to change their minds (e.g.,
“Influencers are not only pointless, but causing active
harm to society”; “Statistics is much more valuable than
trigonometry and should be the focus in schools"). On occasion, this seems to be successful, as indicated by
the individual’s responses to the proposed counterargu-
ments, and research has been conducted into the lan-
guage of persuasion in this thread (e.g., Musi, 2018;
Priniski & Horne, 2018; Wei etal., 2016).
Intersubjectivity in public debate, especially in the
absence of simple ground truths, can resemble features
of scientific discourse (Brummett, 1976; Trafimow &
Osman, 2022). Statistics, which are widely regarded
today as a window to ground truth, were initially
doubted in the early 19th century precisely because
they were treated as another tool of rhetoric masquer-
ading as irrefutable, legitimate evidence (Coleman,
2018). Thus, the emergence and eventual domination
of statistics incorporated an element of coordination
and negotiation among scientists to establish statistics
as a valid representation of, and inference mechanism
for, observations of the world.
Although one might be tempted to consider objectiv-
ity and intersubjectivity as mutually exclusive, they both
feature in the scientific selection of ideas (Heylighen,
1997). A recent example occurred when various hypoth-
eses regarding the origins of the COVID-19 pandemic
were ruled out because they were initially regarded by
many in the scientific community as conspiratorial;
these theories were later recognized as valid hypoth-
eses (Osman et al., 2022). This demonstrates how
knowledge and claims can be in flux when they are
interpreted by different agents. It also highlights the
difficulties of being objective on a topic when the evi-
dence is accumulating in real time and when political
factors try to steer what constitutes legitimate and ille-
gitimate investigation. Moreover, there is a historical
precedent for censorship when authorities or experts
(scholars, journalists, politicians) assert themselves in
order to address the destabilization of epistemological
foundations, especially when it presents an existential
as well as a political threat (Berkowitz, 2021). Such
strategies therefore serve to maintain not just power
but perceived order in the world. Intersubjectivity and
objectivity, although different, can and have coexisted
in the past, which offers reassurance to those who are
concerned that intersubjectivity is causing chaos in the
form of misinformation (Krause et al., 2022).
Last, there needs to be a consistent way in which
deviations from ground truth are managed. If research-
ers see the present situation as more problematic than
ever before, this indicates that to them pre-Internet
society was preferable and had recovered from past
shifts in the value of epistemic concepts. In other
words, if the present is contrasted with the past to
highlight the current problem of misinformation, then
there needs to be an acknowledgment that past shifts,
which caused great concern at the time, were not as
problematic as those of the present day. Just because past shifts have not been problematic does not mean that future shifts will also be nonproblematic. It
is just that there is a lack of evidence that objectivity
has been displaced by intersubjectivity to the extent
that it is the current default mode of knowing, and that
misinformation arising from this way of knowing
impacts behavior. Although we do not deny the pos-
sibility that misinformation may be a serious problem
at some point, it seems premature to reach this diag-
nosis at the moment.
Conclusion
In reviewing the perceived problem of misinformation,
we found that many disciplines agree that the issue is
not novel, but that modern technology generates
unprecedented quantities of misinformation. This exac-
erbates its potential to cause harm at both the individual and the societal level. The hyperconnectivity
of today’s information and (social) media landscape is
seen as facilitating the generation and distribution of
misinformation. In turn, this leads to a perceived
increase in establishing worldviews intersubjectively,
which deviates from the epistemic objectivity heralded
by traditional institutions and gatekeepers of knowledge
and truth. We have proposed our own analysis on the
fundamentals of the problem of misinformation, and
through the lens of intersubjectivity we have provided
a conceptual framework to argue two essential points.
First, today’s informational ecosystem is the mecha-
nism by which we can observe transparently the mul-
titude of interactions that it hosts. Seeing the volume
of daily interactions in, for instance, social-media net-
works, leads to the inference that more people are
deviating from truth, because they can better coordinate
their own subjective knowledge of the world outside
of established facts—that is, intersubjectively. We argue
that the fact that technology is increasing the number of visible interactions does not necessarily suggest that there is anything profoundly new about the status of epistemic objects. The interactions people have when sharing what they think and feel are not equivalent to evidence that epistemic objects in and of themselves
have changed. This, we propose, explains why misin-
formation is viewed as an existential threat. Further, in
our view, the current evidence is also not sufficient to
justify this conclusion because correlation is often
viewed as causation.
Second, historical epistemology exposes what is
often ignored: The generation of knowledge is a process
of establishing conventions for the best way to arrive
at ground truths. We make this point from a realist, not
a relativist, perspective. There are objective facts. None-
theless, historical epistemology shows that, in the absence of stable diagnostic criteria for ground truth, the mechanism by which objective facts
are acquired, characterized, and communicated has
changed over the course of human history. This, we
propose, explains why misinformation cannot be exam-
ined without the recognition that distortion affects any
act of communicating truth.
What does future work on
misinformation need to consider?
Future work needs to address three key issues. First, it
must show that misinformation in a particular context
of interest has the widespread potential to establish or
significantly strengthen related beliefs in a considerable
manner. This is important because the Internet and
social media may seem rife with misinformation, but
reliable estimates on its prevalence and impact on
recipients are hard to come by (e.g., Allen et al., 2020; Altay et al., 2021). Thus, precisely mapping the (social)
media landscape to gauge the extent of the problem is
an important first step in understanding its potential
impact on individuals’ beliefs and public discourse. Of
course, even if the sheer amount of misinformation is
small compared with legitimate information, it might
still have a severe impact on people’s beliefs and atti-
tudes, as well as on their evaluation of evidence. Again,
however, it can be debated to what extent misinforma-
tion influences the public’s beliefs on a large scale (e.g.,
British Royal Society, 2022). Thus, the jury is still out
regarding the severity of the problem.
A second key issue concerns the role of misinforma-
tion in societally harmful behaviors and whether mis-
information constitutes a major factor in individuals’
actions. The assumption is that there is a direct causal
link between the prevalence and consumption of mis-
information and subsequent harmful behaviors. To date,
however, this link has not been sufficiently demon-
strated, and further research is required into this issue
(see below).
Third, when evidence for misinformation leading to
harmful behavior at the individual and societal level
can be reliably shown, a critical question is what tools
and strategies are best suited to address the problem.
This issue resonates with broader discussions surround-
ing behavior-change techniques in general (Hertwig
& Grüne-Yanoff, 2017; Osman et al., 2020), as well as in digital environments (British Royal Society, 2022; Kozyreva et al., 2020), and the limited efficacy of these techniques in generating reliable behavioral change (e.g., Osman et al., 2020; Trafimow & Osman, 2022). In
addition, organizations can resort to regulatory mecha-
nisms that serve as more substantial approaches to
addressing misinformation. We discuss those in more
detail later in this section.
Approaches to investigating causal links
between misinformation and behavior
A key question for science, policymaking, and public
debate is the nature and strength of the relation between
misinformation and behavior. Sometimes a simple causal chain of the form misinformation → beliefs → aberrant behavior is presumed. This is despite (a) a
weak empirical basis; (b) deep conceptual problems
arising from neglecting the dynamics of belief genera-
tion (as well as the changing status of what is deemed
by consensus of various authorities as illegitimate and
legitimate claims; e.g., Osman et al., 2022); and (c)
complex relations between beliefs and behavior.
Another line of argument to support the presumed
causal relation between misinformation and aberrant
behavior is to refer to major events (e.g., the speculated
role of mis- and disinformation spread before the Janu-
ary 6, 2021, riots at the U.S. Capitol). We emphasize the
importance of documenting and carefully analyzing
such cases, as well as their relevance to academic,
political, and public debate. At the same time, we think
that such events do not constitute, on their own, a suf-
ficiently robust evidence base for guiding policymak-
ing. Just as cases of adverse side effects after a
vaccination require further examination and call for
large-scale studies to assess the magnitude of the prob-
lem, we call for more systematic research on the rela-
tion between the consumption of misinformation and
resulting behaviors. This is also of importance to gauge
the severity of the problem, identify key mechanisms
and vulnerable areas, and take appropriate measures
to limit its consequences.
For misinformation to be the cause of aberrant
behavior, it needs to either reinforce or introduce false
beliefs. Consequently, future investigations could draw
from established methods examining belief conviction
(especially of false beliefs) and choice blindness (Hall
etal., 2013). In the clinical domain, work has shown
how individual differences account for strength in
maintaining false and delusional beliefs in light of con-
trary evidence (e.g., Combs et al., 2006). This is
informed by deconstructing measures of beliefs to
examine the association between the belief and convic-
tion in it. Beliefs are treated as multifactorial, which in
combination with how they are used helps researchers
to develop measurement tools of belief conviction
(Abelson, 1988). Items examine strength of beliefs,
length of time the beliefs have been held, frequency of
thoughts, personal importance of the beliefs, and per-
sonal concern in the beliefs, as well as willingness to
commit personal time in pursuit of those beliefs (e.g.,
Conviction of Delusional Beliefs Scale, Combs et al., 2006; Brown Assessment of Beliefs Scale, Eisen et al., 1998). Work has demonstrated that combining these items into metrics of conviction can yield useful predictors of
individual differences in the durability of attitudes and
beliefs over time, whether they are false or true beliefs
(Lecci, 2000). Why is this important? Using measure-
ment tools that expand the way beliefs are character-
ized is one important way to gauge the extent to which
encountering misinformation should be a cause for
worry. Even if people come, by association, to hold false beliefs, they may have next to no conviction in them. But when they do have conviction, this is a
predictive factor as to whether they will later act on
the false beliefs they hold.
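A minimal sketch of how conviction items of this kind might be combined into a composite score follows; the item names and the 1-to-7 scoring are hypothetical simplifications of the instruments cited above (Combs et al., 2006; Eisen et al., 1998):

```python
from statistics import mean

# Hypothetical conviction composite: mean of 1-7 ratings across item facets
# (strength, duration, frequency of thought, importance, willingness to act).
CONVICTION_ITEMS = ["strength", "duration", "thought_frequency",
                    "personal_importance", "willingness_to_act"]

def conviction_score(responses):
    """Mean of available 1-7 ratings across the conviction items."""
    ratings = [responses[item] for item in CONVICTION_ITEMS if item in responses]
    return mean(ratings)

# Two respondents endorse the same false claim but differ sharply in conviction.
low = {"strength": 2, "duration": 1, "thought_frequency": 2,
       "personal_importance": 1, "willingness_to_act": 1}
high = {"strength": 7, "duration": 6, "thought_frequency": 6,
        "personal_importance": 7, "willingness_to_act": 6}
print(conviction_score(low), conviction_score(high))  # 1.4 vs 6.4
```

On this kind of measure, only the second respondent would be expected to act on the belief, which is the distinction the argument above turns on.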
By extension, the choice-blindness paradigm is
another way of examining conviction, and it has often been used to examine the strength of people's positions on policies that predict affiliation with, and votes for, particular political parties (e.g., Hall et al., 2013; Strandberg et al., 2019).
The paradigm presents people with various political
issues (e.g., gun control, abortion, immigration, tax
reform), and asks them to indicate where on a scale
they align (e.g., from strongly for tax increases on gas
to strongly against tax increases on gas). Then through
sleight of hand, some responses on the scale are
changed to the opposite of the participant’s original
position. Participants are then asked who they would
vote for, while also reviewing their (doctored) positions
on political issues. The paradigm reveals that people
often do not correct the change, but when they do, this
is predicted by the extremity of their initial positions.
This paradigm offers a way to show when some beliefs
that appear to be significant are in fact fragile and can
be easily flipped in a simple manipulation, but when
others are in fact stable, particularly when they are
extreme. The choice-blindness paradigm is another way
to detect the extent to which people are discerning
about their own beliefs and the conviction in them.
Taken together with work from studies on belief con-
viction in the clinical domain, this suggests efforts to
examine the link between misinformation and behavior
should concentrate on specific groups that already hold
extreme beliefs because they are already likely to be
stable over time (e.g., Stille et al., 2017). They are likely
to be held with conviction—that is, they are already
liable to be acted upon. A strong test of the causal link
between misinformation and behavior is to expose the
extent to which new misinformation that elaborates on
or reinforces extreme false beliefs then generates aber-
rant behaviors. From this it would be possible to deter-
mine whether a new unit of misinformation is the
tipping point to instigating aberrant actions, indepen-
dent of the propensity to act on already-held beliefs
with conviction—beliefs that by necessity need to be
held stably over time to motivate behavior.
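The logic of the paradigm can be illustrated with a toy simulation; the detection probabilities and the size of the extremity effect are assumed parameters, not estimates from Hall et al. (2013):

```python
import random

random.seed(1)

# Toy simulation of the choice-blindness logic described above (all parameters
# are our assumptions): positions are rated on a -3..+3 scale, some are covertly
# flipped, and the chance of noticing the flip grows with the rating's extremity.
def notices_flip(rating, base=0.2, gain=0.25):
    return random.random() < min(1.0, base + gain * abs(rating))

ratings = [random.choice(range(-3, 4)) for _ in range(1000)]
flipped = [(r, notices_flip(r)) for r in ratings if r != 0]  # only non-neutral positions are flipped

for extremity in (1, 2, 3):
    subset = [noticed for r, noticed in flipped if abs(r) == extremity]
    print(f"extremity {extremity}: corrected {sum(subset) / len(subset):.0%} of flips")
```

In this setup, flips of extreme positions are corrected far more often than flips of mild ones, which is the pattern that would mark a belief as held with conviction.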
Social-level approaches to investigating the problem
of misinformation focus on large-scale units of behavior
(e.g., markets, distrust in traditional news media, polar-
ization, voting behavior) to show how misinformation
has aggregate effects. It is even harder, then, to avoid
overinterpreting correlation as causation at such a broad level. One approach would be multidisciplinary, building on historical, sociological, linguistic, and philosophical disciplines to consider how the accumulation of particular claims shapes the collective narratives of
the populace. Some of these narratives could be mis-
informed. Correspondingly, methodological and theo-
retical approaches in cultural evolution, which investigate
the mechanism of cultural transmission (Cavalli–Sforza
& Feldman, 1981), could simulate the evolution of
beliefs across populations over time (Efferson et al.,
2020; McKay & Dennett, 2009; Norenzayan & Atran,
2004). Adopting such approaches would help to offer
a first pass at investigating mechanisms at the group level that shape and change how some beliefs (false or otherwise) are preserved and strengthened over time and
why others fail to survive. An agreed set of behaviors
would need to be mapped onto this to determine the
relationship between the transmission of beliefs (false
or otherwise) across populations and corresponding
changes in behavior over the same period, isolated from
other potential contributing factors (e.g., economic,
geopolitical). Again, we would need to acknowledge
that this would not determine how any single misinfor-
mation claim can directly cause a change in behavior,
but it would help to show how, on aggregate, claims
of a particular kind become popular and how, on aggre-
gate, they contribute to the formation of narratives that
people in their day-to-day lives use to frame their
beliefs about the world. The analysis of the formation
of narratives could compare those formed in news
media or on social-media platforms with those formed
in day-to-day, face-to-face interactions in social gather-
ings. It is likely that political, economic, and social narratives formed in news media, on social media, and in offline social interactions will, over time, either depart from or converge with one another.
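To illustrate the kind of group-level simulation suggested above, the following minimal transmission sketch (all parameters are our assumptions rather than a published model) shows how a small bias toward adopting an attractive but false claim can spread it through a population of copying agents:

```python
import random

random.seed(7)

# Minimal population-level transmission sketch: agents repeatedly copy a randomly
# observed member's belief, with a small bias toward adopting the "attractive"
# false variant. Population size, steps, and bias are illustrative assumptions.
N_AGENTS, STEPS, BIAS = 500, 50, 0.05

beliefs = [True] * 10 + [False] * (N_AGENTS - 10)  # True = holds the false belief

for step in range(STEPS + 1):
    if step % 10 == 0:
        print(f"step {step:2d}: {sum(beliefs) / N_AGENTS:.1%} hold the false belief")
    new_beliefs = []
    for _ in range(N_AGENTS):
        observed = random.choice(beliefs)  # copy a random member of the population
        adopts = True if random.random() < BIAS else observed
        new_beliefs.append(adopts)
    beliefs = new_beliefs
```

Extending such a sketch with an agreed mapping from beliefs to behaviors, and with competing narratives transmitted through different channels, would be the kind of first pass at group-level mechanisms described above.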
Regulatory responses
to misinformation
Possible tools to address misinformation include
restricting freedom of speech through governmental or
private institutions (e.g., removing content or banning
people and institutions from social-media platforms),
using fact-checking tools and related means to debunk
misinformation, and implementing strategies to “inocu-
late” people against the influence of misinformation.
Nonetheless, many of these tactics, such as content moderation, require interpretation and agreement
as to what distinguishes legitimate information from
misinformation (Heldt, 2019). Moreover, to what extent
are such methods justifiable? Whether measures are adequate and effective is of particular relevance because the foundation of an open and free society is diversity in thought, worldviews, and values. Naturally, this implies mutual respect for opinions and actions even if one disagrees with them and even if they conflict with
scientific consensus—if people hold the belief that the
Earth is flat or if they avoid walking under a leaning
ladder to prevent bad luck, they have the right to do
so. So what are the criteria for deeming beliefs illegitimate without, for instance, reference to criminal acts?
Freedom of speech and expression is a universal
human right in many countries, but it rarely, if ever, is
absolute. Society operates within common limitations
of free speech and expression that include defamation,
threat, incitement, libel, and matters of public order or
national security. Typically, justifications of these limita-
tions refer to the harm principle (Mill, 1859): the prem-
ise that freedom of expression and actions of an
individual can only be rightfully restricted to prevent
harm to others. Tacitly, the harm principle also under-
scores much of the current debate on misinformation:
the assumption that the surge of misinformation is, at
least probabilistically, a cause of actions that are harm-
ful to others and society. Thus, a critical question is to what extent the generation and distribution of misinformation, when linked to societally harmful actions, justifies regulatory measures and legislative actions that limit freedom of speech beyond the restrictions that already exist.
When it comes to dealing with scientific misinforma-
tion (as opposed to strictly illegal content, such as child
pornography or violent extremist propaganda), there
is both doubt about the effectiveness of such measures
and worries that they could backfire, for instance by
further increasing distrust toward governmental regula-
tions or scientific institutions (British Royal Society,
2022). Indeed, prohibiting the expression of an idea does
not eliminate its darker influence on knowledge, and it
shows how some authorities think they are worthier of
expressing themselves than others are (Berkowitz, 2021).
Controlling the flow of information assumes that tradi-
tional institutions are generally immune from making
errors, and this in turn can have negative effects on citi-
zens’ level of trust in them.
Outside of our own analysis of the problem of mis-
information, even if we subscribe to the view that it is
an existential threat, any regulatory action that is taken
to address the problem still needs to be informed by
robust evidence. For this reason, we considered the
issues that research needs to address and which meth-
odological approaches could offer new insights into
the problem of misinformation. Our position is that any
regulatory interventions should aim at empowering
people and helping them to navigate both traditional
and online information landscapes without posing the
risk of eroding the foundations of an open and demo-
cratic society.
Finally, an issue not addressed in this review is the
moral imperative that underpins much of the concern
around misinformation. The logic goes something like
this: There is truth, however defined, and deviating
from truth is morally reprehensible, given the potential
such deviation has for creating aberrant behaviors. A
deeper analysis is required about the new moral land-
scape we face that encourages simple categorizations
as to who is virtuous for believing a particular view
and who is bad for believing otherwise. More to the
point, if the moral imperative is used to justify punitive
actions, then are the grounds for this based on having
a false belief, or acting in an immoral way? Aberrant
behaviors may indeed be informed by false beliefs
(among many other factors), but we maintain that false
beliefs are often not stable, and even when they are,
they do not inevitably lead to aberrant behavior.
Transparency
Action Editor: Klaus Fiedler
Editor: Klaus Fiedler
Declaration of Conflicting Interests
The author(s) declared that there were no conflicts of
interest with respect to the authorship or the publication
of this article.
ORCID iD
Magda Osman https://orcid.org/0000-0003-1480-6657
Notes
1. On the basis of Jones's (2016) findings, Axt et al. (2020) claimed
that democratic damage stems from media distrust, which sug-
gests that political consequences are perceived as secondary.
The logic of the argument here is that misinformation leads to
distrust in news media, and that in turn leads to disengage-
ment in democratic and political processes. In fact, even the
terms “misinformation” and “fake news” are claimed to promote
antidemocratic ideology (e.g., Habgood-Coote, 2019; Levi, 2018)
because introducing the idea that false claims are circulated in
traditional news media is sufficient cause for generating mistrust
and potential disengagement in democratic processes.
2. It is worth highlighting that through the priming paradigm
that Bastick (2021) used, any potential association between mis-
information (in their case, fake news) and behavior amounted
to finger-tapping behaviors. So it would be hard to infer from
this what the effect of fake news is on aberrant behaviors, given
that the behavioral measure here was simply finger tapping,
which in and of itself is not aberrant.
3. The arrival of a global telegraph network using cables, wires,
and relay stations connected people from around the world.
However, in 1858, three days after the first successful test of
the cable that linked North America and Europe, an article was
published in the New York Times: “So far as the influence of the
newspaper upon the mind and morals of the people is con-
cerned, there can be no rational doubt that the telegraph has
caused vast injury.” In 1961, an article in the Times argued that
people “mourn the good old times when mails came by steamer
twice a month” (LaFrance, 2014).
References
Abelson, R. P. (1988). Conviction. American Psychologist,
43(4), 267–275.
Acerbi, A. (2019). Cognitive attraction and online misinforma-
tion. Palgrave Communications, 5(1), 1–7.
Ajzen, I. (1988). Attitudes, personality, and behavior. Open
University Press.
Ajzen, I. (1991). The theory of planned behavior. Organi-
zational Behavior and Human Decision Processes, 50(2),
179–211.
Ajzen, I. (2012). The theory of planned behavior. In P. A. M.
Lange, A. W. Kruglanski, & E. T. Higgins (Eds.), Handbook
of theories of social psychology (Vol. 1, pp. 438–459). SAGE.
Ajzen, I. (2020). The theory of planned behavior: Frequently
asked questions. Human Behavior and Emerging Tech-
nologies, 2(4), 314–324.
Allcott, H., & Gentzkow, M. (2017). Social media and fake news
in the 2016 election. Journal of Economic Perspectives,
31(2), 211–236.
Allcott, H., Gentzkow, M., & Yu, C. (2019). Trends in the
diffusion of misinformation on social media. Research &
Politics, 6(2), 2053168019848554.
Allen, J., Howland, B., Mobius, M., Rothschild, D., & Watts, D. J.
(2020). Evaluating the fake news problem at the scale
of the information ecosystem. Science Advances, 6(14),
Article eaay3539.
Allport, G., & Postman, L. (1947). The psychology of rumor.
Henry Holt and Company.
Altay, S., Berriche, M., & Acerbi, A. (2021). Misinformation
on misinformation: Conceptual and methodological chal-
lenges. https://psyarxiv.com/edqc8
Altay, S., Hacquin, A.-S., & Mercier, H. (2020). Why do so
few people share fake news? It hurts their reputation.
New Media & Society, 24(6), 1303–1324. https://doi
.org/10.1177/1461444820969893
Anderau, G. (2021). Defining fake news. KRITERION–Journal
of Philosophy, 35(3), 197–215.
Arceneaux, K., Gravelle, T. B., Osmundsen, M., Petersen, M. B.,
Reifler, J., & Scotto, T. J. (2021). Some people just want
to watch the world burn: The prevalence, psychol-
ogy and politics of the 'Need for Chaos.' Philosophical
Transactions of the Royal Society B, 376(1822), Article
20200147.
Asadian, H., & Javadi, H. H. S. (2018). Identification of Sybil
attacks on social networks using a framework based on
user interactions. Security and Privacy, 1(2), Article e19.
Avital, M., Baiyere, A., Dennis, A., Gibbs, J., & Te’eni, D.
(2020). Fake news: What is it and why does it matter?
In The Academy of Management Annual Meeting 2020:
Broadening Our Sight.
Axt, J., Landau, M., & Kay, A. (2020). The psychological appeal
of fake-news attributions. Psychological Science, 31(7),
848–857.
Ayers, M. S., & Reder, L. M. (1998). A theoretical review of
the misinformation effect: Predictions from an activation-
based memory model. Psychonomic Bulletin & Review,
5(1), 1–21.
Baker, S. (2018, December 12). Subscription traps and decep-
tive free trials scam millions with misleading ads and fake
celebrity endorsements. Better Business Bureau. https://
www.bbb.org/globalassets/local-bbbs/st-louis-mo-142/
stlouismo142/studies/bbbstudy-free-trial-offers-and-sub
scription-traps.pdf
Edelman Trust Barometer. (2019, February). Edelman Trust Barometer global report. Retrieved November 2, 2021.
Barthel, M., Mitchell, A., & Holcomb, J. (2016). Many
Americans believe fake news is sowing confusion. Pew
Research Center’s Journalism Project. https://policycom
mons.net/artifacts/618138/many-americans-believe-fake-
news-is-sowing-confusion/1599054/
Bastick, Z. (2021). Would you notice if fake news changed
your behavior? An experiment on the unconscious effects
of disinformation. Computers in Human Behavior, 116,
Article 106633.
Bastos, M. T., & Mercea, D. (2017). The Brexit botnet and user-
generated hyperpartisan news. Social Science Computer
Review, 37(1), 38–54.
Bender, J., & Wellbery, D. (Eds.). (1990). The ends of rhetoric:
History, theory, practice. Stanford University Press.
Benegal, S. D., & Scruggs, L. A. (2018). Correcting misinfor-
mation about climate change: The impact of partisan-
ship in an experimental setting. Climatic Change, 148,
61–80.
Benkler, Y., Faris, R., & Roberts, H. (2018). Network propa-
ganda: Manipulation, disinformation, and radicalization
in American politics. Oxford University Press.
Berger, J., & Milkman, K. L. (2012). What makes online con-
tent viral. Journal of Marketing Research, 49(2), 192–205.
Berkowitz, E. (2021). Dangerous ideas: A brief history
of censorship in the West, from ancients to fake news.
Westbourne Press.
Bernfeld, S. (1941). The facts of observation in psychoanaly-
sis. The Journal of Psychology, 12(2), 289–305.
Berthon, P., Treen, E., & Pitt, L. (2018). How truthiness, fake
news and post-fact endanger brands and what to do about
it. Brands and Fake News, 10(1), 18–24.
Bessi, A., Coletto, M., Davidescu, G. A., Scala, A., Caldarelli, G.,
& Quattrociocchi, W. (2015). Science vs conspiracy:
Collective narratives in the age of misinformation. PLOS
ONE, 10(2), Article e0118093.
Bishop, G. F., Oldendick, R. W., Tuchfarber, A. J., & Bennett,
S. E. (1980). Pseudo-opinions on public affairs. Public
Opinion Quarterly, 44(2), 198–209.
Bode, L., & Vraga, E. (2017). See something, say something:
Correction of global health misinformation on social media.
Health Communication, 33(9), 1131–1140.
Bolsen, T., & Shapiro, M. A. (2016). The US news media,
polarization on climate change, and pathways to effective
communication. Environmental Communication, 12(2),
149–163.
Bond, C. F., Jr., & DePaulo, B. M. (2006). Accuracy of decep-
tion judgments. Personality and Social Psychology Review,
10, 214–234.
Boyd, D., & Crawford, K. (2012). Critical questions for big
data: Provocations for a cultural, technological, and
scholarly phenomenon. Information, Communication &
Society, 15(5), 662–679.
Bradshaw, S., & Howard, P. N. (2017). Troops, trolls and trou-
blemakers: A global inventory of organized social media
manipulation (Working Paper No. 2017.12, p. 37). Project
on Computational Propaganda. http://comprop.oii.ox.ac
.uk/2017/07/17/troops-trolls-and-trouble-makers-a-glo
balinventory-of-organized-social-media-manipulation/
Braun, K. A., & Loftus, E. F. (1998). Advertising’s misinforma-
tion effect. Applied Cognitive Psychology, 12(6), 569–591.
British Royal Society. (2022). The online information envi-
ronment: Understanding how the internet shapes people’s
engagement with scientific information. https://royalso
ciety.org/topics-policy/projects/online-information-
environment
Bronstein, M., Kummerfeld, E., MacDonald, A., III, &
Vinogradov, S. (2021). Investigating the impact of anti-
vaccine news on SARS-CoV-2 vaccine intentions (SSRN
3936927).
Brummett, B. (1976). Some implications of “process” or “inter-
subjectivity”: Postmodern rhetoric. Philosophy & Rhetoric,
9(1), 21–51.
Bryanov, K., & Vziatysheva, V. (2021). Determinants of indi-
viduals’ belief in fake news: A scoping review determi-
nants of belief in fake news. PLOS ONE, 16(6), Article
e0253717.
Bufacchi, V. (2020). Truth, lies and tweets: A consensus the-
ory of post-truth. Philosophy & Social Criticism, 47(3),
347–361.
Burkhardt, J. (2017). History of fake news. Library Technology
Reports, 53(8), 5–9.
BuzzFeed. (2016). This analysis shows how viral fake election news stories outperformed real news on Facebook.
https://www.buzzfeednews.com/article/craigsilverman/
viral-fake-election-news-outperformed-real-news-on-
facebook
Canen, N. J., Kendall, C., & Trebbi, F. (2021). Political parties
as drivers of US polarization: 1927-2018 (No. w28296).
National Bureau of Economic Research.
Cantril, H. (1938). Propaganda analysis. The English Journal,
27(3), 217–221.
Cavalli–Sforza, L. L., & Feldman, M. W. (1981). Cultural trans-
mission and evolution: A quantitative approach. Princeton
University Press.
Cavazos, R. (2019). The economic cost of bad actors on the
internet. CHEQ. https://s3.amazonaws.com/media.media
post.com/uploads/EconomicCostOfFakeNews.pdf
Ceci, S. J., & Williams, W. M. (2018). Who decides what
is acceptable speech on campus? Why restricting free
speech is not the answer. Perspectives on Psychological
Science, 13(3), 299–323.
Celliers, M., & Hattingh, M. (2020). A systematic review on fake
news themes reported in the literature. In M. Hattingh,
M. Matthee, H. Smuts, I. Pappas, Y. K. Dwivedi, & M.
Mäntymäki (Eds.), Responsible design, implementation
and use of information and communication technology.
I3E 2020. Lecture Notes in Computer Science (Vol. 12067,
pp. 223–234). Springer. https://doi.org/10.1007/978-3-
030-45002-1_19
Chadwick, A., Vaccari, C., & O’Loughlin, B. (2018). Do tabloids
poison the well of social media? Explaining democrati-
cally dysfunctional news sharing. New Media & Society,
11, 4255–4274.
Chan, M.-P. S., Jones, C. R., Hall Jamieson, K., & Albarracín,
D. (2017). Debunking: A meta-analysis of the psycho-
logical efficacy of messages countering misinformation.
Psychological Science, 28(11), 1531–1546.
Chen, X., Sin, S.-C. J., Theng, Y.-L., & Lee, C. S. (2015).
Why students share misinformation on social media:
Motivation, gender, and study-level differences. The
Journal of Academic Librarianship, 41, 583–592.
Chen, Z., Meng, X., & Yu, W. (2021). Depolarization in the
rise of far-right platforms? A moderated mediation model
on political identity, misinformation belief and voting
behavior in the 2020 US Presidential Election. In 2021
International Association for Media and Communica-
tion Research Conference (IAMCR 2021): Rethinking
Borders and Boundaries.
Chirovici, E. (2014). Rumors that change the world: A history
of violence and discrimination. Lexington Books.
Chowdhury, N., Khalid, A., & Turin, T. C. (2021). Understanding
misinformation infodemic during public health emergen-
cies due to large-scale disease outbreaks: A rapid review.
Journal of Public Health, 1–21.
Chua, A. Y. K., & Banerjee, S. (2017). To share or not to share:
The role of epistemic belief in online health rumours.
International Journal of Medical Informatics, 108, 36–41.
Chua, G., Yuen, K. F., Wang, X., & Wong, Y. D. (2021). The
determinants of panic buying during COVID-19. Inter-
national Journal of Environmental Research and Public
Health, 18(6), Article 3247.
Ciampaglia, G. L., Flammini, A., & Menczer, F. (2015). The
production of information in the attention economy.
Scientific Reports, 5, Article 9542.
Cohen-Almagor, R. (1997). Why tolerate? Reflections on the
Millian truth principle. Philosophia, 25(1–4), 131–152.
Coleman, S. (2018). The elusiveness of political truth: From
the conceit of objectivity to intersubjective judgment.
European Journal of Communication, 33(2), 157–171.
Combs, D. R., Adams, S. D., Michael, C. O., Penn, D. L., Basso,
M. R., & Gouvier, W. D. (2006). The conviction of delu-
sional beliefs scale: Reliability and validity. Schizophrenia
Research, 86(1–3), 80–88.
Conroy, N. K., Rubin, V. L., & Chen, Y. (2015). Automatic
deception detection: Methods for finding fake news.
Proceedings of the Association for Information Science
Technology, 52(1), 1–4.
Conway, E. M., & Oreskes, N. (2012). Merchants of doubt: How
a handful of scientists obscured the truth on issues from
tobacco smoke to global warming. Bloomsbury.
Cook, J. (2019). Understanding and countering misinfor-
mation about climate change. In I. E. Chiluwa & S. A.
Samoilenko (Eds.), Handbook of research on deception,
fake news, and misinformation online (pp. 281–306).
Hershey, PA: IGI Global.
Cook, J., Ellerton, P., & Kinkead, D. (2018). Deconstructing
climate misinformation to identify reasoning errors.
Environmental Research Letters, 13(2), Article 024018.
Cooke, N. (2017). Posttruth, truthiness, and alternative facts:
Information behavior and critical information consump-
tion for a new age. The Library Quarterly: Information,
Community, Policy, 87(3), 211–221.
Courchesne, L., Ilhardt, J., & Shapiro, J. N. (2021). Review of
social science research on the impact of countermeasures
against influence operations. Harvard Kennedy School
Misinformation Review.
Dan, V., Paris, B., Donovan, J., Hameleers, M., Roozenbeek,
J., van der Linden, S., & von Sikorski, C. (2021). Visual
mis-and disinformation, social media, and democracy.
Journalism & Mass Communication Quarterly, 98(3),
641–664.
Darnton, R. (2009). The case for books: Past, present, and
future. Public Affairs.
Das, A., & Schroeder, R. (2021). Online disinformation in
the run-up to the Indian 2019 election. Information,
Communication & Society, 24(12), 1762–1778.
Dawson, C., & Woodward, A. S. (1913). On the discovery
of a Palaeolithic human skull and mandible in a flint-
bearing gravel overlying the Wealden (Hastings Beds)
at Piltdown, Fletching (Sussex). Quarterly Journal of the
Geological Society, 69(1–4), 117–123.
De Beer, D., & Matthee, M. (2021). Approaches to identify
fake news: A systematic literature review. In T. Antipova
(Ed.), Integrated science in digital age 2020 (pp. 13–22).
Springer.
Desai, S., Pilditch, T., & Madsen, J. (2020). The rational contin-
ued influence of misinformation. Cognition, 205, Article
104453.
Desai, S., & Reimers, S. (2018). Some misinformation is more
easily countered: An experiment on the continued influ-
ence effect. Annual Meeting of the Cognitive Science
Society.
Di Domenico, G., Sit, J., Ishizaka, A., & Nunan, D. (2021). Fake
news, social media and marketing: A systematic review.
Journal of Business Research, 124, 329–341.
Di Domenico, G., & Visentin, M. (2020). Fake news or true
lies? Reflections about problematic contents in marketing.
International Journal of Market Research, 62(4), 409–417.
DiFonzo, N., & Bordia, P. (2007). Rumors influence: Toward a
dynamic social impact theory of rumor. In A. R. Pratkanis
(Ed.), The science of social influence: Advances and future
progresses (pp. 271–295). Psychology Press.
Dixon, G. N., McKeever, B. W., Holton, A. E., Clarke, C., &
Eosco, G. (2015). The power of a picture: Overcoming
scientific misinformation by communicating weight-of-
evidence with visual exemplars. Journal of Communication,
65, 639–659.
Drucker, E., & Barreras, R. (2005). Studies of voting behavior
and felony disenfranchisement among individuals in the
criminal justice system in New York, Connecticut, and
Ohio. The Sentencing Project.
Duffy, A., Tandoc, E., & Ling, R. (2019). Too good to be true,
too good not to share: The social utility of fake news.
Information, Communication & Society, 23(13), 1965–1979.
Durodolu, O. O., & Ibenne, S. K. (2020). The fake news
infodemic vs information literacy. Library Hi Tech News,
37(7), 13–14.
Ecker, U. K. H., Lewandowsky, S., Cook, J., Schmid, P., Fazio,
L. K., Brashier, N., Kendeou, P., Vraga, E., & Amazeen,
M. A. (2022). The psychological drivers of misinformation
belief and its resistance to correction. Nature Reviews
Psychology, 1(1), 13–29.
Ecker, U. K. H., Lewandowsky, S., Swire, B., & Chang, D. (2011).
Correcting false information in memory: Manipulating the
strength of misinformation encoding and its retraction.
Psychonomic Bulletin & Review, 18(3), 570–578.
Efferson, C., McKay, R., & Fehr, E. (2020). The evolution of
distorted beliefs vs. mistaken choices under asymmetric
error costs. Evolutionary Human Sciences, 2, 1–24.
Eisen, J. L., Phillips, K. A., Baer, L., Beer, D. A., Atala, K. D.,
& Rasmussen, S. A. (1998). The Brown Assessment of
Beliefs Scale: Reliability and validity. American Journal
of Psychiatry, 155(1), 102–108.
Elsasser, S. W., & Dunlap, R. E. (2013). Leading voices in the
denier choir: Conservative columnists’ dismissal of global
warming and denigration of climate science. American
Behavioral Scientist, 57(6), 754–776.
European Commission. (2018). A multi-dimensional approach
to disinformation. European Commission (Directorate-
General for Communications Networks, Content and
Technology).
Fallis, D. (2015). What is disinformation? Library Trends,
63(3), 401–426.
Farrell, J. (2019). The growth of climate change misinforma-
tion in US philanthropy: Evidence from natural language
processing. Environmental Research Letters, 14(3), Article
034013.
Farrell, J., McConnell, K., & Brulle, R. (2019). Evidence-based
strategies to combat scientific misinformation. Nature
Climate Change, 9, 191–195.
Feest, U., & Sturm, T. (2011). What (good) is historical epis-
temology? Editors’ introduction. Erkenntnis, 75, 285–302.
Fernández, M., & Bellogín, A. (2020). Recommender systems
and misinformation: The problem or the solution? http://
oro.open.ac.uk/72186/1/2020_Recys_ohars_workshop
.pdf
Ferrara, E., Varol, O., Davis, C., Menczer, F., & Flammini, A.
(2016). The rise of social bots. Communications of the
ACM, 59(7), 96–104.
Ferreira, G. B., & Borges, S. (2020). Media and misinformation
in times of COVID-19: How people informed themselves
in the days following the Portuguese declaration of the
state of emergency. Journalism and Media, 1(1), 108–121.
Finetti, H., Ramirez, J., & Dwyre, D. (2020). The impact
of ex-felon disenfranchisement on voting behavior.
https://www.researchgate.net/profile/Hayley-Finetti/publi
cation/344362006_The_Impact_of_Ex-Felon_Disen
franchisement_on_Voting_Behavior/links/5f6c59b0
299bf1b53eedd4ab/The-Impact-of-Ex-Felon-Disenfranchi
sement-on-Voting-Behavior.pdf
Fishbein, M., & Ajzen, I. (1975). Belief, attitude, intention,
and behavior: An introduction to theory and research.
Addison-Wesley.
Fisher, C., Flew, T., Park, S., Lee, J. Y., & Dulleck, U. (2021).
Improving trust in news: Audience solutions. Journalism
Practice, 15(10), 1497–1515.
Fletcher, R., & Nielsen, R. K. (2019). Generalised scepticism:
How people navigate news on social media. Information,
Communication & Society, 22(12), 1751–1769.
Flynn, A. W., Domínguez, S., Jr., Jordan, R., Dyer, R. L., &
Young, E. I. (2021). When the political is professional:
Civil disobedience in psychology. American Psychologist,
76(8), 1217.
Flynn, D. J., Nyhan, B., & Reifler, J. (2017). The nature and ori-
gins of misperceptions: Understanding false and unsup-
ported beliefs about politics. Political Psychology, 38(1),
127–150.
Fowler, A., & Margolis, M. (2013). The political consequences
of uninformed voters. Electoral Studies, 30, 1–11.
Frost, P. (2000). The quality of false memory over time: Is
memory for misinformation "remembered" or "known"?
Psychonomic Bulletin & Review, 7(3), 531–536.
Funke, D. (2021). Global responses to misinformation and
populism. In H. Tumber & S. Waisbord (Eds.), The
Routledge companion to media disinformation and popu-
lism (pp. 449–458). Routledge.
Galanis, P. A., Vraka, I., Siskou, O., Konstantakopoulou, O.,
Katsiroumpa, A., Moisoglou, I., & Kaitelidou, D. (2021).
Predictors of parents’ intention to vaccinate their children
against the COVID-19 in Greece: A cross-sectional study.
medRxiv. https://doi.org/10.1101/2021.09.27.21264183
Galeotti, A. E. (2020). Political disinformation and voting
behavior: Fake news and motivated reasoning. Notizie di
Politeia, 142, 64–85.
Gangarosa, E. J., Galazka, A. M., Wolfe, C. R., Phillips, L. M.,
Miller, E., Chen, R. T., & Gangarosa, R. E. (1998). Impact of
anti-vaccine movements on pertussis control: The untold
story. The Lancet, 351(9099), 356–361.
Garrett, R. K., Nisbet, E. C., & Lynch, E. K. (2013). Undermining
the corrective effects of media-based political fact checking?
The role of contextual cues and naive theory. Journal of
Communication, 63(4), 617–637.
Ghenai, A., & Mejova, Y. (2017). Catching Zika fever: Appli-
cation of crowdsourcing and machine learning for track-
ing health misinformation on Twitter. arXiv preprint
arXiv:1707.03778.
Gimpel, H., Graf, V., & Graf-Drasch, V. (2020). A comprehen-
sive model for individuals’ acceptance of smart energy
technology–A meta-analysis. Energy Policy, 138, Article
111196.
Glaeser, E. L., & Ujhelyi, G. (2010). Regulating misinformation.
Journal of Public Economics, 94(3–4), 247–257.
Glasman, L. R., & Albarracín, D. (2006). Forming attitudes
that predict future behavior: A meta-analysis of the atti-
tude-behavior relation. Psychological Bulletin, 132(5),
778–822.
Godler, Y. (2020). Post-post-truth: An adaptationist theory
of journalistic verism. Communication Theory, 30(2),
169–187.
Gradón, K. (2020). Crime in the time of the plague: Fake news
pandemic and the challenges to law-enforcement and
intelligence community. Society Register, 4(2), 133–148.
Grant, M. (2004). Greek and Roman historians: Information
and misinformation. Routledge.
Greene, C. M., & Murphy, G. (2021). Quantifying the effects of
fake news on behavior: Evidence from a study of COVID-
19 misinformation. Journal of Experimental Psychology:
Applied, 27(4), 773–784.
Greifeneder, F., Notarnicola, C., & Wagner, W. (2021). A
machine learning-based approach for surface soil mois-
ture estimations with Google Earth Engine. Remote Sensing,
13(11), 2099.
Greifeneder, R., Jaffé, M. E., Newman, E. J., & Schwarz, N.
(Eds.). (2020). What is new and true about fake news?
In The psychology of fake news (pp. 1–8). Routledge.
Grinberg, N., Joseph, K., Friedland, L., Swire-Thompson, B.,
& Lazer, D. (2019). Fake news on Twitter during the 2016
US presidential election. Science, 363(6425), 374–378.
Guastella, G. (2017). Word of mouth: Fama and its personi-
fications in art and literature in ancient Rome. Oxford
University Press.
Guess, A., Aslett, K., Tucker, J., Bonneau, R., & Nagler, J.
(2021). Cracking open the news feed: Exploring what
US Facebook users see and share with large-scale plat-
form data. Journal of Quantitative Description: Digital
Media, 1. http://doi.org/10.51685/jqd.2021.006
Guess, A., & Coppock, A. (2018). Does counter-attitudinal
information cause backlash? Results from three large
survey experiments. British Journal of Political Science,
50(4), 1497–1515.
Guess, A., Nagler, J., & Tucker, J. (2019). Less than you think:
Prevalence and predictors of fake news dissemination
on Facebook. Science Advances, 5(1), Article eaau4586.
Habgood-Coote, J. (2019). Stop talking about fake news!
Inquiry, 62(9–10), 1033–1065.
Haiden, L., & Althuis, J. (2018). The definitional challenges
of fake news. In International Conference on Social
Computing, Behavior-Cultural Modeling, and Prediction
and Behavior Representation in Modeling and Simulation,
Washington, DC.
Hall, L., Strandberg, T., Pärnamets, P., Lind, A., Tärning, B., &
Johansson, P. (2013). How the polls can be both spot on
and dead wrong: Using choice blindness to shift politi-
cal attitudes and voter intentions. PLOS ONE, 8(4), Article
e60554. https://doi.org/10.1371/journal.pone.0060554
Hameleers, M., van der Meer, T., & Vliegenthart, R. (2021).
Civilised truths, hateful lies? Incivility and hate speech
in false information – evidence from fact-checked state-
ments in the US. Information, Communication and
Society, 25(11), 1596–1613. https://doi.org/10.1080/136
9118X.2021.1874038
Han, C., Kumar, D., & Durumeric, Z. (2022). On the infra-
structure providers that support misinformation websites.
In Proceedings of the International AAAI Conference on
Web and Social Media (Vol. 16, pp. 287–298).
Hartwig, M., & Bond, C. F., Jr. (2011). Why do lie-catchers
fail? A lens model meta-analysis of human lie judgments.
Psychological Bulletin, 137(4), 643–659.
Hartwig, M., & Bond, C. F., Jr. (2014). Lie detection from mul-
tiple cues: A meta-analysis. Applied Cognitive Psychology,
28(5), 661–676.
Hattori, K., & Higashida, K. (2014). Misleading advertising
and minimum quality standards. Information Economics
and Policy, 28, 1–14.
Hebenstreit, J. (2022). Voter polarisation in Germany:
Unpolarised Western but polarised Eastern Germany?
German Politics. https://doi.org/10.1080/09644008.2022
.2056595
Heldt, A. (2019). Let’s meet halfway: Sharing new respon-
sibilities in a digital age. Journal of Information Policy,
9(1), 336–369.
Hernon, P. (1995). Disinformation and misinformation through
the internet: Findings of an exploratory study. Government
Information Quarterly, 12(2), 133–139.
Herrero-Diz, P., Conde-Jiménez, J., & Reyes de Cózar, S.
(2020). Teens’ motivations to spread fake news on WhatsApp.
Social Media + Society, 6(3), 1–14. https://doi.org/10
.1177/2056305120942879
Hertwig, R., & Grüne-Yanoff, T. (2017). Nudging and boosting:
Steering or empowering good decisions. Perspectives on
Psychological Science, 12(6), 973–986.
Heylighen, F. (1997). Objective, subjective and intersubjec-
tive selectors of knowledge. Evolution and Cognition,
3(1), 63–67.
Hills, T. (2019). The dark side of information proliferation.
Perspectives on Psychological Science, 14(3), 323–330.
Howell, L. (2013). Global risks 2013. World Economic
Forum. Retrieved from http://reports.weforum.org/
global-risks-2013/risk-case-1/digital-wildfires-in-a-hyper
connected-world/
Hui, P.-M., Shao, C., Flammini, A., Menczer, F., & Ciampaglia, G.
(2018). The Hoaxy misinformation and fact-checking dif-
fusion network. In Twelfth International AAAI Conference
on Web and Social Media. AAAI Press.
Humprecht, E., Esser, F., & Van Aelst, P. (2020). Resilience
to online disinformation: A framework for cross-national
comparative research. The International Journal of Press/
Politics, 25(3), 493–516.
Imhoff, R., Zimmer, F., Klein, O., António, J. H., Babinska, M.,
Bangerter, A., . . . Van Prooijen, J. W. (2022). Conspiracy
mentality and political orientation across 26 countries.
Nature Human Behaviour, 6(3), 392–403.
James, W. (1890). The principles of psychology. Henry Holt
and Company.
Jin, F., Wang, W., Zhao, L., Dougherty, E., Cao, Y., Chang-Tien, L.,
& Ramakrishnan, N. (2014). Misinformation propagation
in the age of Twitter. Computer, 47(12), 90–94.
Jones, J. M. (2018). U.S. media trust continues to recover from
2016 low. Gallup. https://news.gallup.com/poll/243665/
media-trust-continues-recover-2016-low.aspx
Joslyn, S., Sinatra, G. M., & Morrow, D. (2021). Risk percep-
tion, decision-making, and risk communication in the
time of COVID-19. Journal of Experimental Psychology:
Applied, 27(4), 579–583.
Kadenko, N. I., van der Boon, J. M., Kaaij, J., Kobes, W. J.,
Mulder, A. T., & Sonneveld, J. J. (2021). Whose agenda
is it anyway? The effect of disinformation on COVID-19
vaccination hesitancy in the Netherlands. In International
Conference on Electronic Participation (pp. 55–65).
Springer.
Kahan, D. M. (2017, May 24). Misconceptions, misinformation,
and the logic of identity-protective cognition (Cultural
Cognition Project Working Paper Series No. 164, Yale
Law School, Public Law Research Paper No. 605, Yale
Law & Economics Research Paper No. 575). Available at
SSRN: https://ssrn.com/abstract=2973067 or http://dx.doi
.org/10.2139/ssrn.2973067
Karlova, N., & Lee, J. H. (2011). Notes from the under-
ground city of disinformation: A conceptual investiga-
tion. Proceedings of the American Society for Information
Science, 48(1), 1–9.
Kaufmann, F. (1934). The concept of law in economic science.
The Review of Economic Studies, 1(2), 102–109.
Kaul, V. (2012). Changing paradigms of media landscape in
the digital age. Mass Communication and Journalism, 2.
Kim, M. S., & Hunter, J. E. (1993). Relationships among atti-
tudes, behavioral intentions, and behavior: A meta-anal-
ysis of past research, part 2. Communication Research,
20(3), 331–364.
Kirkpatrick, A. W. (2020). The spread of fake science: Lexical
concreteness, proximity, misinformation sharing, and
the moderating role of subjective knowledge. Public
Understanding of Science, 30(1), 55–74.
Kogan, S., Moskowitz, T. J., & Niessner, M. (2021). Social
media and financial news manipulation. https://ssrn
.com/abstract=3237763
Kopp, C., Korb, K., & Mills, B. (2018). Information-theoretic
models of deception: Modelling cooperation and diffu-
sion in populations exposed to “fake news.” PLOS ONE,
13(11), Article e0207383. https://doi.org/10.1371/journal
.pone.0207383
Kormann, C. (2018). Scott Pruitt’s crusade against “secret sci-
ence” could be disastrous for public health. The New
Yorker. https://www.newyorker.com/science/elements/
scott-pruitts-crusade-against-secret-science-could-be-
disastrous-for-public-health
Kouzy, R., Jaoude, J. A., Kraitem, A., El Alam, M. B., Karam, B.,
Adbib, E., Zarka, J., Traboulsi, C., Akl, E. W., & Baddour,
K. (2020). Coronavirus goes viral: Quantifying the COVID-
19 misinformation epidemic on Twitter. Cureus, 12(3),
Article e7255.
Kozyreva, A., Lewandowsky, S., & Hertwig, R. (2020). Citizens
versus the internet: Confronting digital challenges with
cognitive tools. Psychological Science in the Public
Interest, 21(3), 103–156.
Kraus, S. J. (1995). Attitudes and the prediction of behavior: A
meta-analysis of the empirical literature. Personality and
Social Psychology Bulletin, 21(1), 58–75.
Krause, N., Freiling, I., Beets, B., & Brossard, D. (2020). Fact-
checking as risk communication: The multi-layered risk
of misinformation in times of COVID-19. Journal of Risk
Research, 23(7–8), 1052–1059.
Krause, N. M., Freiling, I., & Scheufele, D. A. (2022). The
“infodemic” infodemic: Toward a more nuanced under-
standing of truth-claims and the need for (not) combatting
misinformation. The ANNALS of the American Academy of
Political and Social Science, 700(1), 112–123.
Kuklinski, J. H., Quirk, P. J., Jerit, J., Schwieder, D., & Rich, R. F.
(2000). Misinformation and the currency of democratic
citizenship. Journal of Politics, 62(3), 790–816.
LaFrance, A. (2014). In 1858, people said the telegraph was
‘too fast for the truth’. The Atlantic.
Lazer, D. M., Baum, M. A., Benkler, Y., Berinsky, A. J.,
Greenhill, K. M., Menczer, F., Nyhan, B., Pennycook, G.,
Rothschild, D., Schudson, M., Sloman, S. A., Thorson, E. A.,
Watts, D. J., & Zittrain, J. L. (2018). The science of fake
news: Addressing fake news requires a multidisciplinary
effort. Science, 359(6380), 1094–1096.
Lecci, L. (2000). An experimental investigation of odd beliefs:
Individual differences in non-normative belief conviction.
Personality and Individual Differences, 29(3), 527–538.
Levi, L. (2018). Real “fake news” and fake “fake news.” First
Amendment Law Review, 16, 232.
Lewandowsky, S., Ecker, U. K. H., & Cook, J. (2017). Beyond
misinformation: Understanding and coping with the “post-
truth” era. Journal of Applied Research in Memory and
Cognition, 6(4), 353–369.
Lewandowsky, S., Ecker, U. K. H., Seifert, C. M., Schwarz,
N., & Cook, J. (2012). Misinformation and its correc-
tion: Continued influence and successful debiasing.
Psychological Science in the Public Interest, 13(3), 106–131.
Lewandowsky, S., & van der Linden, S. (2021). Countering misin-
formation and fake news through inoculation and prebunk-
ing. European Review of Social Psychology, 32(2), 348–384.
Lewis, S. C. (2019). Lack of trust in the news media, institu-
tional weakness, and relational journalism as a potential
way forward. Journalism, 20(1), 44–47.
Lippmann, W. (1922). Public opinion. Harcourt, Brace and
Company.
Liu, Y., & Brook Wu, Y.-F. (2020). FNED: A deep network
for fake news early detection on social media. ACM
Transactions on Information Systems, 38(3), 1–33.
Loomba, S., de Figueiredo, A., Piatek, S. J., de Graaf, K., &
Larson, H. J. (2021). Measuring the impact of COVID-19
vaccine misinformation on vaccination intent in the UK
and USA. Nature Human Behaviour, 5(3), 337–348.
Luo, M., Hancock, J. T., & Markowitz, D. M. (2022). Credibility
perceptions and detection accuracy of fake news head-
lines on social media: Effects of truth-bias and endorse-
ment cues. Communication Research, 49(2), 171–195.
Luskin, R. C., & Bullock, J. G. (2011). “Don’t know” means
“don’t know”: DK responses and the public’s level of polit-
ical knowledge. The Journal of Politics, 73(2), 547–557.
Maertens, R., Anseel, F., & van der Linden, S. (2020).
Combatting climate change misinformation: Evidence for
longevity of inoculation and consensus messaging effects.
Journal of Environmental Psychology, 70, Article 101455.
https://doi.org/10.1016/j.jenvp.2020.101455
Margolin, D. B. (2021). Theory of informative fictions: A char-
acter-based approach to false news and other misinforma-
tion. Communication Theory, 31(4), 714–736.
Marwick, A. (2018). Why do people share fake news? A
sociotechnical model of media effects. Georgetown Law
Technology Review, 2(2), 474–512.
Marwick, A., Clancy, B., & Furl, K. (2022). Far-Right online
radicalization: A review of the literature. The Bulletin of
Technology and Public Life.
McCright, A. M., & Dunlap, R. E. (2011). The politicization of
climate change and polarization in the American public’s
views of global warming, 2001–2010. The Sociological
Quarterly, 52(2), 155–194.
McCright, A. M., Dunlap, R. E., & Marquart-Pyatt, S. T. (2016).
Political ideology and views about climate change in the
European Union. Environmental Politics, 25(2), 338–358.
McKay, R. T., & Dennett, D. C. (2009). Our evolving beliefs
about evolved misbelief. Behavioral and Brain Sciences,
32(6), 541–561.
Metzger, M., Flanagin, A., Mena, P., Jiang, S., & Wilson, C.
(2021). From dark to light: The many shades of sharing
misinformation online. Media and Communication, 9(1),
134–143.
Mill, J. S. (1859). On liberty. Oxford University.
Miró-Llinares, F., & Aguerri, J. C. (2021). Misinformation
about fake news: A systematic critical review of empirical
studies on the phenomenon and its status as a ‘threat’.
European Journal of Criminology. https://doi.org/
10.1177/1477370821994059
Monti, F., Frasca, F., Eynard, D., Mannion, D., & Bronstein,
M. M. (2019). Fake news detection on social media using
geometric deep learning. arXiv preprint arXiv:1902.06673.
Moravec, P., Minas, R., & Dennis, A. R. (2018). Fake news
on social media: People believe what they want to believe
when it makes no sense at all (Kelly School of Business
Research Paper No. 18-87). https://ssrn.com/abstract=
3269541
Muğaloğlu, E. Z., Kaymaz, Z., Mısır, M. E., & Laçin-Şimşek, C.
(2022). Exploring the role of trust in scientists to explain
health-related behaviors in response to the COVID-19
pandemic. Science & Education, 31(5), 1281–1309.
Musi, E. (2018). How did you change my view? A corpus-
based study of concessions’ argumentative role. Discourse
Studies, 20(2), 270–288.
Nelson, J. L., & Taneja, H. (2018). The small, disloyal fake
news audience: The role of audience availability in fake
news consumption. New Media & Society, 20(10), 3720–
3737.
Newman, N., Fletcher, R., Kalogeropoulos, A., & Nielsen, R. K.
(2019). Reuters Institute digital news report 2019. https://
reutersinstitute.politics.ox.ac.uk/sites/default/files/2019-
06/DNR_2019_FINAL_0.pdf
Newport, F. (2015). In U.S., percentage saying vaccines are
vital dips slightly. http://www.gallup.com/poll/181844/
percentage-saying-vaccines-vital-dips-slightly.aspx
Norenzayan, A., & Atran, S. (2004). Cognitive and emotional
processes in the cultural transmission of natural and
nonnatural beliefs. In M. Schaller & C. Crandall (Eds.),
The psychological foundations of culture (pp. 149–169).
Lawrence Erlbaum.
Nyhan, B. (2020). Facts and myths about misperceptions.
Journal of Economic Perspectives, 34(3), 220–236.
Nyhan, B., & Reifler, J. (2010). When corrections fail: The
persistence of political misperceptions. Political Behavior,
32(2), 303–330.
Osman, M., Adams, Z., Meder, B., Bechlivanidis, C., Verduga,
O., & Strong, C. (2022). People’s understanding of the
concept of misinformation. Journal of Risk Research,
25(10), 1239–1258.
Osman, M., McLachlan, S., Fenton, N., Neil, M., Löfstedt, R.,
& Meder, B. (2020). Learning from behavioral changes
that fail. Trends in Cognitive Sciences, 24(12), 969–980.
Osmundsen, M., Bor, A., Vahlstrup, P. B., Bechmann, A., &
Petersen, M. B. (2021). Partisan polarization is the pri-
mary psychological motivation behind political fake news
sharing on Twitter. American Political Science Review,
115(3), 999–1015.
Oyserman, D., & Dawson, A. (2020). Your fake news, our facts:
Identity-based motivation shapes what we believe, share
and accept. In R. Greifender, M. Jaffe, E. J. Newman, & N.
Schwarz (Eds.), The psychology of fake news: Accepting,
sharing and correcting misinformation. Psychology Press.
Pantazi, M., Hale, S., & Klein, O. (2021). Social and cognitive
aspects of the vulnerability to political misinformation.
Political Psychology, 42, 267–304.
Pasek, J., Sood, G., & Krosnick, J. A. (2015). Misinformed
about the Affordable Care Act? Leveraging certainty
to assess the prevalence of misperceptions. Journal of
Communication, 65(4), 660–673.
Paskin, D. (2018). Real or fake news: Who knows? The Journal
of Social Media in Society, 7(2), 252–273.
Peck, A. (2020). A problem of amplification: Folklore and fake
news in the age of social media. Journal of American
Folklore, 133(529), 329–351.
Pennycook, G., Epstein, Z., Mosleh, M., Arechar, A., Eckles, D.,
& Rand, D. (2021c). Shifting attention to accuracy can
reduce misinformation online. Nature, 592(7855), 590–595.
Pennycook, G., McPhetres, J., Zhang, Y., Lu, J., & Rand, D.
(2020). Fighting COVID-19 misinformation on social
media: Experimental evidence for a scalable accuracy-
nudge intervention. Psychological Science, 31(7), 770–780.
Pennycook, G., & Rand, D. G. (2021a). Accuracy prompts are
a replicable and generalizable approach for reducing the
spread of misinformation. Nature Communications, 13,
Article 2333.
Pennycook, G., & Rand, D. G. (2021b). The psychology of
fake news. Trends in Cognitive Sciences, 25(5), 388–402.
Petratos, P. (2021). Misinformation, disinformation, and fake
news: Cyber risks to business. Business Horizons, 64(6),
763–774.
Pettegree, A. (2014). The invention of news: How the world
came to know about itself. Yale University Press.
Pluviano, S., Watt, C., Pompéia, S., Ekuni, R., & Della Sala, S.
(2022). Forming and updating vaccination beliefs: Does
the continued effect of misinformation depend on what
we think we know? Cognitive Processing, 23, 367–378.
Pomerantsev, P. (2015). Authoritarianism goes global (II): The
Kremlin’s information war. Journal of Democracy, 26(4),
40–50.
Poovey, M. (1998). A history of the modern fact: Problems
of knowledge in the sciences of wealth and society. The
University of Chicago Press.
Posetti, J., & Matthews, A. (2018). A short guide to the history
of ‘fake news’ and disinformation. International Center
for Journalists. https://www.icfj.org/news/short-guide-
history-fake-news-and-disinformation-new-icfj-learning-
module
Priniski, J. H., & Horne, Z. (2018, July 25–28). Attitude change
on reddit’s change my view. In Proceedings of the 40th
Annual Conference of the Cognitive Science Society,
Madison (pp. 2279–2284). Cognitive Science Society.
Pronin, E., Lin, D., & Ross, L. (2002). The bias blind spot:
Perceptions of bias in self and others. Personality and
Social Psychology Bulletin, 28, 369–381.
Pulido, C. M., Villarejo-Carballido, B., Redondo-Sama, G., &
Gomez, A. (2020). COVID-19 infodemic: More retweets
for science-based information on coronavirus than for
false information. International Sociology, 35(4), 377–392.
Qazvinian, V., Rosengren, E., Radev, D., & Mei, Q. (2011).
Rumor has it: Identifying misinformation in microblogs. In
Proceedings of the 2011 Conference on Empirical Methods
in Natural Language Processing (pp. 1589–1599).
Rader, E., & Gray, R. (2015). Understanding user beliefs
about algorithmic curation in the Facebook news feed.
In Proceedings of the 33rd Annual ACM Conference on
Human Factors in Computing Systems (pp. 173–182).
Rao, A. (2022). Deceptive claims using fake news advertising:
The impact on consumers. Journal of Marketing Research,
59(3), 534–554.
Rashkin, P., Choi, E., Jang, J., Volkova, S., & Choi, Y. (2017).
Truth of varying shades: Analyzing language in fake news
and political fact-checking. In Proceedings of the 2017
Conference in Empirical Methods in Natural Language
Processing, Copenhagen, Denmark (pp. 2931–2937).
Reiss, J., & Sprenger, J. (2020). Scientific objectivity. In The
Stanford Encyclopedia of Philosophy. https://plato.stanford
.edu/cgi-bin/encyclopedia/archinfo.cgi?entry=scientific-
objectivity
Ribeiro, M. H., Calais, P. H., Almeida, V. A. F., & Meira, W.,
Jr. (2017). “Everything I disagree with is #FakeNews”:
Correlating political polarization and spread of misinfor-
mation. arXiv preprint arXiv:1706.05924.
Robertson, C. T., & Mourão, R. R. (2020). Faking alternative
journalism? An analysis of presentations of “fake news”
sites. Digital Journalism, 8(8), 1011–1029.
Robinson, R. J., Keltner, D., Ward, A., & Ross, L. (1995).
Actual versus assumed differences in construal: “Naive
realism” in intergroup perception and conflict. Journal of
Personality and Social Psychology, 68(3), 404–417.
Rommetveit, R. (1979). On the architecture of intersubjectiv-
ity. In R. Rommetveit & R. M. Blakar (Eds.), Studies of lan-
guage, thought and verbal communication (pp. 58–75).
Academic Press.
Roozenbeek, J., Schneider, C. R., Dryhurst, S., Kerr, J., Freeman,
A. L. J., Recchia, G., van der Bles, A. M., & van der Linden,
S. (2020). Susceptibility to misinformation about COVID-
19 around the world. Royal Society Open Science, 7(10),
Article 201199. https://doi.org/10.1098/rsos.201199
Roozenbeek, J., & van der Linden, S. (2019). The fake news
game: Actively inoculating against the risk of misinforma-
tion. Journal of Risk Research, 22(5), 570–580.
Ross, A. S., & Rivers, D. J. (2018). Discursive deflection:
Accusation of “fake news” and the spread of mis- and
disinformation in the tweets of President Trump. Social
Media + Society, 4(2), 1–12.
Ross, L. (2018). From the fundamental attribution error to
the truly fundamental attribution error and beyond: My
research journey. Perspectives on Psychological Science,
13(6), 750–769.
Rothschild, N., & Fischer, S. (2022, July 12). News engage-
ment plummets as Americans tune out. Axios. Retrieved
from https://www.axios.com/2022/07/12/news-media-
readership-ratings-2022
Scheufele, D. A., & Krause, N. M. (2019). Science audiences,
misinformation, and fake news. Proceedings of the
National Academy of Sciences, USA, 116(16), 7662–7669.
Scheufele, D. A., Krause, N. M., & Freiling, I. (2021).
Misinformed about the “infodemic?” Science’s ongoing
struggle with misinformation. Journal of Applied Research
in Memory and Cognition, 10(4), 522–526.
Schuetz, A. (1942). Scheler’s theory of intersubjectivity
and the general thesis of the alter ego. Philosophy and
Phenomenological Research, 2(3), 323–347.
Schwalbe, M. C., Cohen, G. L., & Ross, L. D. (2020). The objec-
tivity illusion and voter polarization in the 2016 presi-
dential election. Proceedings of the National Academy of
Sciences, USA, 117(35), 21218–21229.
Shao, C., Ciampaglia, G., Flammini, A., & Menczer, F. (2016).
Hoaxy: A platform for tracking online misinforma-
tion. In WWW ’16 Companion: Proceedings of the 25th
International Conference Companion on World Wide Web
(pp. 745–750). https://doi.org/10.1145/2872518.2890098
Shao, C., Hui, P.-M., Wang, L., Jiang, X., Flammini, A., Menczer,
F., & Ciampaglia, G. (2018). Anatomy of an online misin-
formation network. PLOS ONE, 13(4), Article e0196087.
https://doi.org/10.1371/journal.pone.0196087
Sheeran, P., Maki, A., Montanaro, E., Avishai-Yitshak, A.,
Bryan, A., Klein, W. M., & Rothman, A. J. (2016). The
impact of changing attitudes, norms, and self-efficacy on
health-related intentions and behavior: A meta-analysis.
Health Psychology, 35(11), 1178–1188.
Sherman, D. K., & Cohen, G. L. (2002). Accepting threatening
information: Self-affirmation and the reduction of defen-
sive biases. Current Directions in Psychological Science,
11, 119–123.
Shibutani, T. (1966). Improvised news: A sociological study of
rumor. Bobbs-Merrill.
Shiina, A., Niitsu, T., Kobori, O., Idemoto, K., Hashimoto, T.,
Sasaki, T., & Iyo, M. (2020). Relationship between per-
ception and anxiety about COVID-19 infection and risk
behaviors for spreading infection: A national survey in
Japan. Brain, Behavior, and Immunity - Health, 6, Article
100101.
Shin, J., Jian, L., Driscoll, K., & Bar, F. (2018). The diffusion
of misinformation on social media: Temporal pattern,
message and source. Computers in Human Behavior, 83,
278–287.
Shu, K., Sliva, A., Wang, S., Tang, J., & Liu, H. (2017). Fake
news detection on social media: A data mining perspec-
tive. ACM SIGKDD Explorations Newsletter, 19(1), 22–36.
Shu, K., Wang, S., & Liu, H. (2018, April). Understanding user
profiles on social media for fake news detection. In 2018
IEEE Conference on Multimedia Information Processing
and Retrieval (MIPR) (pp. 430–435). IEEE.
Silverman, C., & Alexander, L. (2016). How teens in the
Balkans are duping Trump supporters with fake news.
BuzzFeed News.
Sloane, T. (2001). Encyclopaedia of rhetoric. Oxford University
Press.
Snyder, T. (2021). On tyranny: Twenty lessons from the twen-
tieth century [graphic ed.]. Random House.
Søe, S. (2017). Algorithmic detection of misinformation
and disinformation: Gricean perspectives. Journal of
Documentation, 74(2), 309–332.
Southwell, B. G., & Thorson, E. A. (2015). The prevalence, con-
sequences and remedy of misinformation in mass media
systems. Journal of Communication, 65(4), 589–595.
Soutter, A. R. B., Bates, T. C., & Mõttus, R. (2020). Big Five
and HEXACO personality traits, proenvironmental atti-
tudes, and behaviors: A meta-analysis. Perspectives on
Psychological Science, 15(4), 913–941.
Steensen, S. (2019). Journalism’s epistemic crisis and its solu-
tion: Disinformation, datafication and source criticism.
Journalism, 20(1), 185–189.
Stille, L., Norin, E., & Sikström, S. (2017). Self-delivered
misinformation – Merging the choice blindness and
misinformation effect paradigms. PLOS ONE, 12(3),
Article e0173606. https://doi.org/10.1371/journal.pone
.0173606
Strandberg, T., Björklund, F., Hall, L., Johansson, P., &
Pärnamets, P. (2019, July). Correction of manipulated
responses in the choice blindness paradigm: What are
the predictors? In CogSci (pp. 2884–2890).
Sunstein, C. (2017). #Republic: Divided democracy in the age
of social media. Princeton University Press.
Talwar, S., Dhir, A., Kaur, P., Zafar, N., & Alrasheedy, M.
(2019). Why do people share fake news? Associations
between the dark side of social media use and fake news
sharing behaviour. Journal of Retailing and Consumer
Services, 51, 72–82.
Tambuscio, M., Ruffo, G., Flammini, A., & Menczer, F.
(2015). Fact-checking effect on viral hoaxes: A model
of misinformation spread in social networks. In WWW
’15 Companion: Proceedings of the 24th International
Conference on World Wide Web (pp. 977–982). https://
doi.org/10.1145/2740908.2742572
Tan, A. S., Lee, C. J., & Chae, J. (2015). Exposure to health
(mis)information: Lagged effects on young adults’
health behaviors and potential pathways. Journal of
Communication, 65(4), 674–698.
Tandoc, E. C., Jr., Duffy, A., Jones-Jang, S. M., & Pin, W. G.
W. (2021). Poisoning the information well? The impact of
fake news on news media credibility. Journal of Language
and Politics, 20(5), 783–802.
Tandoc, E. C., Jr., Lim, Z. W., & Ling, R. (2018). Defining “fake
news.” Digital Journalism, 6(2), 137–153.
Tasnim, S., Hossain, M. M., & Mazumder, H. (2020). Impact
of rumours and misinformation on COVID-19 in social
media. Journal of Preventive Medicine and Public Health,
53, 171–174.
Törnberg, P. (2018). Echo chambers and viral misinformation:
Modeling fake news as complex contagion. PLOS ONE,
13(9), Article e0203958. https://doi.org/10.1371/journal
.pone.0203958
Trafimow, D., & Osman, M. (2022). Barriers to converting
applied social psychology to bettering the human condi-
tion. Basic and Applied Social Psychology, 44(1), 1–11.
Trevors, G., & Duffy, M. C. (2020). Correcting COVID-19 mis-
conceptions requires caution. Educational Researcher,
49(7), 538–542.
Tufekci, Z. (2014). Big questions for social media big data:
Representativeness, validity and other methodological
pitfalls. Proceedings of the Eighth International AAAI
Conference on Weblogs and Social Media, 8(1), 505–514.
https://doi.org/10.1609/icwsm.v8i1.14517
Valecha, R., Volety, T., Rao, H. R., & Kwon, K. H. (2020).
Misinformation sharing on Twitter during Zika: An inves-
tigation of the effect of threat and distance. IEEE Internet
Computing, 25(1), 31–39.
Valenzuela, S., Bachmann, I., & Bargsted, M. (2019). The per-
sonal is the political? What do WhatsApp users share and
how it matters for news knowledge, polarization and par-
ticipation in Chile. Digital Journalism, 9(1), 1–21.
Valenzuela, S., Halpern, D., Katz, J. E., & Miranda, J. P. (2019).
The paradox of participation versus misinformation:
Social media, political engagement, and the spread of
misinformation. Digital Journalism, 7(6), 802–823.
Van Bavel, J. J., Harris, E. A., Pärnamets, P., Rathje, S., Doell,
K. C., & Tucker, J. A. (2021). Political psychology in the
digital (mis)information age: A model of news belief and
sharing. Social Issues and Policy Review, 15(1), 84–113.
van der Linden, S., Leiserowitz, A., Rosenthal, S., & Maibach, E.
(2017). Inoculating the public against misinformation
about climate change. Global Challenges, 1, Article
1600008.
van der Linden, S., Roozenbeek, J., & Compton, J. (2020).
Inoculating against fake news about COVID-19. Frontiers
in Psychology, 11, Article 566790. https://doi.org/10.3389/
fpsyg.2020.566790
Van der Meer, T., & Jin, Y. (2019). Seeking formula for
misinformation treatment in public health crises: The
effects of corrective information type and source. Health
Communication, 35(5), 560–575.
Van Heekeren, M. (2019). The curative effect of social media
on fake news: A historical re-evaluation. Journalism
Studies, 21(3), 306–318.
van Prooijen, J.-W., Etienne, T. W., Kutiyski, Y., & Krouwel,
A. P. M. (2021). Conspiracy beliefs prospectively pre-
dict health behavior and well-being during a pandemic.
Psychological Medicine. Advance online publication.
https://doi.org/10.1017/S0033291721004438
Vargo, C. J., Guo, L., & Amazeen, M. A. (2017). The agenda-
setting power of fake news: A Big Data analysis of the
online media landscape from 2014 to 2016. New Media &
Society, 20, 2028–2049.
Vraga, E. K., & Bode, L. (2020). Defining misinformation
and understanding its bounded nature: Using expertise
and evidence for describing misinformation. Political
Communication, 37(1), 136–144.
Vraga, E. K., Tully, M., & Bode, L. (2020). Empowering users
to respond to misinformation about Covid-19. Media and
Communication (Lisboa), 8(2), 475–479.
Wagner, M. C., & Boczkowski, P. J. (2019). The reception of
fake news: The interpretations and practices that shape
the consumption of perceived misinformation. Digital
Journalism, 7(7), 870–885.
Waldman, A. (2018). The marketplace of fake news. University
of Pennsylvania Journal of Constitutional Law, 20, 845–870.
Walter, N., Brooks, J. J., Saucier, C. J., & Suresh, S. (2021).
Evaluating the impact of attempts to correct health mis-
information on social media: A meta-analysis. Health
Communication, 36(13), 1776–1784.
Walter, N., Cohen, J., Holbert, R. L., & Morag, Y. (2020). Fact-
checking: A meta-analysis of what works and for whom.
Political Communication, 37(3), 350–375.
Walter, N., & Murphy, S. T. (2018). How to unring the bell: A
meta-analytic approach to correction of misinformation.
Communication Monographs, 85(3), 423–441.
Walter, N., & Tukachinsky, R. (2020). A meta-analytic examina-
tion of the continued influence of misinformation in the face
of correction: How powerful is it, why does it happen, and
how to stop it? Communication Research, 47(2), 155–177.
Wardle, C. (2020). Journalism and the new information eco-
system: Responsibilities and challenges. In M. Zimdars &
K. McLeod (Eds.), Fake news: Understanding media and
misinformation in the digital age (pp. 71–86). MIT Press.
Waruwu, B. K., Tandoc, E. C., Jr., Duffy, A., Kim, N., & Ling, R.
(2021). Telling lies together? Sharing news as a form of social
authentication. New Media & Society, 23(9), 2516–2533.
Wasserman, H. (2020). Fake news from Africa: Panics, politics
and paradigms. Journalism, 21(1), 3–16.
Watts, D. J., & Rothschild, D. M. (2017, Dec. 5). Don’t blame
the election on fake news. Blame it on the media.
Columbia Journalism Review. https://www.cjr.org/analy
sis/fake-news-media-election-trump.php
Watts, D. J., Rothschild, D. M., & Mobius, M. (2021). Measuring
the news and its impact on democracy. Proceedings of
the National Academy of Sciences, USA, 118(15), Article e1912443118.
Webb, T. L., & Sheeran, P. (2006). Does changing behavioral
intentions engender behavior change? A meta-analysis of
the experimental evidence. Psychological Bulletin, 132(2),
249–268.
Wei, Z., Liu, Y., & Li, Y. (2016). Is this post persuasive?
Ranking argumentative comments in the online forum. In
Proceedings of the 54th Annual Meeting of the Association
for Computational Linguistics (pp. 195–200).
Weidner, K., Beuk, F., & Bal, A. (2019). Fake news and the
willingness to share: A schemer schema and confirma-
tory bias perspective. Journal of Product and Brand
Management, 29(2), 180–187. https://doi.org/10.1108/
JPBM-12-2018-2155
Weiner, J. S., Oakley, K. P., & Le Gros Clark, W. E. (1953).
The solution of the Piltdown problem. Bulletin of the British
Museum (Natural History), Geology, 2(3), 139–146.
Whyte, C. (2020). Deepfake news: AI-enabled disinformation
as a multi-level public policy challenge. Journal of Cyber
Policy, 5(2), 199–217.
Wilson, K. H. (2015). The national and cosmopolitan dimen-
sions of disciplinarity: Reconsidering the origins of com-
munication studies. Quarterly Journal of Speech, 101(1),
244–257.
Winchester, S. (2018). Exactly: How precision engineers cre-
ated the modern world. William Collins.
Wood, T., & Porter, E. (2018). The elusive backfire effect: Mass
attitudes’ steadfast factual adherence. Political Behavior,
41, 135–163.
Wootton, D. (2015). The invention of science: A new history
of the scientific revolution. Penguin Books.
World Health Organization. (2022). Infodemic. https://www
.who.int/health-topics/infodemic#tab=tab_1
Wright, D. B., Self, G., & Justice, C. (2000). Memory conformity:
Exploring misinformation effects when presented by another
person. British Journal of Psychology, 91(2), 189–202.
Wu, L., Morstatter, F., Carley, K., & Liu, H. (2019). Misinformation
in social media: Definition, manipulation and detection.
ACM SIGKDD Explorations Newsletter, 21(2), 80–90.
Xiao, X., & Wong, R. M. (2020). Vaccine hesitancy and per-
ceived behavioral control: A meta-analysis. Vaccine,
38(33), 5131–5138.
Zebregs, S., van den Putte, B., Neijens, P., & de Graaf, A.
(2015). The differential impact of statistical and narra-
tive evidence on beliefs, attitude, and intention: A meta-
analysis. Health Communication, 30(3), 282–289.
Zeng, E., Kohno, T., & Roesner, F. (2020). Bad news: Clickbait
and deceptive ads on news and misinformation websites.
In Workshop on Technology and Consumer Protection
(ConPro). IEEE.
Zeng, E., Kohno, T., & Roesner, F. (2021). What makes a “bad”
ad? User perceptions of problematic online advertising.
In Proceedings of the 2021 CHI Conference on Human
Factors in Computing Systems (pp. 1–24).
Zhou, X., & Zafarani, R. (2021). Fake news: A survey of
research, detection methods and opportunities. ACM
Computing Surveys, 53(5), 1–40.
Zhou, Y., & Shen, L. (2021). Confirmation bias and the
persistence of misinformation on climate change.
Communication Research, 49(4), 500–523.
... Before we scrutinize two of the main arguments used to support the minimizing position, we note several points of general agreement. First, we concur that misinformation is not a new problem (e.g., Adams et al., 2023;Freiling et al., 2023;Scheufele et al., 2021). However, the scale and complexity of the problem have increased due to the evolution of the information environment and ensuing changes to incentives, transmission speeds, and the structure of online and offline media and communication networks, as well as emergence of the "post-truth" phenomenon (e.g., Capilla, 2021;Lasser et al., 2023;Terren & Borge-Bravo, 2021;Vosoughi et al., 2018). ...
... The second major claim put forward by critics is that misinformation does not have meaningful causal impacts on beliefs or behaviors (e.g., Adams et al., 2023;Altay, Berriche, & Acerbi, 2023;Freiling et al., 2023). Causality is, of course, a vexing issue throughout science. ...
... Even if misinformation consumption is voluntary, it does not follow that this consumption is generally harmless. critics have claimed that no problematic behaviors have been "reliably demonstrated empirically to be the outcome of misinformation" (Adams et al., 2023(Adams et al., , p. 1436, that the relationship between misinformation and undesirable behaviors is merely "speculated" (ibid.), and that "with the exception of [an] unconscious priming study, [no] work examining the association between misinformation … and behavior shows a causal link between the two" (p. 1445). ...
Article
Full-text available
Recent academic debate has seen the emergence of the claim that misinformation is not a significant societal problem. We argue that the arguments used to support this minimizing position are flawed, particularly if interpreted (e.g., by policymakers or the public) as suggesting that misinformation can be safely ignored. Here, we rebut the two main claims, namely that misinformation is not of substantive concern (a) due to its low incidence and (b) because it has no causal influence on notable political or behavioral outcomes. Through a critical review of the current literature, we demonstrate that (a) the prevalence of misinformation is nonnegligible if reasonably inclusive definitions are applied and that (b) misinformation has causal impacts on important beliefs and behaviors. Both scholars and policymakers should therefore continue to take misinformation seriously.
... The problems determining what exactly the effects of misinformation are call for care when making statements about how serious and widespread a problem scientific misinformation is when it comes to people's beliefs and particularly to their behaviors. Poor information environments are not, after all, a recent problem (Adams et al., 2023;Scheufele et al., 2021). ...
... Measuring the effectiveness of these corrections is not without methodological challenges. But given the problems we have just discussed, it should be unsurprising that evidence of effectiveness is mixed (Adams et al., 2023;Ecker et al., 2022;Fernández-Roldán et al., 2023;van der Linden, 2022). Hence, even assuming that the misinformation narrative is correct, these strategies might not be very effective in correcting scientific misinformation. ...
Chapter
Full-text available
Science is our most reliable producer of knowledge. Nonetheless, a significant amount of evidence shows that pluralities of members of publics question a variety of accepted scientific claims as well as policies and recommendation informed by the scientific evidence. Scientific misinformation is considered to play a central role in this state of affairs. In this paper, I challenge the emphasis on misinformation as a primary culprit on two grounds. First, the phenomenon of misinformation is far less clear than what much discussion about the topic would lead one to believe. The evidence regarding the amount of misinformation that exits as well as its role in people's harmful behaviors is at best conflicting and at worst completely useless. Second, the prominence given to misinformation and its harms on people's behaviors disregards the role of values in policymaking and treats scientific information as if it were the only information necessary to make policy decisions. At a minimum, these problems call for caution regarding the emphasis on this phenomenon. After all, if the problem is incorrectly diagnosed, the solutions that are being offered to address the problem of misinformation are bound to at best inadequate and at worst dangerous.
... However, what constitutes "proper" historical research itself has evolved over time, shaped by changing epistemological frameworks and institutional authorities. The privileging of certain forms of knowledge production over others reflects broader historical shifts in how societies establish and validate truth claims (Adams, Osman, Bechlivanidis, Meder, 2023). In other words, accepted historical scholarship in the past may be pseudohistorical now, reflecting the historically contingent nature of authoritative knowledge itself. ...
... In other words, social media users and influencers work for sensemaking, attach meaning to places, and sometimes augment reality. Thus, the intersubjectivity provided by social media platforms may also have the power to increase or decrease the misleading effects of the subjective evaluations on a topic (Adams et al., 2023)-say, for example, a travel experience to a historical part of a city-by widely accumulating likes or dislikes as well as the explanatory comments and hashtags about that place (Wang & Alasuutari, 2017). Therefore, social media can be recognized as a sensemaking tool, in Weick's (1995) perspective. ...
Article
Full-text available
The paper aims to analyze and make visible the intertwined layers of a palimpsest territory, such as the historical bazaar of Kemeraltı in Izmir, Turkey, through the lens of architecture students concerning their perceptions of producing Instagrammable visual data to influence and attract the prospective visitors and to compensate the lack of interest in the bazaar as mentioned by local institutions. Through this analysis, we also aim to conduct a methodological experiment that recognizes social media as a cognitive tool by combining digital and physical representations. The study encompasses a one-month workshop for students of the visualization in an elective architecture course. The technique of theme-based cognitive mapping both in the space of places and the space of flows in Manuel Castells’ sense was utilized to investigate how the students perceive the historical and socio-cultural qualities of the region through different realms. The workshop briefs were accompanied by Instagram hashtag research and the design of visual journals, consisting of the photographs and videos taken by the students to share their influencer/sensemaker routes specific to the selected themes. The students tailored various influencer/sensemaker roles and generated place-based scenarios in combination with the themes. Ultimately, Kemeraltı’s multifaceted genius could be reflected on cognitive and sensory grounds through digital and physical cognitive maps, social media journals, and analyses. It was observed that the students could integrate the intersubjective character of the readily presented data on social media into the subjective and authentic character of the data produced mindfully on the site.
... In an Italian context, Cantarella et al. (2023) show a causal effect between exposure to false information and support for populist parties, suggesting that such exposure could explain at least part of the populist vote (nonetheless explained mostly by self-selection). Yet, the nature and strength of the relationship between misinformation and behavior remain a contentious issue (Adams et al. 2023;Curini and Pizzimenti 2020). ...
Article
News consumption and voting behavior are interlinked and particularly important in elections where traditional political cleavages are not easily applicable. This relationship becomes more complex and uncertain in contexts of low trust in the news media and high levels of misinformation circulating in different news ecosystems. In this study, we test an indirect path between differentiated news media consumption and voting choices, mediated by belief in misinformation, and moderated by news media trust. Our data come from a two-wave panel survey of 1,332 respondents, conducted in Chile before and after the 2022 Constitutional Referendum, a political event that captured international attention after a constitutional proposal was rejected in a process initiated with high public support. Our analyses found that news media consumption significantly affected voting preferences in the referendum, not only indirectly through the acceptance of misinformation, but also directly, suggesting that news organizations might act, intentionally or not, as soundboards of misinformation. These findings suggest that countries with enough press freedom to rely on the news media to be informed but also a high concentration of ownership, topics, and angles covered, might become fertile soil for misinformation to spread in the form of professional news coverage, instead of fabricated, easy-to-spot fake pieces circulating in dubious websites or on social media.
... Forged content is a long-standing security threat (Piva, 2013;Rocha, Scheirer, Boult, & Goldenstein, 2011). Examples of fraudulent content include synthesized faces on fake passports (Robertson et al., 2018), manipulated photos in journalism (Hadland, Cambell, & Lambert, 2015), misinformation in news (Adams, Osman, Bechlivanidis, & Meder, 2023), and false evidence presented in court (Amerini et al., 2013). Evidently, humans have trouble differentiating real content from forgeries (Nightingale, Wade, & Watson, 2017;Sanders, Ueda, Yoshikawa, & Jenkins, 2019;Schetinger, Oliveira, da Silva, & Carvalho, 2017). ...
... Organization, 2022; cf. Adams et al., 2023;Altay et al., 2023). The perceived threat of misinformation has mobilized behavioral scientists to understand relevant cognitive processes and minimize its impact. ...
Article
Full-text available
The standard method for addressing the consequences of misinformation is the provision of a correction in which the misinformation is directly refuted. However, the impact of misinformation may also be successfully addressed by introducing or bolstering alternative beliefs with opposite evaluative implications. Six preregistered experiments clarified important processes influencing the impact of bypassing versus correcting misinformation via negation. First, we find that, following exposure to misinformation, bypassing generally changes people’s attitudes and intentions more than correction in the form of a simple negation. Second, this relative advantage is not a function of the depth at which information is processed but rather the degree to which people form attitudes or beliefs when they receive the misinformation. When people form attitudes when they first receive the misinformation, bypassing has no advantage over corrections, likely owing to anchoring. In contrast, when individuals focus on the accuracy of the statements and form beliefs, bypassing is significantly more successful at changing their attitudes because these attitudes are constructed based on expectancy-value principles, while misinformation continues to influence attitudes after correction. Broader implications of this work are discussed.
Book
Misinformation can be broadly defined as information that is inaccurate or false according to the best available evidence, or information whose validity cannot be verified. It is created and spread with or without clear intent to cause harm. There is well-documented evidence that misinformation persists despite fact-checking and the presentation of corrective information, often traveling faster and deeper than facts in the online environment. Drawing on the frameworks of social judgment theory, cognitive dissonance theory, and motivated information processing, the authors conceptualize corrective information as a generic type of counter-attitudinal message and misinformation as attitude-congruent messages. They then examine the persistence of misinformation through the lens of biased responses to attitude-inconsistent versus -consistent information. Psychological inoculation is proposed as a strategy to mitigate misinformation.
Article
Misinformation represents an evolutionary paradox: despite its harmful impact on society, it persists and evolves, thriving in the information-rich environment of the digital age. This paradox challenges the conventional expectation that detrimental entities should diminish over time. The persistence of misinformation, despite advancements in fact-checking and verification tools, suggests that it possesses adaptive qualities that enable it to survive and propagate. This paper explores how misinformation, as a blend of truth and fiction, continues to resonate with audiences. The role of narratives in human history, particularly in the evolution of Homo narrans, underscores the enduring influence of storytelling on cultural and social cohesion. Despite the increasing ability of individuals to verify the accuracy of sources, misinformation remains a significant challenge, often spreading rapidly through digital platforms. Current behavioral research tends to treat misinformation as completely irrrational, static, finite entities that can be definitively debunked, overlooking their dynamic and evolving nature. This approach limits our understanding of the behavioral and societal factors driving the transformation of misinformation over time. The persistence of misinformation can be attributed to several factors, including its role in fostering social cohesion, its perceived short-term benefits, and its use in strategic deception. Techniques such as extrapolation, intrapolation, deformation, cherry-picking, and fabrication contribute to the production and spread of misinformation. Understanding these processes and the evolutionary advantages they confer is crucial for developing effective strategies to counter misinformation. By promoting transparency, critical thinking, and accurate information, society can begin to address the root causes of misinformation and create a more resilient information environment.
Book
Full-text available
This book examines the shape, composition, and practices of the United States political media landscape. It explores the roots of the current epistemic crisis in political communication with a focus on the remarkable 2016 U.S. president election culminating in the victory of Donald Trump and the first year of his presidency. The authors present a detailed map of the American political media landscape based on the analysis of millions of stories and social media posts, revealing a highly polarized and asymmetric media ecosystem. Detailed case studies track the emergence and propagation of disinformation in the American public sphere that took advantage of structural weaknesses in the media institutions across the political spectrum. This book describes how the conservative faction led by Steve Bannon and funded by Robert Mercer was able to inject opposition research into the mainstream media agenda that left an unsubstantiated but indelible stain of corruption on the Clinton campaign. The authors also document how Fox News deflects negative coverage of President Trump and has promoted a series of exaggerated and fabricated counter narratives to defend the president against the damaging news coming out of the Mueller investigation. Based on an analysis of the actors that sought to influence political public discourse, this book argues that the current problems of media and democracy are not the result of Russian interference, behavioral microtargeting and algorithms on social media, political clickbait, hackers, sockpuppets, or trolls, but of asymmetric media structures decades in the making. The crisis is political, not technological.
Article
Alarmist narratives about online misinformation continue to gain traction despite evidence that its prevalence and impact are overstated. Drawing on research examining the use of big data in social science and reception studies, we identify six misconceptions about misinformation and highlight the conceptual and methodological challenges they raise. The first set of misconceptions concerns the prevalence and circulation of misinformation. First, scientists focus on social media because it is methodologically convenient, but misinformation is not just a social media problem. Second, the internet is not rife with misinformation or news, but with memes and entertaining content. Third, falsehoods do not spread faster than the truth; how we define (mis)information influences our results and their practical implications. The second set of misconceptions concerns the impact and the reception of misinformation. Fourth, people do not believe everything they see on the internet: the sheer volume of engagement should not be conflated with belief. Fifth, people are more likely to be uninformed than misinformed; surveys overestimate misperceptions and say little about the causal influence of misinformation. Sixth, the influence of misinformation on people’s behavior is overblown as misinformation often “preaches to the choir.” To appropriately understand and fight misinformation, future research needs to address these challenges.
Chapter
While there is overwhelming scientific agreement on climate change, the public has become polarized over fundamental questions such as human-caused global warming. Communication strategies to reduce polarization rarely address the underlying cause: ideologically-driven misinformation. In order to effectively counter misinformation campaigns, scientists, communicators, and educators need to understand the arguments and techniques in climate science denial, as well as adopt evidence-based approaches to neutralizing misinforming content. This chapter reviews analyses of climate misinformation, outlining a range of denialist arguments and fallacies. Identifying and deconstructing these different types of arguments is necessary to design appropriate interventions that effectively neutralize the misinformation. This chapter also reviews research into how to counter misinformation using communication interventions such as inoculation, educational approaches such as misconception-based learning, and the interdisciplinary combination of technology and psychology known as technocognition.
Chapter
New perspectives on the misinformation ecosystem that is the production and circulation of fake news. What is fake news? Is it an item on Breitbart, an article in The Onion, an outright falsehood disseminated via Russian bot, or a catchphrase used by a politician to discredit a story he doesn't like? This book examines the real fake news: the constant flow of purposefully crafted, sensational, emotionally charged, misleading or totally fabricated information that mimics the form of mainstream news. Rather than viewing fake news through a single lens, the book maps the various kinds of misinformation through several different disciplinary perspectives, taking into account the overlapping contexts of politics, technology, and journalism. The contributors consider topics including fake news as “disorganized” propaganda; folkloric falsehood in the “Pizzagate” conspiracy; native advertising as counterfeit news; the limitations of regulatory reform and technological solutionism; Reddit's enabling of fake news; the psychological mechanisms by which people make sense of information; and the evolution of fake news in America. A section on media hoaxes and satire features an oral history of and an interview with prankster-activists the Yes Men, famous for parodies that reveal hidden truths. Finally, contributors consider possible solutions to the complex problem of fake news—ways to mitigate its spread, to teach students to find factually accurate information, and to go beyond fact-checking. Contributors: Mark Andrejevic, Benjamin Burroughs, Nicholas Bowman, Mark Brewin, Elizabeth Cohen, Colin Doty, Dan Faltesek, Johan Farkas, Cherian George, Tarleton Gillespie, Dawn R. Gilpin, Gina Giotta, Theodore Glasser, Amanda Ann Klein, Paul Levinson, Adrienne Massanari, Sophia A. McClennen, Kembrew McLeod, Panagiotis Takis Metaxas, Paul Mihailidis, Benjamin Peters, Whitney Phillips, Victor Pickard, Danielle Polage, Stephanie Ricker Schulte, Leslie-Jean Thornton, Anita Varma, Claire Wardle, Melissa Zimdars, Sheng Zou
Article
Massive amounts of misinformation flood social media platforms such as Twitter and Facebook. Digital misinformation includes articles about hoaxes, conspiracy theories, fake news, and other misleading claims. This content has been alleged to disrupt the public debate, leading to questions about its impact on the real world. A number of research questions have been formulated around the ways misinformation spreads, who its main purveyors are, and whether fact-checking efforts can help mitigate its diffusion. Here we release a large longitudinal dataset from Twitter, consisting of retweeted messages with links to misinformation and fact-checking articles. These data have been collected using Hoaxy (hoaxy.iuni.iu.edu), an open social media analytics platform whose goal is to provide a comprehensive picture of how digital misinformation spreads and competes with fact-checking efforts. The released dataset contains over 20 million retweets, spanning the period from May 2016 to the end of 2017. We provide basic statistics about the data and the associated diffusion networks.
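To make the structure of such a retweet-level dataset concrete, the following minimal Python sketch shows how records of this kind could be turned into diffusion networks using pandas and networkx. The column names and toy values are illustrative assumptions, not the actual schema of the Hoaxy release.

# A minimal sketch (not the Hoaxy release format) of how a retweet-level
# dataset like the one described above could be turned into diffusion
# networks. Column names here are illustrative assumptions, not the
# dataset's actual schema.
import pandas as pd
import networkx as nx

# Hypothetical columns: who retweeted whom, and whether the shared link
# pointed to a misinformation story or a fact-checking article.
retweets = pd.DataFrame({
    "retweeter":       ["u2", "u3", "u4", "u5"],
    "original_poster": ["u1", "u1", "u2", "u4"],
    "link_type":       ["misinformation", "misinformation",
                        "fact_check", "misinformation"],
})

# Build one directed diffusion graph per link type: an edge A -> B means
# that B retweeted content originally posted by A.
graphs = {}
for link_type, group in retweets.groupby("link_type"):
    g = nx.DiGraph()
    g.add_edges_from(zip(group["original_poster"], group["retweeter"]))
    graphs[link_type] = g

# Compare the reach of the two kinds of cascades by counting the accounts
# and retweet edges involved in each graph.
for link_type, g in graphs.items():
    print(link_type, g.number_of_nodes(), "accounts,",
          g.number_of_edges(), "retweets")

In a dataset of this shape, comparing node and edge counts (or cascade depth) across the misinformation and fact-checking graphs is one simple way to quantify how the two compete, in the spirit of the analysis described above.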
Article
In this paper, we analyze the service providers that power 440 misinformation and hate sites, including hosting platforms, domain registrars, CDN providers, DDoS protection companies, advertising networks, donation processors, and e-mail providers. We find that several providers are disproportionately responsible for serving misinformation websites, most prominently Cloudflare. We further show that misinformation sites disproportionately rely on several popular ad networks and payment processors, including RevContent and Google DoubleClick. When misinformation websites are deplatformed by hosting providers, DDoS protection services, and registrars, sites nearly always resurface through alternative providers. However, anecdotally, we find that sites struggle to remain online when mainstream monetization channels are severed. We conclude with insights for infrastructure providers and researchers working to stem the spread of misinformation and hate content.
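As a rough illustration of the kind of infrastructure lookup described above, and not the authors' actual methodology, the short Python sketch below resolves a site's hosting IP and nameservers and applies a crude keyword heuristic to guess its DNS/CDN provider. It assumes the third-party dnspython package, and "example.org" is a placeholder rather than a site from the study.

# A rough sketch of one way to inspect the infrastructure behind a website:
# resolve its nameservers and hosting IP, then apply a crude keyword
# heuristic to guess the provider. Requires the third-party dnspython
# package; "example.org" is a placeholder domain.
import socket
import dns.resolver  # pip install dnspython

PROVIDER_KEYWORDS = {
    "cloudflare": "Cloudflare",
    "awsdns": "Amazon Route 53",
    "googledomains": "Google Domains",
}

def guess_dns_provider(domain: str) -> str:
    """Return a best-guess DNS/CDN provider based on nameserver hostnames."""
    try:
        answers = dns.resolver.resolve(domain, "NS")
    except Exception:
        return "lookup failed"
    nameservers = [str(rr.target).lower() for rr in answers]
    for ns in nameservers:
        for keyword, provider in PROVIDER_KEYWORDS.items():
            if keyword in ns:
                return provider
    return "unknown (" + ", ".join(nameservers) + ")"

if __name__ == "__main__":
    domain = "example.org"  # placeholder, not a site studied in the paper
    ip = socket.gethostbyname(domain)  # hosting/edge IP from the A record
    print(domain, "->", ip, "| DNS provider guess:", guess_dns_provider(domain))

A full study would of course combine many such signals (WHOIS registrar records, CDN IP ranges, ad-network and payment-processor tags embedded in page source) rather than nameserver keywords alone; the sketch only shows where such an analysis starts.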
Article
We examine an undercover SEC investigation into the manipulation of financial news on social media. While fraudulent news had a direct positive impact on retail trading and prices, revelation of the fraud by the SEC announcement resulted in significantly lower retail trading volume on all news, including legitimate news, on these platforms. For small firms, volume declined by 23.5% and price volatility dropped by 1.3%. We find evidence consistent with concerns of fraud causing the decline in trading activity and price volatility, which we interpret through the lens of social capital, and attempt to rule out alternative explanations. The results highlight the indirect consequences of fraud and its spillover effects that reduce the social network’s impact on information dissemination, especially for small, opaque firms.
Article
People may cling to false facts even in the face of updated and correct information. The present study confronted misconceptions about the measles, mumps, and rubella vaccine and a novel, fictitious Zika vaccine. Two experiments are reported, examining misconceptions as motivated by poor risk understanding (Experiment 1, N = 130) or by exposure to conspiracy theories (Experiment 2, N = 130). Each experiment featured a Misinformation condition, wherein participants were presented with fictitious stories containing some misinformation (Experiment 1) or rumours focused on conspiracy theories (Experiment 2) that were later retracted by public health experts, and a No misinformation condition containing no reference to misinformation or rumours. Across experiments, participants were more hesitant towards vaccines when exposed to stories including vaccine misinformation. Notwithstanding, our results suggest a positive impact of a trusted source communicating the scientific consensus about vaccines. The Zika virus represents a particular case showing how missing information can easily evolve into misinformation. Implications for effective dissemination of information are discussed.
Article
Scholarship on (mis)information does not easily translate into recommendations for policy-makers and policy influencers who wish to judge the accuracy of science-related truth claims. This is partly due to much of this literature being based on lab experiments with captive audiences that tell us little about the durability or scalability of any potential intervention in the real world. More importantly, the “accuracy” of scientific truth claims is much more difficult to define than many scholars in this space acknowledge. Uncertainties associated with the nature of science, sociopolitical climates, and media systems introduce compounding error in assessments of claim accuracy. We, therefore, need a more nuanced understanding of misinformation and disinformation than those often present in discussions of the “infodemic.” Here, we propose a new framework for evaluating science-related truth claims and apply it to real-world examples. We conclude by discussing implications for research and action on (mis)information, given that distinguishing between true and false claims is not as easy as it is sometimes purported to be.