Debunking Three Myths about Misinformation
Bertram Gawronski, Lea S. Nahon, Nyx L. Ng
University of Texas at Austin
Recent years have seen a surge in research on why people fall for misinformation and what could be done about it. Drawing
on a framework that conceptualizes truth judgments of true and false information as a signal-detection problem, the current
article identifies three inaccurate assumptions in the public and scientific discourse about misinformation: (1) people are bad
at discerning true from false information; (2) partisan bias is not a driving force in judgments of misinformation; (3) gullibility
to false information is the main factor underlying inaccurate beliefs. Counter to these assumptions, we argue that (1) people
are quite good at discerning true from false information; (2) partisan bias in responses to true and false information is
pervasive and strong; and (3) skepticism against belief-incongruent true information is much more pronounced than
gullibility to belief-congruent false information. These conclusions have significant implications for person-centered
misinformation interventions to tackle inaccurate beliefs.
Keywords: fake news; gullibility; misinformation; partisan bias; skepticism
While psychologists have studied effects of
misinformation for decades (for a review, see
Lewandowsky et al., 2012), recent years have seen a
surge in research on why people fall for misinformation
and what could be done about it (for reviews, see Ecker
et al., 2022; Pennycook & Rand, 2021a; Van der
Linden, 2022). Three assumptions in the public and
scientific discourse related to this work are that (1)
people are bad at discerning true from false
information, (2) partisan bias is not a driving force in
judgments of misinformation, and (3) gullibility to false
information is the main factor underlying inaccurate
beliefs. Drawing on a framework that conceptualizes
truth judgments of true and false information as a
signal-detection problem (Batailler et al., 2022), we
argue that the three assumptions are inconsistent with
the available evidence, which shows that (1) people are
quite good at discerning true from false information, (2)
partisan bias in responses to true and false information
is pervasive and strong, and (3) skepticism against true
information that is incongruent with one’s beliefs is
much more pronounced than gullibility to false
information that is congruent with one’s beliefs. In the
current article, we address each of these points and their
implications for person-centered misinformation
interventions to tackle inaccurate beliefs.
Judging True and False Information
A helpful framework to illustrate the key points of
our analysis is signal detection theory (SDT; Green &
Swets, 1966). While SDT was originally developed for
research on visual perception, its core concepts can be
applied to any decision problem involving binary
categorical judgments of two stimulus classes (e.g.,
judgments of true and false information as either true or
false). Based on SDT, a correct judgment of true
information as true can be described as a hit; a correct
judgment of false information as false can be described
as a correct rejection; an incorrect judgment of true
information as false can be described as a miss; and an
incorrect judgment of false information as true can be
described as a false alarm (Batailler et al., 2022). SDT
offers a nuanced statistical framework for analyzing
responses to true and false information. Yet, for the
purpose of the current analysis, the most significant
insight provided by SDT is conceptual, in that it
identifies two distinct factors underlying judgments of
true and false information as either true or false.
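For concreteness, this two-by-two taxonomy can be expressed in a few lines of code (a minimal sketch of our own; the category labels follow the text above):

```python
# Minimal sketch of the four SDT outcome categories for truth judgments,
# following the taxonomy in Batailler et al. (2022).

def classify(is_true: bool, judged_true: bool) -> str:
    """Map a (veracity, judgment) pair onto its SDT outcome category."""
    if is_true and judged_true:
        return "hit"                # true information judged as true
    if is_true and not judged_true:
        return "miss"               # true information judged as false
    if not is_true and judged_true:
        return "false alarm"        # false information judged as true
    return "correct rejection"      # false information judged as false

# Example: accepting a false headline as true is a false alarm.
print(classify(is_true=False, judged_true=True))  # -> "false alarm"
```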
The first factor, labeled sensitivity within SDT, refers
to how good people are at discerning true from false
information (Batailler et al., 2022). If a person is not
very good at distinguishing between true and false
information (see Figure 1, Panel A), this person would
mistakenly judge a lot of false information as true (i.e.,
high rate of false alarms) and mistakenly judge a lot of
true information as false (i.e., high rate of misses). In
contrast, a person who is relatively good at
distinguishing between true and false information (see
Figure 1, Panel B) would correctly judge a lot of false
information as false (i.e., high rate of correct rejections)
and correctly judge a lot of true information as true (i.e.,
high rate of hits).
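In the equal-variance Gaussian model of SDT (Green & Swets, 1966), sensitivity is conventionally estimated by the statistic d′, computed from the hit rate (HR) and the false-alarm rate (FAR). The formula below is the standard formalization, not one spelled out in the article:

$$d' = z(\mathrm{HR}) - z(\mathrm{FAR})$$

where z(·) denotes the inverse of the standard normal cumulative distribution function. A judge at chance (HR = FAR) has d′ = 0; larger values indicate better discernment of true from false information.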
The second factor, labeled threshold within SDT,
refers to people’s overall tendency to accept versus
reject information (Batailler et al., 2022). If a person has
a low threshold for judging information as true (see
Figure 2, Panel A), this person would correctly judge a
lot of true information as true (i.e., high rate of hits) but
also mistakenly judge a lot of false information as true
(i.e., high rate of false alarms). Conversely, if a person
has a high threshold for judging information as true (see
Figure 2, Panel B), this person would correctly judge a
lot of false information as false (i.e., high rate of correct
rejections) but also mistakenly judge a lot of true
information as false (i.e., high rate of misses).
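The threshold has an analogous conventional estimator, the criterion c, again a standard SDT formalization rather than a formula given in the article:

$$c = -\tfrac{1}{2}\left[z(\mathrm{HR}) + z(\mathrm{FAR})\right]$$

Positive values of c indicate a conservative (high) threshold that favors "false" judgments, negative values indicate a liberal (low) threshold that favors "true" judgments, and c = 0 indicates an unbiased judge.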
An important aspect of the distinction between
sensitivity and threshold is that the two factors are
independent. For example, a higher threshold is not
necessarily associated with higher sensitivity, because
a higher threshold leads to greater accuracy in
judgments of false information (i.e., more false
information is correctly judged as false) and, at the
same time, greater inaccuracy in judgments of true
information (i.e., more true information is incorrectly
judged as false). Conversely, a lower threshold is not
necessarily associated with lower sensitivity, because a
lower threshold leads to greater inaccuracy in
judgments of false information (i.e., more false
information is incorrectly judged as true) and, at the
same time, greater accuracy in judgments of true
information (i.e., more true information is correctly
judged as true). Thus, overall accuracy in distinguishing
between true and false information (i.e., sensitivity)
often remains unaffected by higher or lower thresholds,
in that greater accuracy in judgments of false
information is compensated by greater inaccuracy in
judgments of true information, or vice versa (see Figure
2). As we explain below, these considerations are
important when assessing the validity of the three
assumptions that (1) people are bad at discerning true
from false information, (2) partisan bias is not a driving
force in responses to misinformation, and (3) gullibility
to false information is the main factor underlying
inaccurate beliefs.
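The independence of the two factors is easy to verify numerically. The following sketch (our own illustration with arbitrary numbers) holds sensitivity fixed and varies only the threshold; the hit and false-alarm rates both shift, but d′ recovered from them is unchanged:

```python
# Demonstration that shifting the threshold leaves sensitivity untouched.
# Assumes the equal-variance Gaussian SDT model; numbers are arbitrary.
from scipy.stats import norm

d_prime = 1.5  # fixed sensitivity

for c in (-0.5, 0.0, 0.5):  # liberal, neutral, conservative thresholds
    hit_rate = norm.cdf(d_prime / 2 - c)   # P("true" | true information)
    fa_rate = norm.cdf(-d_prime / 2 - c)   # P("true" | false information)
    recovered = norm.ppf(hit_rate) - norm.ppf(fa_rate)
    print(f"c={c:+.1f}  HR={hit_rate:.2f}  FAR={fa_rate:.2f}  d'={recovered:.2f}")

# As c rises, acceptance of both true and false information falls,
# but the recovered d' stays at 1.50 throughout.
```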
Myth #1: People are bad at discerning true from
false information.
A central idea in the psychological literature is that
person-centered misinformation interventions require
approaches that effectively increase people’s ability to
discern true from false information (Guay et al., 2023).
That is, interventions should increase sensitivity. A tacit
assumption underlying this quest is that people are not
very good at discerning true from false information
(i.e., people show low sensitivity). However, many
experts on misinformation disagree with this
assumption (Altay et al., 2023a), and a closer look at
the available evidence suggests that it is incorrect (for a
meta-analysis, see Pfänder & Altay, in press). For
example, based on a review of data from more than
15,000 participants, Pennycook and Rand (2021b)
report an average difference of 2.9 standard deviations
in judgments of real versus fake news as true,
suggesting very high sensitivity in truth judgments of
real and fake news. To provide a frame of reference,
differences of that size are comparable to the
differences between Democrats and Republicans on
self-report measures of liberal versus conservative
political ideology (Gawronski et al., 2023). These
findings suggest that people are surprisingly good at
discerning true from false information.¹
¹ An exception is the identification of AI-generated deepfakes, where
people often show chance-level performance (Nieweglowska et al.,
2023). Different from deepfakes, the current analysis focuses on
verbal information involving propositions about states of affairs that
may be true or false.
An important qualification of this conclusion is that
it is specific to judgments of truth and does not
generalize to sharing decisions. For the latter,
sensitivity is typically close to chance level, in that
people are as likely to share false information as they
are to share true information (Gawronski et al., 2023).
The discrepancy between truth judgments and sharing
decisions suggests that, although people can distinguish
between true and false information with a high degree
of accuracy when they are directly asked to judge the
truth of information, they do not seem to utilize this
skill in decisions to share information, possibly because
they do not think much about truth when sharing
information online. In line with this idea, nudging
people to think about truth when making sharing
decisions has been found to be effective in improving
the quality of shared information (Pennycook et al.,
2020). Nevertheless, the finding that people are able to
distinguish between true and false information with a
high degree of accuracy contradicts the idea that people
are not very good at telling the difference between true
and false information.
Myth #2: Partisan bias is not a driving force in
judgments of misinformation.
Although people are quite good at discerning true
from false information, they are not perfect. People still
make errors when judging the veracity of true and false
information. The available evidence suggests that these
errors are highly systematic, in that (1) people are much
more likely to mistakenly judge false information as
true when it is congruent with their political views than
when it is incongruent, and (2) people are much more
likely to mistakenly judge true information as false
when it is incongruent with their political views than
when it is congruent. More generally, people have
different thresholds for belief-congruent and belief-
incongruent information, in that they tend to (1) accept
belief-congruent information as true regardless of
whether it is true or false and (2) reject belief-
incongruent information as false regardless of whether
it is true or false (Batailler et al., 2022; Gawronski,
2023; Nahon et al., 2024). Such partisan-bias effects in
thresholds have been found to be pervasive and strong
among both left-leaning and right-leaning participants,
with effect sizes that exceed the conventional
benchmark for a large effect.
While many experts believe that partisan bias is a
major factor underlying susceptibility to
misinformation (Altay et al., 2023a), there is an
influential narrative in the misinformation literature
claiming that partisan bias is not a driving force in
responses to misinformation (e.g., Pennycook & Rand,
2019, 2021a). How can these conflicting views be
reconciled? The answer to this question is that the
dismissal of partisan bias is based on a questionable
conceptualization of partisan bias in terms of sensitivity
rather than threshold. For example, Pennycook and
Rand (2021a) argued that partisan bias should lead to
lower truth discernment for belief-congruent compared
to belief-incongruent information. Yet, if anything, the
empirical evidence suggests the opposite, which led
them to dismiss partisan bias as a driving force in
responses to misinformation. However, within an SDT
framework, partisan bias is reflected in differential
thresholds rather than differential sensitivity; and
when conceptualized in terms of differential thresholds,
partisan-bias effects are large and pervasive.
To illustrate why a conceptualization of partisan bias
in terms of differential thresholds is more appropriate,
consider a hypothetical case where Participant A judges
70% of true information as true and 30% of false
information as true. Further imagine that Participant A
shows this pattern for information that is congruent
with their political views as well as for information that
is incongruent with their political views. Now imagine
another Participant B who judges 90% of true
information as true and 50% of false information as true
when the information is congruent with their political
views. Imagine further that Participant B judges 50% of
true information as true and 10% of false information
as true when the information is incongruent with their
political views. Based on a conceptualization of
partisan bias in terms of differential sensitivity (e.g.,
Pennycook & Rand, 2021a), neither Participant A nor
Participant B shows partisan bias, because both
participants have the same sensitivity for belief-
congruent and belief-incongruent information. For both
types of information, the two participants show a 40%
difference in the acceptance of true versus false
information.
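These numbers can be checked with the standard equal-variance SDT estimators (our own sketch; the article reports only the raw percentages). Both participants show identical sensitivity across the two congruence conditions, but Participant B's threshold, anticipating the point developed next, shifts sharply with belief congruence:

```python
# Worked SDT analysis of the hypothetical Participants A and B.
# Rates are the percentages given in the text; d' and c are the standard
# equal-variance Gaussian estimators of sensitivity and threshold.
from scipy.stats import norm

def d_prime(hr, far):
    return norm.ppf(hr) - norm.ppf(far)

def criterion(hr, far):
    return -0.5 * (norm.ppf(hr) + norm.ppf(far))

rates = {  # (hit rate, false-alarm rate)
    ("A", "congruent"):   (0.70, 0.30),
    ("A", "incongruent"): (0.70, 0.30),
    ("B", "congruent"):   (0.90, 0.50),
    ("B", "incongruent"): (0.50, 0.10),
}

for (who, condition), (hr, far) in rates.items():
    print(f"{who} {condition:11s}  d'={d_prime(hr, far):.2f}  "
          f"c={criterion(hr, far):+.2f}")

# A: d'=1.05, c=+0.00 in both conditions -> no partisan bias.
# B: d'=1.28 in both conditions, but c=-0.64 (congruent) versus
# c=+0.64 (incongruent) -> equal sensitivity, different thresholds.
```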
Yet, despite the identical sensitivities for belief-
congruent and belief-incongruent information,
Participant B clearly evaluates information in a manner
biased toward their political views, in that the
participant is much more likely to judge information as
true when it is congruent than when it is incongruent
with their political views (see Stanovich et al., 2013).
Within SDT, this tendency is reflected in different
thresholds, in that Participant B shows a lower
threshold for belief-congruent compared to belief-
incongruent information (Batailler et al., 2022).
Because a conceptualization of partisan bias in terms of
sensitivity misses this important difference, such a
conceptualization can lead to the mistaken conclusion
that partisan bias is not a driving force in responses to
misinformation (e.g., Pennycook & Rand, 2021a). Yet,
when partisan bias is conceptualized in terms of
differential thresholds, partisan bias is pervasive and
strong (Gawronski et al., 2023), even in studies that
have been claimed to show its irrelevance (see Batailler
et al., 2022; Gawronski, 2021). Although the
mechanisms underlying partisan bias in judgments of
true and false information are still unclear (see Ditto et
al., in press; Tappin et al., 2020), these considerations
suggest that (1) the proclaimed irrelevance of partisan
bias is based on a questionable conceptualization in
terms of differential sensitivities and (2) partisan bias in
responses to misinformation is pervasive and strong
when it is conceptualized in terms of differential
thresholds.
Myth #3: Gullibility to false information is the
main factor underlying inaccurate beliefs.
A conceptualization of partisan bias in terms of
differential thresholds includes two components: (1) a
tendency to judge belief-congruent information as true
regardless of whether it is true or false and (2) a
tendency to judge belief-incongruent information as
false regardless of whether it is true or false (Batailler
et al., 2022). The first component involves gullibility to
belief-congruent false information due to a low
threshold for belief-congruent information (sometimes
called confirmation bias; see Edwards & Smith, 1996);
the second component involves skepticism against
belief-incongruent true information due to a high
threshold for belief-incongruent information
(sometimes called disconfirmation bias; see Edwards &
Smith, 1996). In our work, we consistently found that
effect sizes for the second component are much larger
than effect sizes for the first component (Gawronski et
al., 2023; Nahon et al., 2024). A forthcoming meta-
analysis with data from 193,282 participants from 40
countries across 7 continents found the same (Pfänder
& Altay, in press).
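The two components can be separated empirically by comparing error rates across congruence conditions. The sketch below uses hypothetical rates (our own, not data from the cited studies); only the direction of the asymmetry, with the skepticism component larger than the gullibility component, mirrors the reported findings:

```python
# Decomposing partisan bias into gullibility and skepticism components.
# All error rates are hypothetical illustrations, not data from the
# cited studies; only the direction of the asymmetry follows the text.

false_alarm = {"congruent": 0.35, "incongruent": 0.20}  # false items judged true
miss = {"congruent": 0.15, "incongruent": 0.45}         # true items judged false

# Gullibility: extra acceptance of FALSE information when congruent.
gullibility = false_alarm["congruent"] - false_alarm["incongruent"]

# Skepticism: extra rejection of TRUE information when incongruent.
skepticism = miss["incongruent"] - miss["congruent"]

print(f"gullibility component: {gullibility:.2f}")  # 0.15
print(f"skepticism component:  {skepticism:.2f}")   # 0.30, twice as large
```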
The identified asymmetry is important because
inaccurate beliefs can be rooted in both gullibility to
false information and skepticism against true
information. Yet, traditional misinformation
interventions primarily aim at reducing gullibility to
false information, and reducing skepticism against true
information likely requires different types of
interventions (Altay et al., 2023b), especially when
this skepticism is selectively directed at belief-
incongruent information without being applied to
belief-congruent information (see Ditto & Lopez, 1992;
Taber & Lodge, 2006). An example illustrating the
significance of this issue involves the effects of
gamified inoculation interventions, which expose game
players to weak examples of misinformation and
forewarn them about the ways in which they might be
misled (for a review, see Van der Linden, 2024). While
gamified inoculation interventions have been found to
reduce incorrect judgments of false information as true,
a reanalysis of the available data using SDT suggests
that this effect is driven by increased thresholds, not
increased sensitivity (Modirrousta-Galian & Higham,
2023). That is, the interventions made participants more
likely to judge both false and true information as false,
but the interventions did not improve participants’
overall accuracy in discerning true from false
information. Importantly, if enhanced skepticism from
a misinformation intervention is selectively directed at
belief-incongruent information without being directed
at belief-congruent information, the intervention could
even have detrimental effects by exacerbating a major
source of inaccurate beliefs (i.e., the dismissal of belief-
incongruent true information as false). The broader
points are that (1) skepticism against belief-incongruent
true information is much stronger compared to
gullibility to belief-congruent false information and (2)
reducing skepticism against true information likely
requires different types of interventions than reducing
gullibility to false information (Altay et al., 2023b),
especially when skepticism is primarily directed against
belief-incongruent information.
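The reanalysis result can be illustrated with a stylized simulation (our own, with invented numbers): an intervention that merely raises the threshold reduces acceptance of false information, but at the cost of also reducing acceptance of true information, leaving discernment exactly where it was:

```python
# Stylized illustration of an intervention that raises the threshold
# without improving sensitivity, in the spirit of the Modirrousta-Galian
# and Higham (2023) reanalysis. All numbers are invented.
from scipy.stats import norm

d_prime = 1.0  # sensitivity, unchanged by the intervention

for label, c in (("pre-intervention ", 0.0), ("post-intervention", 0.5)):
    hr = norm.cdf(d_prime / 2 - c)    # true information accepted as true
    far = norm.cdf(-d_prime / 2 - c)  # false information accepted as true
    print(f"{label}: accepts {far:.0%} of false and {hr:.0%} of true items "
          f"(d' = {norm.ppf(hr) - norm.ppf(far):.2f})")

# Fewer false items are accepted after the intervention, but fewer true
# items are accepted too; d' is identical before and after.
```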
Conclusion
Decisions based on inaccurate beliefs can have
detrimental effects not only for the individual, but also
for society. The COVID-19 pandemic provides an
illustrative example of this issue (Van Bavel et al.,
2024). Yet, effective interventions to tackle inaccurate
beliefs also require an accurate understanding of why
people hold inaccurate beliefs. The current analysis
identified three inaccurate assumptions in the public
and scientific discourse about misinformation. Our
analysis suggests that: (1) counter to the idea that
people are bad at discerning true from false
information, people are quite good at discerning true
from false information; (2) counter to the claim that
partisan bias is not a driving force in judgments of
misinformation, partisan bias in truth judgments of true
and false information is pervasive and strong; and (3)
counter to the idea that gullibility to false information
is the main factor underlying inaccurate beliefs,
skepticism against belief-incongruent true information
is much more pronounced than gullibility to belief-
congruent false information. While extant interventions
may be helpful in tackling belief in false information,
different kinds of interventions are likely needed to
address the hitherto neglected problem that people
readily dismiss true information as false when it
conflicts with their views.
Author Note
This work was supported by the National Science
Foundation (BCS-2040684), the Swiss National
Science Foundation (P500PS_214298), and the John
Templeton Foundation. Any opinions, findings, and
conclusions or recommendations expressed in this
material are those of the authors and do not necessarily
reflect the views of the funding agencies.
Correspondence concerning this article should be
sent to: Bertram Gawronski, Department of
Psychology, University of Texas at Austin, 108 E Dean
Keeton A8000, Austin, TX 78712, USA, Email:
gawronski@utexas.edu
Recommended Readings
Batailler, C., Brannon, S. M., Teas, P. E., & Gawronski,
B. (2022). A signal detection approach to
understanding the identification of fake news.
Perspectives on Psychological Science, 17, 78-98.
[This article provides an introduction to the use of
Signal Detection Theory for research on the
psychology of misinformation, including reanalyses
of existing data to illustrate the value of Signal
Detection Theory.]
Ditto, P. H., Celniker, J. B., Spitz Siddiqi, S., Güngör,
M., & Relihan, D. P. (in press). Partisan bias in
political judgment. Annual Review of Psychology.
[This article provides a review of research on
partisan bias in political judgment, including a
detailed analysis of debates about the processes
underlying partisan bias.]
Ecker, U. K. H., Lewandowsky, S., Cook, J., Schmid,
P., Fazio, L. K., Brashier, N., Kendeou, P., Vraga, E.
K., & Amazeen, M. A. (2022). The psychological
drivers of misinformation belief and its resistance to
correction. Nature Reviews Psychology, 1, 13-29.
[This article provides a review of research on why
people fall for misinformation and why effects of
misinformation are often difficult to correct.]
Pfänder, J., & Altay, S. (in press). Spotting false news
and doubting true news: A meta-analysis of news
judgements. Nature Human Behaviour. [This article
provides a quantitative synthesis of studies that
investigated judgments of misinformation,
comprising data from 193,282 participants from 40
countries across 7 continents.]
References
Altay, S., Berriche, M., & Acerbi, A. (2023b).
Misinformation on misinformation: Conceptual and
methodological challenges. Social Media + Society,
9, Article 20563051221150412.
Altay, S., Berriche, M., Heuer, H., Farkas, J., & Rathje,
S. (2023a). A survey of expert views on
misinformation: Definitions, determinants, solutions,
and future of the field. Harvard Kennedy School
Misinformation Review, 4, Article 4.
Batailler, C., Brannon, S. M., Teas, P. E., & Gawronski,
B. (2022). A signal detection approach to
understanding the identification of fake news.
Perspectives on Psychological Science, 17, 78-98.
Ditto, P. H., Celniker, J. B., Spitz Siddiqi, S., Güngör,
M., & Relihan, D. P. (in press). Partisan bias in
political judgment. Annual Review of Psychology.
Ditto, P. H., & Lopez, D. F. (1992). Motivated
skepticism: Use of differential decision criteria for
preferred and nonpreferred conclusions. Journal of
Personality and Social Psychology, 63, 568-584.
Ecker, U. K. H., Lewandowsky, S., Cook, J., Schmid,
P., Fazio, L. K., Brashier, N., Kendeou, P., Vraga, E.
K., & Amazeen, M. A. (2022). The psychological
drivers of misinformation belief and its resistance to
correction. Nature Reviews Psychology, 1, 13-29.
Edwards, K., & Smith, E. E. (1996). A disconfirmation
bias in the evaluation of arguments. Journal of
Personality and Social Psychology, 71, 5-24.
Gawronski, B. (2021). Partisan bias in the identification
of fake news. Trends in Cognitive Sciences, 25, 723-
724.
Gawronski, B., Ng, N. L., & Luke, D. M. (2023). Truth
sensitivity and partisan bias in responses to
misinformation. Journal of Experimental
Psychology: General, 152, 2205-2236.
Green, D. M., & Swets, J. A. (1966). Signal detection
theory and psychophysics. New York: Wiley.
Guay, B., Berinsky, A. J., Pennycook, G., & Rand, D.
(2023). How to think about whether misinformation
interventions work. Nature Human Behaviour, 7,
1231-1233.
Lewandowsky, S., Ecker, U. K. H., Seifert, C. M.,
Schwarz, N., & Cook, J. (2012). Misinformation and
its correction: Continued influence and successful
debiasing. Psychological Science in the Public
Interest, 13, 106-131.
Modirrousta-Galian, A., & Higham, P. A. (2023).
Gamified inoculation interventions do not improve
discrimination between true and fake news:
Reanalyzing existing research with receiver
operating characteristic analysis. Journal of
Experimental Psychology: General, 152, 2411-2437.
Nahon, L. S., Ng, N. L., & Gawronski, B. (2024).
Susceptibility to misinformation about COVID-19
vaccines: A signal detection analysis. Journal of
Experimental Social Psychology, 114, Article
104632.
Nieweglowska, M., Stellato, C., & Sloman, S. A.
(2023). Deepfakes: Vehicles for radicalization, not
persuasion. Current Directions in Psychological
Science, 32, 236-241.
Pennycook, G., McPhetres, J., Zhang, Y., Lu, J. G., &
Rand, D. G. (2020). Fighting COVID-19
misinformation on social media: Experimental
evidence for a scalable accuracy-nudge intervention.
Psychological Science, 31, 770-780.
Pennycook, G., & Rand, D. G. (2019). Lazy, not biased:
Susceptibility to partisan fake news is better
explained by lack of reasoning than by motivated
reasoning. Cognition, 188, 39-50.
Pennycook, G., & Rand, D. G. (2021a). The psychology
of fake news. Trends in Cognitive Sciences, 25, 388-
402.
Pennycook, G., & Rand, D. G. (2021b). Lack of
partisan bias in the identification of fake (versus real)
news. Trends in Cognitive Sciences, 25, 725-726.
Pfänder, J., & Altay, S. (in press). Spotting false news
and doubting true news: A meta-analysis of news
judgements. Nature Human Behaviour.
Stanovich, K. E., West, R. F., & Toplak, M. E. (2013).
Myside bias, rational thinking, and intelligence.
Current Directions in Psychological Science, 22,
259-264.
Taber, C. S., & Lodge, M. (2006). Motivated
skepticism in the evaluation of political beliefs.
American Journal of Political Science, 50, 755-769.
Tappin, B. M., Pennycook, G., & Rand, D. G. (2020).
Thinking clearly about causal inferences of
politically motivated reasoning: Why paradigmatic
study designs often undermine causal
inference. Current Opinion in Behavioral
Sciences, 34, 81-87.
Van Bavel, J. J., Pretus, C., Rathje, S., Pärnamets, P.,
Vlasceanu, M., & Knowles, E. D. (2024). The costs
of polarizing a pandemic: Antecedents,
consequences, and lessons. Perspectives on
Psychological Science, 19, 624-639.
Van der Linden, S. (2022). Misinformation:
Susceptibility, spread, and interventions to immunize
the public. Nature Medicine, 28, 460-467.
Van der Linden, S. (2024). Countering misinformation
through psychological inoculation. Advances in
Experimental Social Psychology, 69, 1-58.
Figure 1. Graphical depiction of sensitivity within Signal Detection Theory, reflecting the distance between distributions of judgments about true and false information along the dimension of perceived veracity. Distributions that are closer together along the perceived-veracity dimension indicate lower sensitivity, meaning that a participant's ability to correctly discriminate between true and false information is relatively low (upper panel). Distributions that are further apart along the perceived-veracity dimension indicate higher sensitivity, meaning that a participant's ability to correctly discriminate between true and false information is relatively high (lower panel).
Figure 2. Graphical depiction of threshold within Signal Detection Theory, reflecting the point along the dimension of perceived veracity at which a participant's judgment switches from false to true. The threshold indicates the degree of veracity the participant must perceive before judging information as true: any stimulus with greater perceived veracity than that value will be judged as true, whereas any stimulus with lower perceived veracity will be judged as false. A low threshold indicates that a participant is generally more likely to judge information as true (upper panel), whereas a high threshold indicates that a participant is generally less likely to judge information as true (lower panel).