Overconﬁdence in News Judgments
Is Associated with False News Susceptibility†
Benjamin A. Lyons*a, Jacob M. Montgomeryb, Andrew M. Guessc, Brendan
Nyhand, and Jason Reiﬂere
aDepartment of Communication, University of Utah, Salt Lake City, UT 84112
bDepartment of Political Science, Washington University in St. Louis, St. Louis, MO 63130
cDepartment of Politics, Princeton University, Princeton, NJ 08544
dDepartment of Government, Dartmouth College, Hanover, NH 03755
eDepartment of Politics, University of Exeter, Exeter EX4 4RJ, United Kingdom
We examine the role of overconﬁdence in news judgment using two large nationally represen-
tative survey samples. First, we show that three in four Americans overestimate their relative
ability to distinguish between legitimate and false news headlines; respondents place themselves
22 percentiles higher than warranted on average. This overconﬁdence is in turn correlated with
consequential diﬀerences in real-world beliefs and behavior. We show that overconﬁdent indi-
viduals are more likely to visit untrustworthy websites in behavioral data; to fail to successfully
distinguish between true and false claims about current events in survey questions; and to report
greater willingness to like or share false content on social media, especially when it is polit-
ically congenial. In all, these results paint a worrying picture: the individuals who are least
equipped to identify false news content are also the least aware of their own limitations and
therefore more susceptible to believing it and spreading it further.
Published version available at: http://doi.org/10.1073/pnas.2019527118
Author contributions: BL, JM, AG, BN, and JR designed the research. BL conducted the analysis.
BL and JM wrote the paper. BL, JM, AG, BN, and JR revised the paper. To whom correspondence
should be addressed. E-mail: firstname.lastname@example.org
†We thank Democracy Fund, the European Research Council (ERC) under the European Union’s Horizon 2020
research and innovation programme (grant agreement No. 682758), the Nelson A. Rockefeller Center at Dartmouth
College, and the Weidenbaum Center on the Economy, Government, and Public Policy at Washington University in St.
Louis for funding support; Rick Perloﬀ, Ye Sun, Matt Motta, Mike Wagner, and seminar participants at the University
of Gothenburg for helpful comments; and Sam Luks and Marissa Shih at YouGov for survey assistance.
Concern about public susceptibility to false news is widespread. However, though Americans
believe confusion caused by false news is extensive, relatively few indicate having seen or shared
it (1) — a discrepancy that suggests members of the public may not only have a hard time
identifying false news but also fail to recognize their own deficiencies at doing so (2; 3; 4; 5). Such over-
confidence may make individuals more likely to inadvertently expose themselves to misinformation
and to participate in its spread. If people incorrectly see themselves as highly skilled at identifying
false news, they may unwittingly be more likely to consume, believe, and share it, especially if it
conforms to their worldview.
Overconﬁdence plays a key role in shaping behavior, at least in some domains (e.g., 6; 7; 8;
9; 10). However, we know very little about its potential role in the spread of false news. Even
basic descriptive data on the phenomenon of overconﬁdence in news discernment (the ability to
distinguish false from legitimate news) has yet to be established. How pervasive is overconfidence?
Is overconﬁdence related to false news exposure? Are overconﬁdent individuals actually more likely
to hold misperceptions or share false stories? We currently lack answers to these questions.
In this paper, we examine the relationship between perceived and actual ability to distinguish
between false and legitimate information, drawing on a theoretical framework for understanding
biased self-perception (4). In two large nationally representative samples (N=8,285), respondents
completed a novel discernment task evaluating the accuracy of a series of headlines as they appear
on Facebook. They were further asked to rate their own abilities in discerning false news content
relative to others. We use these two measures to assess overconﬁdence among respondents and how
it is related to beliefs and behaviors.
Our results paint a worrying picture. The vast majority of respondents (about 90%) reported
that they are above average in their ability to discern false and legitimate news headlines, and many
Americans substantially overestimate their abilities. Accordingly, people’s self-perceptions are only
weakly correlated with actual performance. Further, using data measuring respondents’ online be-
havior, we show that those who overrate their ability more frequently visit websites known to spread
false or misleading news. These overconﬁdent respondents are also less able to distinguish between
true and false claims about current events and report higher willingness to share false content, es-
pecially when it aligns with respondents’ political predispositions. Although discernment ability
is a strong predictor of these outcomes, an alternative analysis shows that a “residualized” measure
of overconfidence net of actual ability explains additional variance in these behaviors.
In the next section, we review existing research on overconﬁdence and how we expect it to
operate for news discernment. Our methods section describes our research design, including a
novel task assessing respondents’ news discernment abilities. Our results show that overconﬁdence
is both common and associated with a range of undesirable media-related behaviors. Although our
design does not allow us to identify the causal eﬀect of overconﬁdence, these ﬁndings suggest that
the mismatch between people’s perceived and actual ability to spot false stories may
play an important and previously unrecognized role in the spread of false information online.
Who spreads false news?
Which individuals are more likely to engage with, believe, and spread dubious news? One body of
research emphasizes the role of partisan predispositions or motivated reasoning in the assessment
of news content (11) and exposure to it and sharing of it (12; 13). A second literature consid-
ers how improving individuals’ information evaluation and digital literacy skills can reduce their
vulnerability to false information online (14; 15). Finally, other studies focus on the role of pur-
poseful reasoning processes in reducing individual vulnerability to misinformation. People who
think more analytically or are more deliberative in their evaluation of news claims rate false news
as less accurate (16; 17). Conversely, people who tend to rely on emotion as they process informa-
tion or wrongly claim familiarity with nonexistent entities are more likely to see false headlines as
accurate (18; 19).
Our research builds on cognitive style accounts by examining the disparity between people’s
ability to spot false news and their beliefs about their skill in doing so. This approach is intended to
assess the contribution of cognition as well as metacognition to engagement behaviors. As we argue
below, overconﬁdence in one’s ability to distinguish between legitimate and false news may help
account for whether and how individuals engage with false or dubious online content (e.g., liking
or sharing). To put the point more directly, some portion of the public is likely to be especially
vulnerable to false information precisely because they do not realize that they are, in fact, vulnerable
to false information. As a result, these individuals may be more likely to unknowingly consume,
believe, and share false news.
The Dunning-Kruger eﬀect for news discernment
Building on prior studies of perceptual bias in self-assessments, we test for a Dunning-Kruger eﬀect
(DKE) in false news discernment. The DKE describes a general tendency of poor performers in
social and intellectual domains to be unaware of their own deﬁciency (4). By contrast, the most
competent performers slightly underestimate their own ability relative to others due to a form of
false consensus eﬀect in which they assume others are performing more similarly to themselves
than they really are (20). This pattern arises whether researchers elicit comparative self-evaluations
(ratings of performance relative to peers) or self-evaluations using absolute scales (5).
DKE research contends that poor performers suﬀer from a double bind: not only does a lack of
expertise produce errors in the ﬁrst place, it also prevents recognition of these errors and awareness
of others’ capabilities. In studies of perception and performance, people in the bottom quartile
of performers have tended to provide the most upwardly distorted self-perceptions. For instance,
Anson (21) finds that individuals who perform worst on a quiz measuring basic political knowledge
rate their own performance as high as or even higher than high performers do.
The reported overconﬁdence of underperformers is not erased by ﬁnancial or social incentives
(6) and is corroborated by real-world behavior (e.g., in (not) selecting insurance for exam perfor-
mance (7)). These studies suggest that low performers genuinely believe in their own abilities and
are not simply making face-saving expressions of self-worth. Further, past research shows that
overconﬁdence is more common when people have reason to see themselves as knowledgeable or
competent — i.e., if the subject is not arcane and is prevalent in everyday life (5). Given the
familiarity of news as a domain, judgments of news accuracy are likely to fit the DKE pattern, as
does knowledge about politics (21) or vaccines (22). We therefore propose the following research question:1
Research Question 1: To what extent will people who are least accurate at distinguish-
ing between legitimate and false news overrate their ability to distinguish mainstream
from false news?
Importantly, the DKE predicts that low performers will not recognize how poorly they per-
formed in relative terms, not that low performers will think they perform best. We therefore do
not expect that low performers will think they are the best at our task of distinguishing between
legitimate and false news. Instead, we will examine the extent to which poor performers do not
recognize that they are worse than most others at the task.
Does overconﬁdence matter?
Importantly, the DKE may have downstream eﬀects on behavior. Because overconﬁdent individuals
fail to recognize their own poor performance, they are less able to improve their domain-speciﬁc
skills. For instance, several studies ﬁnd that overconﬁdent individuals learn the least in classroom
settings (24).2 We therefore expect that overconfidence in news discernment will be associated with
a variety of tendencies, including exposure to false news, belief in its accuracy, and sharing it with others.
To begin, we expect a positive association between overconﬁdence and visits to false news web-
sites. The DKE implies less ability to discern which news stories are false when an individual is
exposed (e.g., on a social media platform), combined with lesser awareness of this discernment
1We ﬁled a preregistration for this project prior to accessing the data. We report a “populated pre-analysis plan”
(23) that details our preregistered hypotheses and analysis plan and identiﬁes which main text ﬁndings are preregistered
in SI Appendix E.
2A related literature details the confidence with which individuals hold political misperceptions (25; 26; 27). This
work shows that many people are somewhat aware of their ignorance and therefore many misperceptions are not
conﬁdently held (27), and these individuals are more likely than the conﬁdently wrong to update their beliefs in response
to corrections (26).
deﬁciency, which would lead to greater incidental exposure to false news stories. Similarly, over-
conﬁdence may be seen as a form of invulnerability bias in which assumed mastery leads people
to feel little need to take preventative actions (e.g., to be cautious or engage in deliberate think-
ing about which sites one visits), which may produce additional exposure to questionable media
messages (28; 29). We therefore propose the following research question:
Research Question 2: Is overconﬁdence in one’s ability to distinguish mainstream from
false news positively related to false news exposure?
In addition, overconﬁdence may make people less likely to question a dubious news story’s
veracity, as high conﬁdence is associated with less reﬂection (30; 31). As a result, people who are
overconﬁdent may be more willing to accept false claims and to engage with false content in the form
of liking or sharing these stories (similarly, recent work suggests people generally lack awareness of
their susceptibility to inaccurate “general knowledge” claims they come across when reading works
of ﬁction (32)). Further, previous research indicates that individuals are generally more likely to
believe false claims when they are consistent with their own prior political beliefs (33). Therefore,
we would expect that the relationship between overconﬁdence and beliefs and engagement will be
strongest when the content involved aligns with respondents’ partisan preferences.
Research Question 3: (a) Is overconﬁdence positively related to holding mispercep-
tions on specific topics? (b) Is this relationship stronger when the claim is politically congenial?
Research Question 4: (a) Is overconﬁdence positively related to self-reported willing-
ness to like or share false content? (b) Is this relationship stronger when the claim is politically
congenial?

Results

We first describe the DKE in our data. We divide the sample into four quartiles based on respon-
dents’ actual performance in our discernment task. For each of these four groups, we calculate
the mean score for both actual and perceived ability (percentiles ranging from 1–100), which we
present in Figure 1. As expected, actual performance closely tracks the idealized 45-degree line
when we plot the mean performance score in each quartile. However, for perceived ability, we see
a much ﬂatter line. Perceived ability increases modestly across our measure of actual ability. The
mean self-reported percentile for individuals in the bottom quartile in actual ability (i.e., the 1st–
25th percentile) is 63 in the Oct./Nov. survey and 64 in the Nov./Dec. survey. This quantity rises to
only 74 for the top quartile in both surveys. In other words, those who are in the bottom quartile in
actual performance rate themselves as being in about the 63rd/64th percentile, a vast overestimate
of their own performance. While those in the top quartile of actual performance rate their perceived
ability higher than those in the bottom quartile do, they underestimate where they rank in actual performance.
In general, performance is only weakly associated with perceived ability (Oct./Nov.: r = .08;
Nov./Dec.: r = .10), as shown in SI Appendix Figure B3. Moreover, the average self-reported percentile
(69th) is well above 50 (one-sample t-test p < .005), indicating that many people are overconfident.
As Figure 1 illustrates, this overconﬁdence is concentrated most heavily among individuals in the
bottom quartile. That is, the individuals whose performance is objectively at the lowest level are
the most overconﬁdent in their abilities.
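The quartile comparison underlying Figure 1 can be sketched in a few lines. This is an illustrative Python example using simulated data, not the survey data; the variable names and data-generating values are assumptions for demonstration only:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative synthetic data: actual and perceived percentile ranks for n
# respondents, with perceived ability only weakly tied to actual ability.
n = 4000
actual = rng.uniform(1, 100, n)                                  # actual-ability percentile
perceived = np.clip(60 + 0.12 * actual + rng.normal(0, 15, n), 1, 100)

# Assign quartile groups (1-4) by actual performance, as in Figure 1.
quartile = np.digitize(actual, np.percentile(actual, [25, 50, 75])) + 1

# Mean actual and perceived percentile within each quartile group.
mean_actual = np.array([actual[quartile == q].mean() for q in (1, 2, 3, 4)])
mean_perceived = np.array([perceived[quartile == q].mean() for q in (1, 2, 3, 4)])

# The DKE signature: perceived ability is much flatter across quartiles than
# actual ability, so the bottom quartile is the most overconfident.
overconfidence_by_quartile = mean_perceived - mean_actual
```

Plotting `mean_actual` and `mean_perceived` against quartile reproduces the qualitative pattern in Figure 1: a steep actual-performance line and a much flatter perceived-ability line.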
In line with prior work, male respondents display more overconﬁdence (7; 8; 42), and overcon-
ﬁdence is negatively associated with general political knowledge. There is no association with age
(43) despite age-based disparities in exposure to false news (12; 13). Finally, Republicans are more
overconfident than Democrats (44), which is not surprising given the lower levels of media trust
they report (see SI Appendix C, which shows that mass media trust and media affect are both nega-
tively associated with overconfidence). We report pre-registered analyses regarding demographics
in greater depth in a separate manuscript (see SI Appendix E for details).

Figure 1: Perceived false news detection ability for respondents grouped by actual performance.
(a) Oct./Nov. 2018; (b) Nov./Dec. 2018.
Notes: Gaps depict miscalibration between actual and self-assessed percentile of performance for quartile groups based
on actual performance, with 95% CIs (note: CIs are smaller than the markers for actual performance and thus not
visible). Oct./Nov. N = 2,855; Nov./Dec. N = 4,150.

3Critics of DKE analyses like the one presented in Figure 1 argue that it does not reflect the proposed mechanism —
metacognitive differences (i.e., perception accuracy) between high and low performers — and is instead the result of
systematic bias or measurement error; e.g., regression to the mean and the better-than-average effect (34; 35). Proposed
alternative accounts of the DKE have led to vigorous theoretical and empirical debates (5; 6; 36; 37; 38; 39; 40;
41). Though no consensus has emerged, recent work suggests that metacognitive differences, general biases in self-
estimation, and statistical artefacts each contribute to the DKE (41).
False news exposure
We next examine whether visits to false news websites are associated with overconﬁdence. Build-
ing on prior research examining the diﬀerence between subjective self-perceptions and objective
performance (8; 9; 10; 45; 46), we measure this concept as the diﬀerence between self-reported
relative performance and our objective measure of relative performance. As Parker and Stone (47)
argue, the diﬀerence score measure we employ here is appropriate when the theoretical mechanism
of interest is overconﬁdence rather than self-assessed ability per se (i.e., controlling for ability).
We are interested in the miscalibration between these components because the DKE relies on the
double bind of low ability paired with a lack of awareness.
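As a sketch of this construction, the following Python snippet builds the difference score from hypothetical raw task scores and self-placements; the rank-based conversion to percentiles and all variable names are assumptions for illustration, not the paper's exact procedure:

```python
import numpy as np

def percentile_rank(scores: np.ndarray) -> np.ndarray:
    """Convert raw scores to percentile ranks on a 1-100 scale (ties broken arbitrarily)."""
    order = scores.argsort().argsort()           # 0-based ranks
    return 1 + 99 * order / (len(scores) - 1)    # rescale to [1, 100]

# Illustrative data: raw discernment-task scores and self-rated percentiles.
rng = np.random.default_rng(1)
task_score = rng.integers(0, 13, 500).astype(float)   # e.g., a 12-item task
self_rating = rng.uniform(1, 100, 500)                # self-placed percentile

# Overconfidence = perceived relative performance minus actual relative
# performance, a difference score as in Parker and Stone (47).
actual_pct = percentile_rank(task_score)
overconfidence = self_rating - actual_pct             # ranges over [-99, 99]

# Re-scaled so the maximum magnitude is 1, aiding interpretation in regressions.
overconfidence_scaled = overconfidence / 99
```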
We estimate ordinary least squares models using binary measures of false and mainstream news
exposure for both surveys (Oct./Nov. and Nov./Dec.). In each of these models, which are estimated
using survey weights, we include a set of standard covariates as well as a measure of the ideological
orientation of respondents’ news diet. Finally, we re-scale our measure of overconﬁdence to range
from -1 to 1 rather than from -100 to 100 to aid in interpretation. Results are shown in Table 1
and Figure 2. The baseline exposure rate to false news in the Oct./Nov. survey was 6.5%. We
find that overconfidence is associated with greater rates of exposure in that survey (β = .06, SE =
.02, p < .01). Specifically, respondents at the 95th percentile of overconfidence were about six
percentage points more likely to have been exposed to false news in the post-survey period than
those at the 5th percentile, conditional on demographics. Similarly, those at the maximum value of
overconﬁdence were about 11 percentage points more likely to have been exposed than those at the
minimum. The relationship is not statistically signiﬁcant for the Nov./Dec. model, but the sample
size for that survey is signiﬁcantly reduced (n=767). When we instead pool the data, the results
are nearly identical to the results for the Oct./Nov. survey (see SI Appendix Table F1 for results
from logit models, which are substantively identical). One concern is that overconﬁdent individuals
may simply be more (or less) likely to visit online news websites in general. To test for this, we also
estimate identical regressions with mainstream news exposure as the dependent variable. We ﬁnd
that overconﬁdence is not associated with our binary measure of mainstream news exposure after
accounting for demographics.
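The magnitudes reported above follow from simple arithmetic on the re-scaled measure. A minimal check (treating a one-unit 5th-95th percentile span of observed overconfidence as an assumption for illustration):

```python
# Back-of-the-envelope check on interpreting the exposure model coefficient.
beta = 0.0609                          # OLS coefficient on overconfidence (Table 1)

# Effect across the full range of the re-scaled measure (-1 to 1);
# roughly 12 points, in line with the ~11 points reported for max vs. min.
full_range_effect = beta * (1 - (-1))

# If the 5th-95th percentile span of observed overconfidence is about one unit
# on the re-scaled measure (an illustrative assumption), the implied gap in
# exposure probability is about six percentage points.
implied_gap = beta * 1.0
```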
Topical misperceptions

Next, we examine the association between overconfidence and ability to distinguish between true
and false claims about political events that were topical at the time the surveys were ﬁelded. Here we
examine a misperceptions battery from the October/November survey measuring beliefs in claims
related to Brett Kavanaugh’s Supreme Court nomination. These regression models are again esti-
mated using survey weights and our set of standard covariates. We also again re-scale our measure
Figure 2: Overconfidence and news exposure
[Two panels of predictive margins over overconfidence (-1 to 1): false news exposure and mainstream news exposure.]
Notes: Predictive margins with 95% confidence intervals, based on the full model, all other variables held constant. The
overconfidence measure subtracts respondents’ actual percentile from their self-rated percentile and is re-scaled to
range from -1 to 1. False news exposure is coded as 1 if the respondent visited any such domain and 0 otherwise.
Mainstream news exposure is coded as 1 if the respondent visited any such domain and 0 otherwise. Data come from
the pooled model (N = 2,547), which pools data from the Oct./Nov. (N = 1,780) and Nov./Dec. (N = 767) surveys.
Table 1: Overconfidence and news exposure (binary measures)

                       Oct./Nov.              Nov./Dec.              Pooled
                   False      Mainstream   False      Mainstream   False      Mainstream
Overconfidence     0.0609**   -0.0450      0.0003     -0.0007      0.0569***  -0.0415
                   (0.0231)   (0.0505)     (0.0003)   (0.0006)     (0.0186)   (0.0411)
Constant           -0.0815*   0.4645***    -0.0225    0.3735***    -0.0715*   0.4419***
                   (0.0354)   (0.1010)     (0.0498)   (0.1199)     (0.0298)   (0.0799)
Control variables  X          X            X          X            X          X
R²                 0.17       0.11         0.08       0.17         0.11       0.11
N                  1780       1780         767        767          2547       2547
*p<.05, ** p<.01, *** p<.005 (two-sided). Cell entries are OLS coeﬃcients estimated using survey weights. The
overconﬁdence measure subtracts the respondent’s actual percentile from their self-rated percentile and is re-scaled
to range from -1 to 1. False news exposure is coded as 1 if the respondent visited any such domain and 0 otherwise.
Mainstream news exposure is coded as 1 if the respondent visited any such domain and 0 otherwise. All models include
controls for Democrat, Republican, college education, gender, nonwhite racial background, age, and media diet slant.
Table 2: Overconfidence and topical misperceptions

                                 False       Difference score
Overconfidence                   0.1123      -0.3667***
Overconfidence × congeniality    -0.0390
Constant                         1.9550***   -0.2223*
Control variables                X           X
Statement fixed effects          X
N (statement)                    4872
N (respondent)                   2444        2904
*p<.05, ** p<.01, *** p<.005 (two-sided). Cell entries are OLS coeﬃcients. Respondents rated the accuracy of
four statements regarding the Kavanaugh appointment on four-point scales. The ﬁrst model’s outcome variable is per-
ceived accuracy of false statements only. The second model’s outcome variable is the diﬀerence in the mean perceived
accuracy of true and false statements. The overconﬁdence measure subtracts the respondent’s actual percentile from
their self-rated percentile, and is re-scaled to range from -1 to 1. Controls: Democrat, Republican, college education,
gender, nonwhite racial background, and age.
of overconﬁdence to range from -1 to 1 rather than from -100 to 100 to aid in interpretation. The
main results of this analysis are shown in Table 2. The ﬁrst column shows the results for the two
false statements provided to respondents. We include ﬁxed eﬀects for each statement to account
for their diﬀering baseline levels of plausibility and cluster at the respondent level to account for
correlations between their ratings across headlines. We ﬁnd no main eﬀect of overconﬁdence on
belief in these false claims in isolation (and no evidence that this relationship is moderated by
congeniality). The second column, however, shows results for diﬀerence scores (discernment),
which subtracts the perceived accuracy of false claims from that of true claims. Higher discern-
ment scores reﬂect greater belief in true statements relative to false ones. The negative coeﬃcient
(β = −.37, SE = .06, p < .005) thus indicates that overconfidence is negatively associated with
discernment ability on these topical claims.
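The discernment difference score used here can be sketched as follows. The ratings are simulated; only the battery's structure (two true and two false statements rated on four-point scales) is taken from the text:

```python
import numpy as np

# Illustrative ratings: each respondent rates two true and two false statements
# on 1-4 accuracy scales (statement counts follow the Kavanaugh battery).
rng = np.random.default_rng(2)
true_ratings = rng.integers(1, 5, size=(1000, 2))    # perceived accuracy of true claims
false_ratings = rng.integers(1, 5, size=(1000, 2))   # perceived accuracy of false claims

# Discernment difference score: mean perceived accuracy of true statements
# minus mean perceived accuracy of false statements; higher = better discernment.
discernment = true_ratings.mean(axis=1) - false_ratings.mean(axis=1)
```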
Engagement intention

We next turn to our measure of self-reported engagement intention (intent to like or share a post
on social media). These regression models again include a set of standard covariates, and we re-
scale our measure of overconﬁdence to range from -1 to 1 rather than from -100 to 100, to aid in
interpretation. We use survey weights in all models. These results are shown in Table 3. The ﬁrst
and third columns show results at the headline level, where the outcome is a four-point scale of
intention to either share or like a false story. Overconﬁdence has a clear positive relationship with
liking or sharing false stories (RQ4a). Moreover, this relationship varies as a function of partisan
congeniality (RQ4b). Likewise, the second and fourth columns of Table 3 use the average diﬀerence
in engagement intention across true and false headlines as a measure of discernment. The results
show that overconﬁdence is negatively related to discernment in which stories respondents would
engage with. Overconﬁdent individuals are thus not merely more likely to engage with news content
in general, but instead are speciﬁcally more inclined to share false stories versus mainstream ones
relative to respondents who are less overconﬁdent.
Alternative specifications

We construct our primary independent variable above as: Overconfidence = (Perceived ability −
Actual ability). We view this measurement strategy as appropriate for two reasons. First, this ap-
proach is consistent with how overconﬁdence has been measured in related studies of its behavioral
eﬀects (8; 9; 10; 45; 46). Second, and more importantly, our theory is explicitly about the diﬀerence
between perceived and actual ability and not about the independent role of either component. Thus,
the main regressions of interest are (broadly) structured as:
y_i = γ_0 + γ_1 (Perceived ability − Actual ability) + ε_i,   (1)

where y_i represents the outcome of interest and ε_i is our error term.
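A minimal sketch of this specification on simulated data (the coefficient values and noise level are illustrative assumptions, not estimates from the paper):

```python
import numpy as np

def ols(y, X):
    """OLS coefficients via least squares; X should include an intercept column."""
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

# Simulated example of the main specification:
#   y_i = gamma_0 + gamma_1 * (perceived_i - actual_i) + eps_i
rng = np.random.default_rng(3)
n = 5000
perceived = rng.uniform(-1, 1, n)
actual = rng.uniform(-1, 1, n)
overconfidence = perceived - actual
y = 0.5 + 0.3 * overconfidence + rng.normal(0, 0.5, n)

X = np.column_stack([np.ones(n), overconfidence])
gamma0, gamma1 = ols(y, X)   # recovers roughly 0.5 and 0.3
```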
Table 3: Overconfidence and engagement intention

                        False       Diff. score   False       Diff. score
Overconfidence          0.6690***   -0.4051***    0.7610***   -0.3984***
                        (0.0691)    (0.0193)      (0.0466)    (0.0283)
Congeniality            0.1893***                 0.1902***
Overconf × congenial    0.2394***                 0.2692***
Constant                1.0620***   0.0520        1.1020***   0.0600
                        (0.0242)    (0.0270)      (0.0139)    (0.0369)
Control variables       X           X             X           X
Headline fixed effects  X                         X
R²                      0.11        0.15          0.13        0.13
N (headline)            10194                     14720
N (respondent)          2549        2566          3680        3717
*p<.05, ** p<.01, *** p<.005 (two-sided). Cell entries are OLS coeﬃcients. DVs are based on self-reported
intention to “like” or “share” each of the articles in the headline task on Facebook (four-point scales; 1 = not at all likely,
4 = very likely). These questions are asked only of respondents who report using Facebook. The ﬁrst model’s outcome
variable is engagement intent for false headlines only. The second model’s outcome variable is the diﬀerence in the
mean engagement intent for mainstream and false headlines. The overconﬁdence measure subtracts the respondent’s
actual percentile from their self-rated percentile, and is re-scaled to range from -1 to 1. Controls: Democrat, Republican,
college education, gender, nonwhite racial background, and age.
We use this speciﬁcation because we have no theoretical expectations about the independent
role of these predictors — our theory is about the mismatch between actual and perceived ability
(47). However, it is still worthwhile to consider whether one of these factors (perceived or actual
ability) is responsible for our results. In particular, it is important to try to isolate the eﬀects of
perceived ability given prior ﬁndings showing that false news belief and exposure are related to
individual-level diﬀerences in analytical thinking skills and reasoning ability (13; 14; 16; 17).
We therefore consider two alternative model specifications below that seek to estimate the direct
association between perceived ability and our outcome measures independent of its relationship to
actual ability. First, we attempt to “residualize” perceived ability, an approach that has become
standard in personality and social psychology (48). Second, we disaggregate the two components
and include them independently in a regression. (These approaches are mathematically quite simi-
lar, but we include them both for the sake of completeness).4 To account for the fact that these three
approaches (the difference score, residualization, and disaggregation) each have unique weaknesses, recent work has suggested all three be employed (48).
Residualizing perceived ability
We begin by following the strategy outlined in Anderson et al. (49), which uses a residualized
measure of perceived ability. Specifically, we first fit the regression

Perceived ability_i = α_0 + α_1 (Actual ability_i) + δ_i,   (2)

where δ_i is the residual error term. Assuming this model is correct, we can then use the estimated
residual error δ̂_i as a measure of perceived ability that is unrelated to actual ability. We then fit a
model such as

y_i = β_0 + β_1 δ̂_i + ε_i,   (3)

where β_1 is intended to represent the independent relationship between (residualized) perceived
ability and the outcome.

4Indeed, the results would be identical if we used the full set of covariates from the disaggregated regression in our
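The two-step residualization can be sketched as follows, again on simulated data with illustrative parameter values:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 5000
actual = rng.uniform(-1, 1, n)
perceived = 0.2 * actual + rng.normal(0, 0.3, n)    # weakly related, as in the data
y = 0.5 + 0.4 * (perceived - actual) + rng.normal(0, 0.5, n)

# Step 1: regress perceived ability on actual ability and keep the residual,
# the part of self-assessment unexplained by true skill.
X1 = np.column_stack([np.ones(n), actual])
alpha, *_ = np.linalg.lstsq(X1, perceived, rcond=None)
residual = perceived - X1 @ alpha

# Step 2: regress the outcome on the residualized perceived-ability measure.
X2 = np.column_stack([np.ones(n), residual])
beta, *_ = np.linalg.lstsq(X2, y, rcond=None)
beta1 = beta[1]   # association of residualized perceived ability with y
```

By construction, `residual` is orthogonal to `actual`, so `beta1` captures only the component of self-assessment unrelated to true skill.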
With this residualization approach, we start with news exposure in SI Appendix Table F2. We
find a positive correlation between residualized perceived ability and exposure, but it is only statisti-
cally significant for the pooled model (Oct./Nov.: β = 0.07, p > .05; Nov./Dec.: β = 0.02, p > .05;
Pooled: β = 0.06, p < .05). This result differs from our primary analysis only in that the coefficient
for the Oct./Nov. survey is not statistically significant, though as in the primary analysis, it is sim-
ilar to the pooled coeﬃcient. Turning to topical misperceptions, SI Appendix Table F3 shows that
there is a significant interaction between residualized perceived ability and congeniality (β = 0.59,
p < .05), indicating that overconfident individuals are more likely to believe in false statements that
are consistent with their prior beliefs. This result is more favorable for our theory than the one re-
ported in the primary analysis. However, unlike the primary results, residualized perceived ability
is not signiﬁcantly associated with decreased discernment between true and false claims. Finally,
SI Appendix Table F4 shows that residualized perceived ability is positively associated with liking
or sharing false stories (Oct./Nov.: β = 0.33, p < .01; Nov./Dec.: β = 0.47, p < .005). These re-
lationships are strongest for congenial stories (Oct./Nov.: β = 0.41, p < .005; Nov./Dec.: β = 0.47,
p < .005). However, there is again no significant association with discernment between mainstream
and false news in either wave (Oct./Nov.: β = −0.10, p > .05; Nov./Dec.: β = 0.03, p > .05).
Disaggregating perceived and actual ability

Our second approach is to include perceived and actual ability as two separate independent variables
in our model. Though our theory focuses on overconﬁdence, one might expect the coeﬃcient for
perceived ability to be positive and the coeﬃcient for actual ability to be negative for the outcome
measures we consider. To illustrate this idea, we simulate data according to formulas that assume a
data-generating process in which overconﬁdence is linearly associated with some outcome measure
per our theory (see SI Appendix Table F5). These results, which show perceived ability is positively
associated with the outcome and actual ability is negatively related to the outcome, suggest that the
disaggregation approach will provide the correct conclusion.
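A simplified version of such a simulation (parameter values are illustrative, not those used in SI Appendix Table F5):

```python
import numpy as np

# Simulate a DGP in which only overconfidence (perceived - actual) matters,
# then fit the disaggregated model with both components entered separately.
rng = np.random.default_rng(5)
n = 10000
actual = rng.normal(0, 1, n)
perceived = 0.3 * actual + rng.normal(0, 1, n)
y = 1.0 + 0.5 * (perceived - actual) + rng.normal(0, 1, n)

X = np.column_stack([np.ones(n), perceived, actual])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
b_perceived, b_actual = coef[1], coef[2]
# Under this DGP the disaggregated fit recovers equal and opposite
# coefficients: about +0.5 on perceived ability and -0.5 on actual ability.
```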
However, it is important to emphasize a few important limitations before presenting our dis-
aggregated results. First, this approach assumes that the component measures do not aﬀect one
another despite the fact that self-perception and performance likely do so (5; 48). Second, this
strategy is more diﬃcult to interpret because both coeﬃcients relate to the theory of interest. In-
creased perceived ability (controlling for actual ability) is an indicator for overconﬁdence but so
too is decreasing actual ability (controlling for perceived ability). Interpreting either coeﬃcient in
isolation with respect to our theory is therefore diﬃcult, especially in more complex models (i.e.,
those that include interaction terms; see Parker and Stone (47) for more extensive discussion of this
point). Third, the simulated results are based on the assumption of constant levels of measurement
error between the perceived and actual ability variables. If measurement error varies between them,
however, it may appear as if only one of the two variables is important despite the fact that both
are equally weighted in the true data generating process. To illustrate this point, we conduct a ver-
sion of the simulation described above but now add additional measurement error to the observed
perceived ability variable included in the disaggregated regression.5 We thus assume the same data
generating process where overconﬁdence drives our results but now add diﬀerential measurement
error for perceived ability. The results in SI Appendix Table F6 now show a null result for perceived
ability and a signiﬁcant negative association with actual ability. Researchers who failed to consider
the possibility of diﬀerential measurement error might mistakenly infer that it is only actual ability
that drives these results. This scenario seems empirically plausible. A priori we would not expect
equal rates of measurement error between these components. Speciﬁcally, actual ability is mea-
sured via a series of twelve objective evaluation tasks that are combined into an aggregate score.
By contrast, perceived ability is measured as the average of two self-assessment survey items. Standard
psychometric theory would suggest higher rates of measurement error for the perceived ability measure.
5Speciﬁcally, we add normal noise with standard deviation 2.5.
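This attenuation scenario can also be sketched in code. Again, this is our own illustration rather than the SI code, with assumed coefficients; only the noise standard deviation of 2.5 comes from the footnote above. Adding that noise to the observed perceived-ability variable shrinks its coefficient toward zero while the actual-ability coefficient stays clearly negative, even though the true process weights both equally:

```python
# Illustrative sketch (not the authors' code): differential measurement error on
# perceived ability attenuates its coefficient even though the true process
# depends only on overconfidence = perceived - actual.
import random

random.seed(1)
n = 5000
actual = [random.gauss(0, 1) for _ in range(n)]
perceived = [0.3 * a + random.gauss(0, 1) for a in actual]
outcome = [0.5 * (p - a) + random.gauss(0, 1) for p, a in zip(perceived, actual)]
# Observe perceived ability with extra noise (SD 2.5, per the footnote above).
perceived_obs = [p + random.gauss(0, 2.5) for p in perceived]

def cov(x, y):
    """Sample covariance."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (len(x) - 1)

def ols2(y, x1, x2):
    """Slopes for y ~ x1 + x2 from the two-predictor normal equations."""
    v1, v2, c12 = cov(x1, x1), cov(x2, x2), cov(x1, x2)
    det = v1 * v2 - c12 ** 2
    return ((v2 * cov(x1, y) - c12 * cov(x2, y)) / det,
            (v1 * cov(x2, y) - c12 * cov(x1, y)) / det)

b_perceived, b_actual = ols2(outcome, perceived_obs, actual)
print(b_perceived, b_actual)  # perceived attenuated toward 0; actual still negative
```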
With these caveats, we turn to our disaggregated results below. First we re-examine RQ1, which
predicts that overconﬁdence will be related to diﬀerential rates of exposure to false news websites.
The disaggregated models are shown in SI Appendix Table F7. Consistent with the extrapolation
from our theory described above, the perceived and actual ability coeﬃcients are signed in opposite
directions, but only the actual ability coeﬃcients are signiﬁcant for the Oct./Nov. (β=−0.06,
p<.05) and pooled samples (β=−0.06, p<.05) (perceived ability: Oct./Nov. β=0.06, pooled
β=0.05, both not significant). These results suggest either that actual ability is more important
than perceived ability or that it is measured with less error, per our discussion above. As in the primary
analysis, neither is signiﬁcant for the Nov./Dec. sample. Next, we turn to the topical misperceptions
results (SI Appendix Table F8). In the primary analysis, we ﬁnd no main eﬀects or interactions with
congeniality but do ﬁnd a main eﬀect for the diﬀerence outcome. When we disaggregate, we do
ﬁnd main eﬀects for the actual ability measure in both analyses. The interaction terms, however,
tell a complicated story. There is a positive signiﬁcant interaction between perceived ability and
congeniality (β=0.62,p<.01), indicating that more overconﬁdent individuals are more likely
to believe false claims that are congenial to their prior beliefs. However, there is also a positive
signiﬁcant coeﬃcient for the interaction with actual ability (β=0.26,p<.05), which suggests that
overconﬁdence (decreased actual ability controlling for perceived ability) increases belief in false
stories only when they are not congenial. Finally, we turn to the results for engagement intentions
(SI Appendix Table F9). For the headline-level analyses, the results again mirror the ﬁndings in the
primary analysis with both the perceived ability and actual ability coeﬃcients being signiﬁcant (but
signed in opposite directions). The interactions with headline congeniality are also both signiﬁcant
and correctly signed. For the diﬀerence score analysis, the results are more mixed. The actual
ability coeﬃcient is signiﬁcant and positive for the Oct./Nov. sample (β=0.60,p<.005) and the
Nov./Dec. sample (β=0.61,p<.005). However, the perceived ability coeﬃcient is not signiﬁcant
for the Oct./Nov. sample (β=0.03, p>.05) and significant but incorrectly signed for the Nov./Dec. sample.
In all, these additional tests provide a somewhat mixed picture. While they show that our results
are not all driven purely by the actual ability measure, some of the evidence suggests that
actual ability could be playing a crucial role for some of our results. However, we cannot rule out
the possibility that these diﬀerences are attributable to diﬀerential measurement error. In several
cases, the point estimates for perceived and actual ability are quite similar and the main diﬀerences
in our inferences are the result of our estimates of perceived ability being more imprecise (e.g., SI
Appendix Table F7).
Discussion
We find that respondents tend to think they are better than the average person at news discernment,
and perceived ability is only weakly associated with actual ability, with the worst performers also
being the most overconﬁdent. Importantly, overconﬁdence is associated with a range of normatively
troubling outcomes, including visits to false news websites in online behavior data. The overconﬁ-
dent also express greater willingness to share false headlines and are less able to discern between
true and false statements about contemporaneous news events. Notably, the overconﬁdent are par-
ticularly susceptible to congenial false news. These results suggest that overconﬁdence may be a
crucial factor for explaining how false and low-quality information spreads via social media.6 Many
people are simply unaware of their own vulnerability to misinformation. Targeting these overconﬁ-
dent individuals could be an important step toward reducing misinformation on social media sites,
though how best to do so remains an open question. Other research ﬁnds that the behavioral ef-
fects of high conﬁdence and weak performance include resistance to help, training, and corrections
(5; 26; 50). An incorrect view of one’s ability to detect false news might reduce the inﬂuence of
new information about how to assess media items’ credibility as well as willingness to engage with
digital literacy programs. For this reason, it may be important to better understand the roots of
overconfidence, from demographics (51) to domain involvement (52) to social incentives (49; 53),
and how they apply in the case of perceptions of news discernment.
6SI Appendix C also explores whether and how overconfidence is related to trust in the media. We show that
overconfidence is negatively associated with trust in the mainstream media, but positively associated with trust in
information seen on Facebook.
These results should also be understood in the context of their limitations. Most critically,
our analyses are correlational and thus face concerns about endogeneity. In this context, we have
speciﬁc ex ante reasons to suspect that the relationship between overconﬁdence and our outcome
measures is at least partially endogenous. For instance, habitual exposure to false news might lead
to poorly calibrated estimates of one’s ability to detect it, especially given the tendency for people
to treat incoming information as true and the subsequent eﬀects this can have on feelings of ﬂu-
ency (54). Overconﬁdence and false news engagement could even mutually reinforce one another
over time (55). Future work must determine the extent to which overconﬁdence plays a causal role
in the behaviors with which we show it is associated. One possible direction would be to exper-
imentally manipulate overconﬁdence by informing respondents about their relative performance.
Another approach might be to manipulate individuals’ self-perception by randomly assigning them
a competency score, although such a study would require careful ethical consideration.
Beyond issues of endogeneity, it is important to carefully interpret the associations we detect.
Based on prior work regarding the role of purposeful reasoning (16; 17) and literacy skills (14)
in individual vulnerability to misinformation, we would assume discernment ability itself — from
which our overconﬁdence measure is in part derived — drives engagement with this content. Un-
surprisingly, we ﬁnd that people who are worse at discerning between legitimate and false news in
the context of a survey are worse at doing so in their browsing habits. Further, actual ability is a
stronger predictor than perceived ability, though the eﬀect sizes are similar; as noted, this discrep-
ancy may be a reﬂection of greater measurement error in our measure of perceived ability. However,
our results also show that inﬂated perceptions of ability are independently associated with engaging
with misinformation, suggesting perceived ability net of actual ability may be a further source of
vulnerability (i.e., an additional, metacognitive component). Speciﬁcally, when residualized, per-
ceived ability net of actual ability is associated with dubious news site exposure, misperceptions,
and sharing intent. It is not our goal here to argue that overconﬁdence supersedes ability itself
as the key predictor or cause of vulnerability to misinformation, nor do our ﬁndings support this
interpretation. Indeed, our results lend further support to work that shows ability deﬁcits are a
serious issue in this domain. Further, because excess conﬁdence is associated with less reﬂection
(30; 31), the ways that discernment ability and overconﬁdence inﬂuence engagement with dubious
information may be linked. Ultimately, adjudicating between these accounts would require further
improvements to the measurement of overconﬁdence, which remains a complicated endeavor in all
research contexts (5; 47; 48). We rely on overconﬁdence as measured by the diﬀerence between
actual and self-assessed performance on a news discernment task. Future research should explore
diﬀerent approaches to measuring overconﬁdence in this domain and assess how they relate to who
views, believes, and spreads false news content. In particular, scholars should consider how to
measure perceived ability with more precision and/or seek to directly manipulate these concepts in
isolation to understand their independent eﬀects.
Finally, though we replicate our results in multiple samples, further eﬀorts to demonstrate that
the relationship we observe holds in other contexts would be valuable. First, work should validate
these results with mobile data and with data that allow us to observe actual sharing behavior in
addition to self-reported sharing (though they appear to correspond at least to some extent (56)).
Likewise, our data come from the American context, though based on cross-national ﬁndings re-
garding the pervasive nature of overconﬁdence (57), it is reasonable to believe the outcomes are not
unique to the U.S. and may be even more worrisome elsewhere.
Ultimately, our results provide new evidence of an important potential mechanism by which
people may fall victim to misinformation and disseminate it online using survey and behavioral
data from multiple large national samples. Understanding overconﬁdence may be an important
step toward better understanding the public’s vulnerability to false news and the steps we should
take to address it.
Materials and methods
To answer our research questions, we draw on data from two novel two-wave survey panels con-
ducted by the survey company YouGov during and after the 2018 U.S. midterm elections, allowing
us to replicate our analyses across time and samples:
• A two-wave panel study fielded October 19–26 (wave 1; N = 3,378) and October 30–November 6, 2018 (wave 2; N = 2,948)
• A two-wave panel study fielded November 20–December 27, 2018 (wave 1; N = 4,907) and December 14, 2018–January 3, 2019 (wave 2; N = 4,283)
Respondents were selected by YouGov’s matching and weighting algorithm to approximate the de-
mographic and political attributes of the U.S. population (see SI Appendix A). Participants were
ineligible to take part in more than one study. Both surveys in this research were approved by
the institutional review boards of the University of Exeter, the University of Michigan, Princeton
University, and Washington University in St. Louis. All subjects gave informed consent to par-
ticipate in each survey. The pre-analysis plans are available at https://osf.io/fr4k5 and
Measuring discernment ability: News headline rating task
In each survey, we asked respondents to evaluate the accuracy of a number of headlines on a four-
point scale ranging from “Not at all accurate” (1) to “Very accurate” (4). The articles, all of which
appeared during the 2018 midterms, were published by actual mainstream and false news sources
and were balanced within each group in terms of their partisan congeniality. In total, we selected
four mainstream news articles that were congenial to Democrats and four that were congenial to
Republicans (each split between low- and high-prominence sources), and two pro-Democrat and
two pro-Republican false news articles. We define high-prominence mainstream sources as those
that more than four in ten Americans reported recognizing in recent polling by Pew (60). False
news stories were verified as false by at least one third-party fact-checking organization.8 To the
extent possible, we chose stories that were balanced in their face validity. The complete listing of
all stories tested is provided in SI Appendix A.
7Participants received an orthogonal treatment related to media literacy in both surveys. Due to a programming
error, all respondents received the treatment in the Oct./Nov. survey. The results of this study are reported in (14).
Other orthogonal studies embedded in these surveys are reported in (58) and (59).
The stories were formatted exactly as they appeared in the Facebook news feed at the time the
study was designed. This format replicated the decision environment faced by everyday users, who
frequently assess the accuracy of news stories given only the content that appears in social media
feeds.9 Respondents rated twelve stories provided in randomized order during the second wave of each survey.10
We then calculate their measured ability to discern mainstream from false news. We do this
by taking the diﬀerence in the mean perceived accuracy between true and false news headlines
(i.e., mean perceived mainstream news accuracy - mean perceived false news accuracy). We use
a diﬀerence score, rather than perceived accuracy of false news alone, to account for respondents
who may tend to rate all news as mostly accurate (i.e., are highly credulous) or all news as mostly
inaccurate (i.e., are indiscriminately skeptical). This approach has been frequently used in past
studies (e.g., 14; 16).
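As a toy example of this computation (with hypothetical ratings, not real data), using the survey's structure of eight mainstream and four false headlines on the 1–4 scale:

```python
# Hypothetical ratings on the four-point accuracy scale: 8 mainstream headlines
# and 4 false headlines, mirroring the survey's rating task.
mainstream_ratings = [3, 3, 2, 4, 3, 2, 3, 2]
false_ratings = [2, 1, 2, 1]

def discernment(mainstream, false):
    # Difference score: mean perceived mainstream accuracy minus mean perceived
    # false accuracy, so indiscriminately credulous or skeptical raters score
    # near zero.
    return sum(mainstream) / len(mainstream) - sum(false) / len(false)

print(discernment(mainstream_ratings, false_ratings))  # 2.75 - 1.5 = 1.25
```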
SI Appendix Table B1 shows descriptive statistics for the mean perceived accuracy of main-
stream and false headlines as well as the diﬀerence score in each wave. The results show that, on
average, respondents did ﬁnd mainstream stories to be more credible. For instance, the average
rating for mainstream articles in the Oct./Nov. wave was 2.68 while it was 1.90 for false head-
lines. Although these differences are statistically distinguishable, the difference — our measure
of discernment — is less than one point on the four-point scale (0.78 for Oct./Nov. and 0.62 for
Nov./Dec.). In other words, respondents rated a mainstream headline as less than one point more
accurate on our four-point accuracy scale compared to false news headlines. The ranges of values
we observe for discernment are -1.5 to 2.88 in Oct./Nov. and -1.38 to 2.75 in Nov./Dec. Although
our inferences regarding overconfidence are based on a relatively small number of news headlines
(k = 12), these headlines appear to be comparable to the large set of political headlines in Pennycook
et al. (61) (k = 146). After re-scaling all outcomes to range from 0–1, the average accuracy
rating for our mainstream headlines was .67/.66 across our two surveys and the average rating for
false headlines was .48/.50. These mean values are highly similar to Pennycook et al., who found
an average rating of .63 for mainstream headlines and .49 for false headlines.
8Respondents also rated the accuracy of four hyper-partisan news headlines, which are technically factual but present
slanted facts in a deceptive manner. We do not include these articles in this analysis due to the inherent ambiguity as
to whether they are truthful. These headlines were included as part of a separate study reported in Guess et al. (14).
9Due to Facebook's native formatting, the visual appearance of the false article previews differed somewhat from
those of the mainstream articles — see SI Appendix Figures A1 and A2.
10In each survey's first wave, respondents were randomly assigned to evaluate 1 of the 2 stories that fall into each of
the 6 categories (e.g., pro-Republican false news, pro-Democrat high-prominence mainstream news, etc.) for a total of
six headline evaluations. In the second wave, respondents evaluated all 12 stories using the same approach. We focus
only on the wave 2 measures.
With our discernment measure, we then order respondents and calculate their percentile.
That is, each respondent is scored on a scale ranging from one to one hundred based on their per-
formance where a score of one means that 99% of respondents performed better and a score of 99
means they performed better than 99% of respondents. In the Oct./Nov. survey, the 25th percentile
score for discernment was .38, the 50th was .88, the 75th was 1.25, and the 99th was 2.38. Similarly,
in Nov./Dec., the 25th percentile discernment score was .13, the 50th was .63 the 75th was 1.00,
and the 99th was 2.25.
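The percentile scoring might be sketched as follows. This is our own illustration with made-up scores; the paper does not specify its exact tie-handling, so the convention below (share of respondents scoring strictly lower, floored at 1) is an assumption:

```python
# Hypothetical sketch: convert discernment scores to 1-100 percentile ranks,
# where a higher rank means the respondent outperformed more respondents.
def percentile_scores(scores):
    ranked = sorted(scores)
    n = len(scores)
    out = []
    for s in scores:
        below = sum(1 for r in ranked if r < s)  # respondents scoring worse
        out.append(max(1, round(100 * below / n)))
    return out

print(percentile_scores([0.38, 0.88, 1.25, 2.38, -0.5]))  # [20, 40, 60, 80, 1]
```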
Accuracy of perceptions of relative ability (overconﬁdence)
After the headline rating task, we ask two questions in wave 2 of each survey that directly measure
diﬀerences in perceived ability to detect false news compared to the public.
1. “How do you think you compare to other Americans in your general ability to recognize
news that is made up? Please respond using the scale below, where 1 means you’re at the
very bottom (worse than 99% of people) and 100 means you’re at the very top (better than
99% of people),”
2. “How do you think you compare to other Americans in how well you performed in this study
at recognizing news that is made up? Please respond using the scale below, where 1 means
you’re at the very bottom (worse than 99% of people) and 100 means you’re at the very top
(better than 99% of people).”
For each question, respondents could use a slider to indicate a number between 1 and 100. These
measures (“general ability”/“in this study”) are highly correlated (r = .73 in both Oct./Nov. 2018
and Nov./Dec. 2018 surveys), so we take their average as our measure of perceived relative ability.
On the resulting scale, the mean self-assessed relative ability was in the 69th percentile for both
surveys (Oct./Nov. M = 69.46, SD = 18.59; Nov./Dec. M = 69.43, SD = 17.8). In both surveys,
fewer than 12% of respondents placed themselves below the 50th percentile. The full distributions
are shown in SI Appendix Figure B1.
We then combine these variables to compute the overconﬁdence measure as the diﬀerence be-
tween people’s self-reported ability and their actual performance. The result is a scale that can range
from -100 to 100. We show the distribution of overconfidence in SI Appendix Figure B2. (SI Appendix
Figure B3 shows the fairly weak relationship between self-rating and actual ability underlying the
overconﬁdence measure.) In both surveys, 73% of respondents were at least somewhat overcon-
ﬁdent, with an average overconﬁdence score of 21.76 in Oct./Nov. and 21.7 in Nov./Dec. (SD =
30.77 to 30.43), meaning the average respondent placed themselves about 22 percentiles higher
than their actual score warranted. About 20% of respondents in each survey rated themselves 50 or
more percentiles higher than their discernment score warranted.
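Put concretely, the score is a simple difference; the respondent below is hypothetical, with values chosen to resemble the sample averages reported above:

```python
# Overconfidence = perceived relative ability (mean of the two 1-100
# self-assessments) minus the respondent's actual discernment percentile.
def overconfidence(general_rating, study_rating, actual_percentile):
    perceived = (general_rating + study_rating) / 2
    return perceived - actual_percentile

# A hypothetical respondent who self-rates 70 ("general") and 60 ("in this
# study") but actually sits at the 43rd percentile is overconfident by 22
# points, similar to the average respondent in these surveys.
print(overconfidence(70, 60, 43))  # 22.0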
One potential concern is that our measure of overconfidence may be driven by differences in
people's ability to recognize one type of story rather than how well they can differentiate between
them per se. SI Appendix Figure B4 therefore disaggregates these components. The ﬁgure shows
that overconﬁdent respondents perceived mainstream news as less accurate than their counterparts,
and, to an even greater extent, perceived false news as more accurate than their counterparts.
Outcomes and behaviors of interest
To answer RQ2–RQ4, we also create measures of visits to false news websites, topical mispercep-
tions, and self-reported engagement intentions (sharing/liking). We describe our measures for each below.
News exposure data News exposure is measured using behavioral data on respondents’ web visits
collected unobtrusively with their informed consent. Data is available from users’ laptop or desktop
computers. Web visits are collected anonymously with users’ permission through a mix of browser
plug-ins, proxies, and VPNs. The provider of this passive metering data is the ﬁrm Reality Mine,
whose technology underlies the YouGov Pulse panel from which survey respondents were sampled.
Our measures of news exposure come from a period immediately following the survey. The lists
we used to code each type of media are below:
•Mainstream news visit: One of AOL, ABC News, CBSNews.com, CNN.com, FiveThir-
tyEight, FoxNews.com, Huﬃngton Post, MSN.com, NBCNews.com, NYTimes.com, Politico,
RealClearPolitics, Talking Points Memo, The Weekly Standard, WashingtonPost.com, WSJ.com,
•False news visit: Any visit to one of the 673 domains identiﬁed (62) as a false news producer
as of September 2018 excluding those with print versions (including but not limited to Ex-
press, the British tabloid) and also domains that were previously classiﬁed (63) as a source
of hard news. In addition, we exclude sites that predominantly feature user-generated content
(e.g., online bulletin boards) and political interest groups.
Duplicate visits to webpages were not counted if they were successive (i.e., a page that was
reloaded after ﬁrst opening it). URLs were cleaned of referrer information and other parameters
before de-duplication. (For more details, see the processing steps described in Guess (13).)
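A minimal sketch of these cleaning steps follows. This is our own illustration with hypothetical URLs; the actual pipeline described in Guess (13) is more involved:

```python
# Strip query parameters (e.g., referrer tags) from each URL, then drop
# successive duplicates, which correspond to page reloads; non-successive
# repeat visits are still counted.
from urllib.parse import urlsplit

def clean_visits(urls):
    stripped = [urlsplit(u)._replace(query="", fragment="").geturl() for u in urls]
    deduped = []
    for u in stripped:
        if not deduped or deduped[-1] != u:
            deduped.append(u)
    return deduped

visits = [
    "https://example.com/story?utm_source=fb",  # hypothetical URLs
    "https://example.com/story",                # reload: successive duplicate
    "https://example.com/other",
    "https://example.com/story",                # revisit: counted again
]
print(clean_visits(visits))
```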
We ﬁrst created a binary measure of whether respondents made one or more visits to false news
sites.11 Our binary measure of false news exposure is coded as 1 if the respondent visited any of
the domains in our list (Oct./Nov.: 7%, Nov./Dec.: 6%) and 0 otherwise. We also created a binary
measure of mainstream news exposure that is coded as 1 if the respondent visited any such domain
in our list (Oct./Nov.: 60%, Nov./Dec.: 52%) and 0 otherwise. We use the latter measure to account
for the possibility that overconfident individuals may simply be more likely to be exposed to news in general.
In addition to false and mainstream news exposure, we also measure the overall ideological
slant of respondents’ total information diet, which we divide into deciles from most liberal (decile
1) to most conservative (decile 10) using the method presented by Guess (64). We use this measure
in our analysis of news exposure to control for the general ideological orientation of respondents' information diets.
Importantly, not all respondents who were part of our survey chose to provide behavioral data.
Thus, our sample sizes using this data decrease, especially in the Nov./Dec. wave in which only
22% of respondents also provided online traﬃc data (vs. 63% in Oct./Nov.). The decline between
surveys reﬂects the lack of available respondents who (a) participated in YouGov Pulse panel and
(b) did not participate in our earlier waves of data collection. The result is that analyses using news
exposure data have less power (we also consider pooled analyses across surveys for this reason).
Topical misperceptions, engagement, and congeniality In the Oct./Nov. survey, we included a
battery of questions asking respondents about their beliefs in speciﬁc claims related to the conﬁrma-
tion hearings for Justice Brett Kavanaugh, which occurred shortly before the survey was ﬁelded.12
Respondents were shown two true and two false statements that they rated on a four-point accuracy
scale ranging from “Not at all accurate” to “Very accurate.”
These statements were balanced in terms of partisan orientation so one true and one false statement
was congenial to Democrats and one true and one false statement was congenial to Republicans.
Both the true and false statements were highly visible on social media during the hearings.13
11The distribution was highly skewed; 93–94% of respondents visited zero false news sites and the distribution
among non-zero respondents had a long right tail (Oct./Nov. M = .43, SD = 3.24, min = 0, max = 75; Nov./Dec. M = .21,
SD = 1.37, min = 0, max = 25).
12The confirmation hearings where Kavanaugh and Christine Blasey Ford testified took place in late September
2018. The final Senate vote took place on October 6.
We measured potential engagement (liking/sharing) with false news stories during the headline rat-
ing task. For each headline, respondents were asked to self-report their intention to “like” or “share”
each article (1 = not at all likely, 4 = very likely). This question was asked only of respondents who
report using Facebook.14 It should be noted that perceived accuracy questions appeared immedi-
ately before our sharing intent questions in the survey, which may prime accuracy concerns among
respondents and thereby alter self-reported sharing behavior (65).
For both misperceptions and engagement, we analyze the data in two ways. First, we create a
diﬀerence score. For the topical misperceptions, for instance, we subtracted the perceived accuracy
of false statements from the perceived accuracy of true statements to create a measure of discern-
ment. We calculated mean responses of intentions to like/share mainstream stories, false stories,
and the diﬀerence using an identical procedure. Descriptive statistics for these measures are shown
in SI Appendix Table B1.
Finally, we also examine our results at the headline or statement level using only false state-
ments/headlines so that we can test whether the relationship between overconﬁdence and beliefs or
behavior varies by partisan congeniality. Congeniality is coded at the headline or statement level for
partisans to indicate that a story or statement is consistent with the respondents’ partisan leanings
(e.g., a Democrat evaluating a story that is favorable to a Democrat). To determine the partisanship
of respondents in the U.S. survey, we used the standard two-question party identiﬁcation battery
(which includes leaners) to classify respondents as Democrats or Republicans.
Additional covariates Our statistical models include a series of standard covariates including
dichotomous indicators of Democrat and Republican party affiliation (including leaners), college
education, gender, nonwhite racial background, and dichotomous indicators of membership in age
groups (30–44, 45–59, and 60+; 18–29 is the omitted category). Complete descriptions of all survey
items and measures are included in SI Appendix A.
13This battery was included in both waves of the Oct./Nov. survey. We focus only on the wave 2 results as the first
wave preceded our collection of the overconfidence measure, but SI Appendix D shows that our results replicate fully
when using the wave 1 topical misperception battery.
14We only observe self-reported behavioral intentions. However, self-reported sharing intention for political news
articles has been shown to correlate with aggregate observed sharing behavior on Twitter at r = .44 (56).
Our October/November 2018 respondents are 57% female, 80% white, median age 55, 37%
hold a four-year college degree or higher, 49% identify as Democrats (including leaners), and 34%
identify as Republicans (including leaners). Our November/December 2018 respondents are 55%
female, 68% white, median age 50, 32% hold a four-year college degree or higher, 46% identify as
Democrats (including leaners), and 36% identify as Republicans (including leaners).
Data ﬁles and scripts necessary to replicate the results in this article will be made available at the
following Open Science Framework repository: https://osf.io/xygwt/.
References
Barthel M, Mitchell A, Holcomb J (2016) Many Americans believe fake news is sowing confusion.
Pew Research Center 15:12.
Davison WP (1983) The third-person eﬀect in communication. Public opinion quarterly 47(1):1–
Sun Y, Shen L, Pan Z (2008) On the behavioral component of the third-person eﬀect. Communi-
cation Research 35(2):257–278.
Kruger J, Dunning D (1999) Unskilled and unaware of it: how diﬃculties in recognizing one’s
own incompetence lead to inﬂated self-assessments. Journal of personality and social psychology
Dunning D (2011) The Dunning–Kruger effect: On being ignorant of one's own ignorance. Ad-
vances in experimental social psychology 44:247–296.
Ehrlinger J, Johnson K, Banner M, Dunning D, Kruger J (2008) Why the unskilled are unaware:
Further explorations of (absent) self-insight among the incompetent. Organizational behavior and
human decision processes 105(1):98–121.
Ferraro PJ (2010) Know thyself: Competence and self-awareness. Atlantic Economic Journal
Ortoleva P, Snowberg E (2015) Overconﬁdence in political behavior. American Economic Review
Sheﬀer L, Loewen P (2019) Electoral conﬁdence, overconﬁdence, and risky behavior: Evidence
from a study with elected politicians. Political Behavior 41(1):31–51.
Kovacs RJ, Lagarde M, Cairns J (2020) Overconﬁdent health workers provide lower quality health-
care. Journal of Economic Psychology 76:102213.
Vegetti F, Mancosu M (2020) The impact of political sophistication and motivated reasoning on
misinformation. Political Communication.
Grinberg N, Joseph K, Friedland L, Swire-Thompson B, Lazer D (2019) Fake news on Twitter
during the 2016 US presidential election. Science 363(6425):374–378.
Guess AM, Nyhan B, Reifler J (2020) Exposure to untrustworthy websites in the 2016 US election.
Nature human behaviour 4(5):472–480.
Guess AM, et al. (2020) A digital media literacy intervention increases discernment between main-
stream and false news in the United States and India. Proceedings of the National Academy of Sciences.
Roozenbeek J, van der Linden S (2019) Fake news game confers psychological resistance against
online misinformation. Palgrave Communications 5(1):1–10.
Pennycook G, Rand DG (2018) Lazy, not biased: Susceptibility to partisan fake news is better
explained by lack of reasoning than by motivated reasoning. Cognition.
Bago B, Rand DG, Pennycook G (2020) Fake news, fast and slow: Deliberation reduces belief in
false (but not true) news headlines. Journal of experimental psychology: general.
Pennycook G, Rand DG (2018) Who falls for fake news? the roles of bullshit receptivity, over-
claiming, familiarity, and analytic thinking. Journal of personality.
Martel C, Pennycook G, Rand DG (2020) Reliance on emotion promotes belief in fake news.
Cognitive research: principles and implications 5(1):1–20.
Ross L, Greene D, House P (1977) The ’false consensus eﬀect’: An egocentric bias in social
perception and attribution processes. Journal of experimental social psychology 13(3):279–301.
Anson IG (2018) Partisanship, political knowledge, and the Dunning–Kruger effect. Political Psychology.
Motta M, Callaghan T, Sylvester S (2018) Knowing less but presuming more: Dunning–Kruger
effects and the endorsement of anti-vaccine policy attitudes. Social Science & Medicine 211:274–
Duflo E, et al. (2020) In praise of moderation: Suggestions for the scope and use of pre-analysis
plans for RCTs in economics, (National Bureau of Economic Research), Technical report.
Pazicni S, Bauer CF (2014) Characterizing illusions of competence in introductory chemistry
students. Chemistry Education Research and Practice 15(1):24–34.
Pasek J, Sood G, Krosnick JA (2015) Misinformed about the Affordable Care Act? leveraging
certainty to assess the prevalence of misperceptions. Journal of Communication 65(4):660–673.
Li J, Wagner MW (2020) The value of not knowing: Partisan cue-taking and belief updating of
the uninformed, the ambiguous, and the misinformed. Journal of Communication.
Graham MH (2020) Self-awareness of political knowledge. Political Behavior 42(1):305–326.
Douglas KM, Sutton RM (2004) Right about others, wrong about ourselves? Actual and perceived
self-other diﬀerences in resistance to persuasion. British Journal of Social Psychology 43(4):585–603.
Hansen EM, Yakimova K, Wallin M, Thomsen L (2010) Can thinking you’re skeptical make you
more gullible? The illusion of invulnerability and resistance to manipulation in The Individual and
the Group: Future Challenges, eds. Jacobsson C, Ricciardi MR. (Proceedings from the 7th GRASP
conference, University of Gothenburg, May 2010).
Thompson VA, Turner JAP, Pennycook G (2011) Intuition, reason, and metacognition. Cognitive
Psychology.
Pennycook G, Ross RM, Koehler DJ, Fugelsang JA (2017) Dunning–Kruger eﬀects in reasoning:
Theoretical implications of the failure to recognize incompetence. Psychonomic Bulletin & Review.
Salovich NA, Rapp DN (2020) Misinformed and unaware? Metacognition and the inﬂuence of
inaccurate information. Journal of Experimental Psychology: Learning, Memory, and Cognition.
Flynn D, Nyhan B, Reiﬂer J (2017) The nature and origins of misperceptions: Understanding false
and unsupported beliefs about politics. Political Psychology 38:127–150.
Burson KA, Larrick RP, Klayman J (2006) Skilled or unskilled, but still unaware of it: How
perceptions of diﬃculty drive miscalibration in relative comparisons. Journal of Personality and
Social Psychology 90(1):60.
Krueger J, Mueller RA (2002) Unskilled, unaware, or both? The better-than-average heuristic and
statistical regression predict errors in estimates of own performance. Journal of Personality and
Social Psychology 82(2):180.
Kruger J, Dunning D (2002) Unskilled and unaware–but why? A reply to Krueger and Mueller
(2002). Journal of Personality and Social Psychology 82(2):189–192.
Feld J, Sauermann J, De Grip A (2017) Estimating the relationship between skill and
overconﬁdence. Journal of Behavioral and Experimental Economics 68:18–24.
Schlösser T, Dunning D, Johnson KL, Kruger J (2013) How unaware are the unskilled? Empirical
tests of the “signal extraction” counterexplanation for the Dunning–Kruger eﬀect in self-evaluation
of performance. Journal of Economic Psychology 39:85–100.
Gignac GE, Zajenkowski M (2020) The Dunning-Kruger eﬀect is (mostly) a statistical artefact:
Valid approaches to testing the hypothesis with individual diﬀerences data. Intelligence.
Miller JE, Windschitl PD, Treat TA, Scherer AM (2019) Unhealthy and unaware? Misjudging social
comparative standing for health-relevant behavior. Journal of Experimental Social Psychology.
McIntosh RD, Fowler EA, Lyu T, Della Sala S (2019) Wise up: Clarifying the role of metacognition
in the Dunning-Kruger eﬀect. Journal of Experimental Psychology: General.
Niederle M, Vesterlund L (2007) Do women shy away from competition? Do men compete too
much? The Quarterly Journal of Economics 122(3):1067–1101.
Prims JP, Moore DA (2017) Overconﬁdence over the lifespan. Judgment and Decision Making.
Ortoleva P, Snowberg E (2015) Are conservatives overconﬁdent? European Journal of Political
Economy.
Bregu K (2020) Overconﬁdence and (over) trading: The eﬀect of feedback on trading behavior.
Journal of Behavioral and Experimental Economics 88:101598.
Lambert J, Bessière V, N’Goala G (2012) Does expertise inﬂuence the impact of overconﬁdence
on judgment, valuation and investment decision? Journal of Economic Psychology 33(6):1115–1128.
Parker AM, Stone ER (2014) Identifying the eﬀects of unjustiﬁed conﬁdence versus
overconﬁdence: Lessons learned from two analytic methods. Journal of Behavioral Decision Making.
Belmi P, Neale MA, Reiﬀ D, Ulfe R (2020) The social advantage of miscalibrated individuals:
The relationship between social class and overconﬁdence and its implications for class-based
inequality. Journal of Personality and Social Psychology 118(2):254.
Anderson C, Brion S, Moore DA, Kennedy JA (2012) A status-enhancement account of
overconﬁdence. Journal of Personality and Social Psychology 103(4):718.
Sheldon OJ, Dunning D, Ames DR (2014) Emotionally unskilled, unaware, and uninterested in
learning more: Reactions to feedback about deﬁcits in emotional intelligence. Journal of Applied
Psychology.
Mondak JJ, Anderson MR (2004) The knowledge gap: A reexamination of gender-based diﬀer-
ences in political knowledge. The Journal of Politics 66(2):492–512.
Perloﬀ RM (1989) Ego-involvement and the third person eﬀect of televised news coverage.
Communication Research 16(2):236–262.
Cheng JT, et al. (2020) The social transmission of overconﬁdence. Journal of Experimental
Psychology: General.
Brashier NM, Marsh EJ (2020) Judging truth. Annual Review of Psychology 71.
Slater MD (2007) Reinforcing spirals: The mutual inﬂuence of media selectivity and media eﬀects
and their impact on individual behavior and social identity. Communication Theory 17(3):281–303.
Mosleh M, Pennycook G, Rand DG (2020) Self-reported willingness to share political news
articles in online surveys correlates with actual sharing on Twitter. PLoS ONE 15(2):e0228882.
Stankov L, Lee J (2014) Overconﬁdence across world regions. Journal of Cross-Cultural
Psychology.
Guess AM, et al. (2020) “Fake news” may have limited eﬀects beyond increasing beliefs in false
claims. Harvard Kennedy School Misinformation Review 1(1).
Berlinski N, et al. (2021) The eﬀects of unsubstantiated claims of voter fraud on conﬁdence in
elections. Journal of Experimental Political Science.
Mitchell A, Gottfried J, Kiley J, Matsa KE (2014) Political polarization & media
habits. Pew Research Center, October 21, 2014. Downloaded March 21, 2019 from
Pennycook G, Binnendyk J, Newton C, Rand D (2020) A practical guide to doing behavioural re-
search on fake news and misinformation. PsyArXiv [Preprint] (2020). https://psyarxiv.com/g69ha
(accessed 21 January 2021).
Allcott H, Gentzkow M, Yu C (2019) Trends in the diﬀusion of misinformation on social media.
Research & Politics 6(2):2053168019848554.
Bakshy E, Messing S, Adamic LA (2015) Exposure to ideologically diverse news and opinion on
Facebook. Science 348(6239):1130–1132.
Guess AM (Forthcoming) (Almost) everything in moderation: New evidence on Americans’ online
media diets. American Journal of Political Science.
Pennycook G, et al. (2020) Shifting attention to accuracy can reduce misinformation online.
PsyArXiv [Preprint] (2020). https://psyarxiv.com/3n9u8/ (accessed 21 January 2021).