Research Article

Counting the Pinocchios: The effect of summary fact-checking data on perceived accuracy and favorability of politicians

Alexander Agadjanian1, Nikita Bakhru2, Victoria Chi2, Devyn Greenberg2, Byrne Hollander2, Alexander Hurt2, Joseph Kind2, Ray Lu2, Annie Ma2, Brendan Nyhan2, Daniel Pham2, Michael Qian2, Mackinley Tan2, Clara Wang2, Alexander Wasdahl2 and Alexandra Woodruff2

Research and Politics, July-September 2019: 1–10
© The Author(s) 2019
DOI: 10.1177/2053168019870351 (https://doi.org/10.1177/2053168019870351)
journals.sagepub.com/home/rap
Article reuse guidelines: sagepub.com/journals-permissions
Distributed under the Creative Commons Attribution-NonCommercial-NoDerivs 4.0 License (http://www.creativecommons.org/licenses/by-nc-nd/4.0/), which permits non-commercial use, reproduction, and distribution of the work as published without adaptation or alteration, provided the original work is attributed as specified on the SAGE and Open Access pages (https://us.sagepub.com/en-us/nam/open-access-at-sage).

Abstract
Can the media effectively hold politicians accountable for making false claims? Journalistic fact-checking assesses the accuracy of individual public statements by public officials, but less is known about whether this process effectively imposes reputational costs on misinformation-prone politicians who repeatedly make false claims. This study therefore explores the effects of exposure to summaries of fact-check ratings, a new format that presents a more comprehensive assessment of politician statement accuracy over time. Across three survey experiments, we compared the effects of negative individual statement ratings and summary fact-checking data on favorability and perceived statement accuracy of two prominent elected officials. As predicted, summary fact-checking had a greater effect on politician perceptions than individual fact-checking. Notably, we did not observe the expected pattern of motivated reasoning: co-partisans were not consistently more resistant than supporters of the opposition party. Our findings suggest that summary fact-checking is particularly effective at holding politicians accountable for misstatements.

Keywords
Fact-checking, misinformation, accountability, political psychology

1Massachusetts Institute of Technology, Cambridge, MA, USA
2Dartmouth College, Hanover, NH, USA

Corresponding author:
Brendan Nyhan, Dartmouth College, HB 6108, Hanover, NH 03755, USA.
Email: nyhan@dartmouth.edu

Fact-checking websites have changed how media outlets cover politics. Sites like PolitiFact, FactCheck.org, and The Washington Post Fact Checker seek to correct misinformation and hold politicians accountable by providing extensive coverage of the accuracy of claims made by political figures. Nevertheless, inaccurate claims made by politicians continue to mar public debate. How can journalists more effectively hold elites accountable when they spread misinformation?

Extensive research has been conducted on the effects of fact-checking on people's factual beliefs (e.g., Flynn, Nyhan and Reifler, 2017) and the effects of media scrutiny on how the public views politicians (e.g., Snyder and Strömberg, 2010). However, less is known about how and under what conditions journalistic scrutiny might increase the reputational costs to politicians for promoting misinformation (though see Nyhan and Reifler, 2015). Even successful fact-checks that change respondents' beliefs about a false statement have relatively little effect on the image of the offending politician (Nyhan et al., Forthcoming; Swire-Thompson et al., Forthcoming).

One promising alternative approach is summary fact-checking, which seeks to paint a more comprehensive picture of a politician's accuracy by aggregating all existing ratings of statements they have made. For example, when Donald Trump said Hillary Clinton "wants to abolish the Second Amendment," PolitiFact conducted a
traditional (individual) fact-check of this singular statement and rated it False using its "Truth-O-Meter" system
(Qiu, 2016). A summary fact-check, on the other hand,
would describe the distribution of fact-check ratings for
a given politician. For instance, PolitiFact editor Angie
Drobnic Holan wrote a New York Times op-ed in
December 2015 noting that the site had “fact-checked
more than 70 Trump statements and rated fully three-
quarters of them as Mostly False, False or ‘Pants on
Fire’” (Holan, 2015). By drawing on a larger number of
ratings, this form of fact-checking could potentially
provide stronger evidence of inaccuracy and have a
greater influence on how the public perceives politi-
cians than individual fact-checks do.
This study therefore compares the effects of summary
fact-checking data and individual fact-check ratings on
views of politicians who make misleading claims.
Consistent with our preregistered hypotheses, summary
fact-checking data reduced perceptions of politicians’ accu-
racy and favorability more than exposure to a negative indi-
vidual fact-check rating did. These results, which were not
consistently moderated by other factors such as partisan-
ship, political knowledge, or education, demonstrate that
fact-checking, especially when presented in a summary for-
mat, can play an important role in holding politicians
accountable for misleading statements.
Theory
Existing research offers mixed conclusions about the
effects of fact-checks and corrective information. Meta-
analyses conclude that corrections can moderately reduce
misinformation (Chan et al., 2017; Walter and Murphy,
2018). Similarly, recent work shows that individuals
update their perceptions in the direction of corrective
information (Wood and Porter, 2019). Several studies also
argue fact-checks can increase political knowledge and
affect voter behavior (e.g., Fridkin, Kenney and
Wintersieck, 2015; Gottfried et al., 2013), but others find that fact-checks may have limited effects or be counterproductive (e.g., Garrett, Nisbet and Lynch, 2013; Garrett
and Weeks, 2013). Fact-checks may be less effective
when a misperception is salient or invokes strong cues,
such as partisanship or outgroup membership (Flynn
et al., 2017).
However, less is known about the effects of summary
fact-checks, which aggregate fact-check ratings of politi-
cians, and how those effects compare to those of fact-
checks of individual statements by politicians. Though the
summary fact-check format is relatively uncommon, fact-
checkers and other media outlets increasingly provide these
statistics for politicians to help readers differentiate between
candidates who have made a few false statements and those
with long histories of spreading misinformation. For
instance, fact-checkers like Holan and media outlets
frequently compile multiple ratings of a given politician on
the PolitiFact’s Truth-O-Meter or the Pinocchios scale of
The Washington Post Fact Checker.
To date, most studies have focused on how fact-
checks affect belief accuracy. However, summary fact-
checking does not attempt to correct specific false or
misleading claims. We therefore assessed its effects on
perceptions of politicians (a key mechanism of demo-
cratic accountability) rather than factual beliefs. If the
images of politicians suffer as a result of getting repeat-
edly fact-checked, politicians would face a stronger rep-
utational incentive to avoid making false statements
(Nyhan and Reifler, 2015).
Our theoretical expectations were that people would be
less likely to dismiss a falsehood as an isolated incident and
would instead view a politician’s behavior as more prob-
lematic when presented with summary data, which offers
stronger evidence of a pattern of inaccuracy.1 Exposure to
summary fact-checking might promote greater updating of
respondent views toward a candidate compared to a fact-
check of an individual statement through various mecha-
nisms. These include a memory-based “running tally” (e.g.,
Fiorina, 1981) in which candidate inaccuracy is more likely
to be registered as a negative consideration, online process-
ing of negative affect inspired by information about a sus-
tained record of inaccuracy (e.g., Lodge and Taber, 2005),
or a Bayesian process in which more information about
past inaccuracy leads to greater updating of candidate atti-
tudes (Zechman, 1979).2
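The Bayesian mechanism can be illustrated with a toy beta-binomial sketch. The prior parameters and fact-check counts below are purely illustrative (the 70-rating, three-quarters-negative tally loosely echoes the Holan op-ed figures quoted earlier) and are not estimates from this study:

```python
# Beta-binomial sketch of the Bayesian mechanism: hold a Beta(a, b) prior
# over a politician's probability of making an accurate statement and treat
# each fact-check as one Bernoulli observation. All numbers are illustrative.

def posterior_mean(accurate, inaccurate, a=5.0, b=5.0):
    """Posterior mean accuracy after observing fact-check outcomes."""
    return (a + accurate) / (a + b + accurate + inaccurate)

prior = 5.0 / 10.0                 # expected accuracy before any fact-checks
one_false = posterior_mean(0, 1)   # a single negative fact-check
# A summary fact-check: roughly 70 ratings, about three-quarters negative.
summary = posterior_mean(18, 52)

# The aggregated evidence moves the posterior far more than one rating does.
print(prior, one_false, summary)
```

Under this sketch, one negative rating barely moves beliefs, while the summary evidence pulls expected accuracy far below the prior, which is the intuition behind H2 below.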
There are reasons to doubt this hypothesis, however.
First, the way in which information is presented can some-
times matter more than the strength of evidence presented.
For instance, one recent study found that a compelling nar-
rative about a single event was more important than broader
statistical information about a topic in changing public
opinion (Norris and Mullinix, Forthcoming). In addition,
Swire-Thompson et al. (Forthcoming) found that present-
ing numerous fact-checks only affected ratings of target
politicians when false statements outnumbered true ones
and even then generated very small effects. Adjudicating
between the effects of summary and individual fact-checks
thus merits scholarly attention.
We specifically proposed three hypotheses and two
research questions, all of which were preregistered. First,
drawing on the experimental literature supporting the effi-
cacy of fact-checks, we predicted that individuals exposed
to negative fact-checking of a politician in either format
would view that politician less favorably and perceive
them as less accurate (H1). For reasons discussed above,
we also predicted that summary fact-checking data would
have a larger effect on these outcome variables than an
individual fact-check rating would (H2). Finally, as per
theories of motivated reasoning (e.g., Kunda, 1990; Taber
and Lodge, 2006) we predicted that the favorability and
perceived accuracy of politicians would decrease more
when individuals viewed fact-checking of the opposition
party compared to their own (H3).
In addition, we proposed two related research questions,
asking whether participants’ political knowledge or level of
education would moderate the effects of fact-checking
(RQ1) and whether exposure to a summary or individual
fact-check would affect attitudes toward fact-checking
(RQ2). Existing evidence is limited on both points. Fact-
checking may be more effective among the politically
knowledgeable (Fridkin et al., 2015), but people who are
more sophisticated may also be more skilled at resisting
corrective information (Taber and Lodge, 2006). No pub-
lished studies examine the effects of fact-checking on atti-
tudes toward the practice.
The following sections discuss three survey experiments
that test these hypotheses and research questions. We
describe Study 1 in detail and more briefly review studies 2
and 3, which are slight variants of Study 1 that address limi-
tations in the design of prior studies.
Study 1
Methods
Prior to conducting the study, we preregistered the design,
hypotheses, and analysis plan in the EGAP archive, which is
an online platform where researchers preregister study
designs to promote scientific accountability.3 The sample
consisted of 2825 participants recruited via Amazon
Mechanical Turk, an online marketplace frequently used to
recruit research participants (e.g., Berinsky, Huber and Lenz,
2012).4 Data collection took place from May 7–10, 2016.
Participants were required to be US residents aged 18 or
older with at least a 95% HIT (“Human Intelligence Task”)
approval rating on Mechanical Turk. Demographically, our
sample mirrors other Mechanical Turk studies in being
younger and more liberal, educated, and white than the gen-
eral US population. Specific demographic distributions can
be found in Online Appendix C.
Experimental design
Our study used a 3 × 2 between-subjects design that
randomly varied fact-check type and politician parti-
sanship. Respondents were randomly assigned to one of
three treatments: an individual fact-check rating of a
fictitious statement about job creation, a summary fact-
check rating, or the control condition. Fact-checks used
in all three studies were negative, indicating that the
statements in question were not accurate. Participants
were also randomly assigned to a target politician:
Mitch McConnell (R-KY), the Senate majority leader
at the time, or Harry Reid (D-NV), the Senate minority
leader at the time. McConnell and Reid were chosen
because they belong to different parties but are compa-
rable figures.
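The resulting 3 × 2 assignment can be sketched as a simulation; the condition labels and seed are illustrative, not the study's actual randomization code:

```python
import numpy as np

rng = np.random.default_rng(2016)  # arbitrary seed for reproducibility

# 3 x 2 between-subjects design: fact-check type crossed with target
# politician, yielding six cells.
fact_check_types = ["control", "individual", "summary"]
politicians = ["McConnell", "Reid"]
cells = [(fc, pol) for fc in fact_check_types for pol in politicians]

n = 2825  # Study 1 sample size
assignments = [cells[i] for i in rng.integers(0, len(cells), size=n)]

# Every respondent lands in exactly one of the six cells.
print(len(cells), len(assignments))
```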
The graphics in our individual fact-check rating treat-
ment were adapted from PolitiFact’s Truth-O-Meter; both
senators were presented as making the same false claim.
Participants in the summary fact-checking data condition
were exposed to a graphic adapted from The New York
Times (Holan, 2015) presenting either McConnell or Reid
as making more false statements than the average senator.
Figure 1 presents the graphics used for respondents in the
treatment conditions.5
Finally, participants in the control group were shown a
graphic displaying predicted weather for Des Moines,
Iowa. We included a caption for each graphic to ensure that
participants understood the information presented and to
match the format and design of the stimuli between condi-
tions as closely as possible (see Online Appendix A).
Procedure
Participants were first required to provide informed consent
and their age. They then answered questions regarding their
demographics, party affiliation, and political knowledge
before the experimental manipulation. Demographic and
knowledge characteristics did not vary significantly
between our three experimental groups (see Table C1 in
Online Appendix C). After a brief task intended to conceal
the study’s purpose, participants answered questions meas-
uring three outcome variables: favorability toward
McConnell or Reid, perceived accuracy of that senator, and
favorability toward fact-checking (see Online Appendix A
for full survey text).6 Participants were then debriefed and
compensated for their time.
Measures
Our study measured two primary outcome variables on
five-point scales: how often statements made by the sena-
tor are accurate from “never” (1) to “all of the time” (5)
and how favorable or unfavorable their views of the sena-
tor are from “very unfavorable” (1) to “very favorable”
(5). For our second research question, we asked four
questions about participants’ perceptions of fact-check-
ing (see Online Appendix A for details). We also consid-
ered several pre-treatment moderators. For H3, we
classified participants’ partisanship according to which
political party they identify with or lean toward. To
explore RQ1, we measured the education level and politi-
cal knowledge of participants. We classified those with a
bachelor’s degree or above as having a high level of edu-
cation and those who correctly answer at least four of five
questions on a standard political knowledge battery as
high knowledge in a median split.
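The moderator coding described above might be sketched as follows; the column names, the 7-point party scale, and the exact folding of leaners are illustrative assumptions rather than the study's instrument:

```python
import pandas as pd

df = pd.DataFrame({
    # Hypothetical 7-point party ID (1 = strong Democrat, 7 = strong Republican)
    "pid7": [1, 3, 4, 5, 7, 2],
    "education": ["high school", "bachelor's", "graduate", "some college",
                  "bachelor's", "high school"],
    "knowledge_correct": [5, 4, 2, 3, 5, 1],  # of 5 battery items
})

# Partisanship: leaners count with the party they lean toward.
df["party"] = pd.cut(df["pid7"], bins=[0, 3, 4, 7],
                     labels=["democrat", "independent", "republican"])

# High education: bachelor's degree or above.
df["high_educ"] = df["education"].isin(["bachelor's", "graduate"])

# High knowledge: at least four of five knowledge items correct (median split).
df["high_know"] = df["knowledge_correct"] >= 4
print(df[["party", "high_educ", "high_know"]])
```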
Results
We analyzed the effects of our experiment using ordinary
least squares (OLS) regression with robust standard errors.7
Main effects of fact-check type
Consistent with our first preregistered hypothesis (H1),
exposure to either negative summary fact-checking data
or a negative individual fact-check rating led to signifi-
cantly lower accuracy and favorability ratings than we
observed in the control condition. These findings hold for
both outcome measures and both target politicians (see
Table 1). Consistent with H1, respondents provided with an individual fact-check rating (β = −0.16, SE = 0.05) or summary fact-checking data (β = −0.33, SE = 0.05) about McConnell rated the accuracy of his statements lower than those in the control condition. Results were substantively identical for those exposed to an individual fact-check rating (β = −0.22, SE = 0.06) or summary fact-checking data (β = −0.65, SE = 0.05) about Reid.
Results were similar for favorability ratings. When asked about McConnell's favorability, the individual fact-check rating (β = −0.28, SE = 0.06) and summary fact-checking data (β = −0.59, SE = 0.06) groups rated him lower than the control group did; the individual (β = −0.20, SE = 0.06) and summary (β = −0.62, SE = 0.06) groups assigned to Reid did the same.
Figure 1. Treatment graphics.
Respondents were randomly assigned to see one of the four fact-checking treatments portrayed here or to a control condition. See Online Appendix A for question wording and stimulus materials.
Table 1. Effects of fact-check type on politician accuracy and favorability ratings.

                              McConnell                 Reid
                              Accuracy  Favorability    Accuracy  Favorability
Individual fact-check rating  −0.16**   −0.28**         −0.22**   −0.20**
                              (0.05)    (0.06)          (0.06)    (0.06)
Summary fact-checking data    −0.33**   −0.59**         −0.65**   −0.62**
                              (0.05)    (0.06)          (0.05)    (0.06)
Constant (control mean)       2.65**    2.54**          2.99**    2.92**
                              (0.04)    (0.04)          (0.04)    (0.04)
Summary − individual          −0.17**   −0.31**         −0.44**   −0.42**
                              (0.05)    (0.06)          (0.05)    (0.06)
N                             1435      1435            1393      1393

*p < .05, **p < .01 (two-sided). OLS = ordinary least squares models with robust standard errors.
Our second preregistered hypothesis (H2) predicted
that the summary fact-checking data group would rate
politicians lower than those exposed to negative individ-
ual fact-check ratings. The results corresponded directly
with our hypothesis: the summary fact-checking data
group rated McConnell lower on accuracy (β = −0.17, SE = 0.05) and favorability (β = −0.31, SE = 0.06) than did the individual fact-check rating group. Similarly, respondents assigned to unfavorable summary fact-checking data about Reid rated him lower on accuracy (β = −0.44, SE = 0.05) and favorability (β = −0.43, SE = 0.06) than did those who saw a rating of an individual statement by Reid.
Figure 2 illustrates the substantive magnitude of these
effects.
As displayed in Figure 2a, control group participants
perceived the senators’ statements as more accurate
(McConnell: mean = 2.64, SE = 0.04; Reid: mean =
2.98, SE = 0.04) than did the individual fact-checking
group (McConnell: mean = 2.49, SE = 0.04; Reid: mean
= 2.77, SE = 0.04), but the summary fact-checking group
rated them as least accurate of all (McConnell: mean =
2.32, SE = 0.03; Reid: mean = 2.33, SE = 0.03). Figure
2b presents similar results for favorability ratings.
Participants in the control condition viewed the senators
more favorably (McConnell: mean = 2.54, SE = 0.04;
Reid: mean = 2.92, SE = 0.04) than those who viewed
individual fact-checks (McConnell: mean = 2.26, SE =
0.04; Reid: mean = 2.72, SE = 0.05). Those who viewed
summary fact-checking data rated the senators the least
favorably (McConnell: mean = 1.95, SE = 0.04; Reid:
mean = 2.30, SE = 0.04).
Party interactions
To check for directionally motivated reasoning (H3), we
estimated heterogeneous treatment effects by party in
Table 2.8 Contrary to our hypothesis, these models did
not suggest a directionally motivated response to nega-
tive fact-checking information. For instance, we expected
the negative effect of summary fact-checking data on per-
ceptions of Reid’s accuracy to be greater among
Republicans than among independents and Democrats.
Instead, the negative effect was greater among Democrats
than among both independents (β = −0.54, SE = 0.15) and Republicans (β = −0.25, SE = 0.11) (the latter estimate represents the difference in the two summary fact-check interaction coefficients). We therefore do not
discuss H3 further.
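The specification behind these heterogeneous-effects estimates (treatment indicators, party indicators, and their interactions, with all constituent terms included per Brambor, Clark and Golder, 2006) can be sketched as follows; the data are simulated placeholders, so only the model structure, not the coefficients, is meaningful:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 1400

# Placeholder data with the same structure as the study's design.
df = pd.DataFrame({
    "condition": rng.choice(["control", "individual", "summary"], size=n),
    "party": rng.choice(["independent", "democrat", "republican"], size=n),
})
df["accuracy"] = np.clip(2.6 + rng.normal(0, 1, size=n), 1, 5)

# Treatment x party interactions with all constituent terms, as in Table 2;
# independents in the control condition are the reference category.
model = smf.ols(
    "accuracy ~ C(condition, Treatment('control'))"
    " * C(party, Treatment('independent'))",
    data=df,
).fit(cov_type="HC2")

# Intercept + 2 treatment dummies + 2 party dummies + 4 interactions = 9 terms.
print(len(model.params))  # 9
```

The interaction coefficients then capture how treatment effects differ for Democrats and Republicans relative to independents.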
Research questions
Our first research question asked whether political knowl-
edge or education would moderate treatment effects overall
or among partisans. The findings were mixed: both fact-check types reduced McConnell favorability ratings more among people with low knowledge than among those with high knowledge, whereas summary fact-checking data reduced Reid accuracy ratings more among respondents with high knowledge. For our education results, we compared
participants with and without a bachelor’s degree. We
found only one significant difference in fact-check effects
by education. Thus, we cannot conclude that education
affects participants’ responses toward fact-checks of either
type.9 Similarly, fact-checking exposure had no measurable
effect on perceptions of fact-checking for the four outcome
measures we examined: favorability toward fact-checking,
demand for more fact-checking, and the perceived accu-
racy and fairness of fact-checking.10 We obtained similar
results for both research questions in Studies 2 and 3 (see
Online Appendix C for full results from all three studies)
and therefore do not discuss them further.
Discussion
The results of this study confirmed that summary fact-
checking is more effective at influencing the perceived
accuracy and favorability of selected politicians than is
individual fact-checking. These results did not vary by
party or other preregistered moderators. The design we
employed has two principal limitations. First, the summary
fact-check we tested also includes fact-check information
for an average senator; perhaps comparisons to the average
senator drove the summary fact-check’s larger effects rather
than the rating aggregation itself. Second, summary fact-
checks explicitly lay out the politician’s record of accuracy,
while an individual fact-check is centered around a single
statement. Asking about overall accuracy might result in
participants repeating back what they saw in the summary,
rather than acting on an updated belief. We conducted two
additional experiments to address these concerns.11
Study 2
Study 2 replicated Study 1 except for two changes. In
addition to measuring accuracy and favorability ratings of
senators Reid and McConnell, we included a new ques-
tion that tested participants’ perceived accuracy of a new
statement putatively made by the senator about whom
they saw a fact-check (“Nevada/Kentucky has more pri-
vate sector jobs than ever before”). This measure addressed
a potential concern about response bias in Study 1. By
asking respondents to rate a novel statement, we could
better test whether respondents were actively updating
their beliefs rather than merely reporting what they saw in
the fact-check graphics. In addition, we altered the sum-
mary fact-check graphic to remove the comparison to an
average senator to address a potential design confound.
This change clarified whether the aggregate information
provided by the summary fact-check accounted for its stronger effects or whether they were the result of a contrast effect with an average senator.12 Study 2 was conducted November 4–8, 2016 among a sample that was similar demographically to Study 1's sample. It was also preregistered at EGAP.

Figure 2. Perceived accuracy and candidate favorability by fact-check type.
Means by condition (control, individual fact-check rating, or summary fact-checking data); see Online Appendix A for question wording and stimulus materials.
Results
Study 2 results were similar to Study 1 for the two existing
outcome measures (see Table B1). As in Study 1, respondents
perceived the general accuracy of statements made by
McConnell and Reid as lower when exposed to summary
fact-checking versus an individual fact-check rating (−.11, p < .05 for McConnell; −.43, p < .01 for Reid). We again found that favorability toward the target politician was reduced more by summary fact-checking information versus a fact-check of an individual statement, though the results were not statistically significant for McConnell (−.09, p < .20 for McConnell; −.36, p < .01 for Reid). However,
our fact-checking manipulation did not have the anticipated
effect on our new outcome measure, an accuracy rating of a
new statement by the target politician. Participants shown a
fact-check of an individual statement by McConnell (β = −0.42, SE = 0.04) actually rated the new statement about more jobs being created as significantly less accurate than those shown summary fact-checking data (β = −0.32, SE = 0.04; difference = 0.10, p < .05). In addition, the difference in the accuracy rating of the new statement by Reid was null for participants shown an individual fact-check rating (β = −0.33, SE = 0.04) and those shown summary fact-checking data (β = −0.35, SE = 0.04).
Discussion
Study 2 largely replicated the results of Study 1. Summary fact-
checking information typically had more negative effects on
the perceived accuracy of a politician and favorability toward
that figure than an individual fact-check rating did. However,
we found an anomalous result in how respondents evaluated
the accuracy of a new statement attributed to the politician in
question. This difference in accuracy rating may have been the
result of an inadvertent confound between the topic of the indi-
vidual fact-check (a claim about job creation during their ten-
ure as majority leader) and the topic of the novel statement that
participants rated afterward (private sector jobs in the majority
leader’s state). We therefore replicated our findings in Study 3
using a design that removed this confound.
Study 3
Study 3 corrected a confound in the design of Study 2. Due to
concerns about the close conceptual relationship between the
topic for the fact-check of an individual statement (job crea-
tion) and the topic of the new statement whose accuracy
respondents were asked to assess in Study 2 (private sector
jobs), respondents were instead asked in Study 3 to evaluate
the accuracy of the following statement from either McConnell
or Reid: “I haven’t switched my position on the Trans-Pacific
Partnership trade deal.” Study 3 was conducted January 12–
16, 2017, had sample demographics that were similar to those
in the previous studies, and was also preregistered with EGAP.
Results
Study 3 replicated the findings in studies 1 and 2 (see Table
B2 in Online Appendix B). The perceived accuracy of
Table 2. Effects of fact-check type by party.

                              McConnell                 Reid
                              Accuracy  Favorability    Accuracy  Favorability
Individual fact-check rating  −0.40*    −0.52**         0.01      0.01
                              (0.16)    (0.17)          (0.16)    (0.16)
Individual FC × Democrat      0.26      0.25            −0.21     −0.22
                              (0.17)    (0.19)          (0.18)    (0.17)
Individual FC × Republican    0.26      0.29            −0.26     −0.13
                              (0.19)    (0.20)          (0.19)    (0.19)
Democrat (with leaners)       −0.13     −0.36**         0.64**    0.72**
                              (0.12)    (0.13)          (0.13)    (0.12)
Republican (with leaners)     0.30*     0.34*           0.19      −0.15
                              (0.12)    (0.14)          (0.14)    (0.13)
Summary fact-checking data    −0.40**   −0.70**         −0.24     −0.45**
                              (0.13)    (0.17)          (0.14)    (0.15)
Summary FC × Democrat         0.14      0.11            −0.54**   −0.31
                              (0.15)    (0.19)          (0.15)    (0.16)
Summary FC × Republican       −0.04     0.13            −0.29     0.09
                              (0.16)    (0.20)          (0.17)    (0.19)
Constant                      2.65**    2.67**          2.53**    2.51**
                              (0.10)    (0.12)          (0.12)    (0.11)
N                             1435      1435            1393      1393

*p < .05, **p < .01 (two-sided). OLS = ordinary least squares models with robust standard errors. FC = fact-check.
statements made by McConnell and Reid and favorability
toward them were lower when respondents were shown
summary fact-checking data compared to an individual
fact-check rating (p < .01 in each case). Most notably, when the topic confound in Study 2 was removed (by changing the topic of the novel statement), participants shown summary fact-checking data on McConnell (β = −0.37, SE = 0.05) rated the new statement as less accurate than those shown an individual fact-check rating of McConnell (β = −0.17, SE = 0.05; difference = −0.20, p < .01). Those shown summary fact-checking data on Reid (β = −0.46, SE = 0.05) also rated the additional statement as less accurate than participants shown an individual fact-check rating of Reid (β = −0.24, SE = 0.05; difference = −0.22, p < .01).
Discussion
The results of Study 3 helped explain the unexpected find-
ing in Study 2, where participants who saw an individual
fact-check rating viewed a new statement by that politi-
cian as less accurate than those who saw summary fact-
checking information. We hypothesized that this finding
was the result of the topic of the fact-check graphic and
the novel statement being closely related. When this con-
found was removed and we asked respondents to evaluate
a novel statement on an unrelated issue, we found the
expected relationship: participants who were shown sum-
mary fact-checking data rated the novel statement as less
accurate than those who were shown an individual fact-
check rating.
Conclusion
Summary fact-checking data had significantly greater
effects on perceptions of political figures than fact-
check ratings of an individual statement did. Compared
to respondents who saw a negative fact-check rating of a single statement, those who saw unfavorable summary fact-checking data viewed the
politicians in question less favorably and perceived state-
ments they made as less accurate. These effects were also
not consistently moderated by other factors, including
partisan affiliation, political knowledge, or education.
The lack of partisan heterogeneity is particularly impor-
tant given frequent concerns that directionally moti-
vated reasoning undermines fact-checking effectiveness
(e.g., Graves and Glaisyer, 2012).
These results suggest that news organizations should use
summary fact-checking to encourage responsible conduct
by political figures. However, caution is still required. First,
fact-checking individual statements is still the best way to
set the record straight about a specific claim. In addition,
reporters and editors must consider whether aggregated
fact-checks accurately represent a political figure’s overall
record or will leave a distorted impression (Uscinski and
Butler, 2013).
Future research should consider other research ques-
tions and approaches we did not evaluate. First, it would
be valuable to test fact-checks of non-quantitative claims
as well as different stimulus graphics or ratings.
Additional studies could also consider more controver-
sial targets or issues, vary the source of fact-checks, or
test the effects of positive fact-checks. Second, we did
not directly assess factual beliefs about a specific state-
ment. Third, it would be worthwhile to further investigate
the mechanisms for this effect (a difficult question under
any circumstances).
Nonetheless, these results are an important first step
toward understanding the new summary fact-checking for-
mat, which we found had greater effects on perceptions of
politicians than did an individual fact-check rating. By
increasing the reputational risk of making false claims in
this way, it may help to discourage politicians from promot-
ing misinformation in the first place.
Authors’ note
Brendan Nyhan is professor of Government at Dartmouth College,
Hanover, USA. Other co-authors are former undergraduate stu-
dents at Dartmouth. Alexander Agadjanian is currently a research
associate in the MIT Election Lab.
Acknowledgements
We thank the Dartmouth College Office of Undergraduate
Research for generous funding support. Our title comes from The
Washington Post headline on a letter to the editor about fact-
checking (Morris, 2013).
Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with
respect to the research, authorship, and/or publication of this
article.
Funding
The author(s) disclosed receipt of the following financial support
for the research, authorship, and/or publication of this article: The
authors received funding support from the Dartmouth College
Office of Undergraduate Research.
ORCID iD
Alexander Agadjanian https://orcid.org/0000-0002-6756-8472
Supplemental materials
The supplemental files are available at http://journals.sagepub.com/
doi/suppl/10.1177/2053168019870351
The replication files are available at https://dataverse.harvard.edu/
dataset.xhtml?persistentId=doi:10.7910/DVN/6B9C1N
Notes
1. Concerns about selection bias and subjectivity still apply,
however (see e.g., Uscinski and Butler, 2013).
2. Pinpointing the exact mechanism(s) is beyond the scope of
this study but is identified as a key topic for future research
in the Conclusion.
3. Anonymized preregistrations for all three studies are
attached. Deviations from the preregistered study plan are
noted in Footnote 9 below.
4. An additional 719 respondents were excluded because they
participated in a pilot study or did not consent to participate.
5. See Online Appendix A for the full survey instrument and
stimulus materials, including the graphic used in the control
conditions. The treatment graphics were designed to resem-
ble real-world stimuli as closely as possible (see Online
Appendix C for examples).
6. Respondents were also asked about their perceptions of
the other senator to whom they were not assigned, but we
excluded those responses from all analyses because of the
possibility of a contrast effect.
7. OLS allowed us to better communicate effect sizes and
confidence intervals than analysis of variance, and directly
estimate the causal quantities of interest (e.g., the average
treatment effect) with minimal assumptions. However,
equivalent ordered probit models are also provided in Online
Appendix C for all OLS models in the main text. As per our
preregistration, we analyzed the results separately by candi-
date rather than pooling because we rejected the null of no
difference in treatment effects by fact-checking target (i.e.,
politician) for at least one outcome variable in each study.
8. These preregistered models and all others testing for het-
erogeneous treatment effects in the main text and in Online
Appendix C include interactions between our treatment con-
ditions and the potential moderators in question as well as all
constituent terms as per Brambor, Clark and Golder (2006).
9. See Online Appendix C for the results of these simple
exploratory models as well as our preregistered analyses,
which instead interact the treatments with both partisan indi-
cators as well as linear or tercile measures of education or
knowledge.
10. These results did not scale together well in a factor analysis.
As per our preregistration, we therefore analyzed each out-
come measure separately.
11. In each subsequent study, we excluded respondents who had
taken part in earlier studies.
12. See Online Appendix B for the altered graphic. No changes
were made to the individual fact-check rating treatment.
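The interaction specification described in Note 8 (treatment conditions interacted with potential moderators, with all constituent terms included, following Brambor, Clark and Golder, 2006) can be sketched as follows. This is a minimal illustration with simulated data; the variable names (`treated`, `copartisan`, `favorability`) are hypothetical stand-ins, not the study's actual measures.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated data: a binary treatment indicator, a binary moderator,
# and a continuous outcome (all hypothetical).
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "treated": rng.integers(0, 2, n),
    "copartisan": rng.integers(0, 2, n),
})
df["favorability"] = (
    3.0
    - 0.5 * df["treated"]
    + 0.2 * df["copartisan"]
    + 0.3 * df["treated"] * df["copartisan"]
    + rng.normal(0, 1, n)
)

# The formula "treated * copartisan" expands to
# treated + copartisan + treated:copartisan, so both constituent
# terms enter the model alongside the interaction, as the
# Brambor-Clark-Golder guidance requires.
model = smf.ols("favorability ~ treated * copartisan", data=df).fit()
print(model.params)
```

The key point is that the multiplicative term is never entered alone: dropping either constituent term would force its conditional effect to zero by construction and bias the interaction estimate.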
Carnegie Corporation of New York Grant
The open access article processing charge (APC) for this article
was waived due to a grant awarded to Research & Politics from
Carnegie Corporation of New York under its ‘Bridging the Gap’
initiative.
References
Berinsky AJ, Huber GA and Lenz GS (2012) Evaluating online
labor markets for experimental research: Amazon.com’s
Mechanical Turk. Political Analysis 20(3): 351–368.
Brambor T, Clark WR and Golder M (2006) Understanding
interaction models: Improving empirical analyses. Political
Analysis 14(1): 63–82.
Chan M-pS, Jones CR, Hall Jamieson K and Albarracín D (2017)
Debunking: A meta-analysis of the psychological efficacy of
messages countering misinformation. Psychological Science
28(11): 1531–1546.
Fiorina MP (1981) Retrospective Voting in American National
Elections. New Haven, CT: Yale University Press.
Flynn DJ, Nyhan B and Reifler J (2017) The nature and origins of
misperceptions: Understanding false and unsupported beliefs
about politics. Advances in Political Psychology 38(S1): 127–
150.
Fridkin K, Kenney PJ and Wintersieck A (2015) Liar, liar, pants
on fire: How fact-checking influences citizens’ reactions
to negative advertising. Political Communication 32(1):
127–151.
Garrett RK and Weeks BE (2013) The promise and peril of real-time corrections to political misperceptions. In: Proceedings
of the 2013 Conference on Computer Supported Cooperative
Work (CSCW ‘13). New York, NY, USA: ACM. pp. 1047–
1058.
Garrett RK, Nisbet EC and Lynch EK (2013) Undermining the
corrective effects of media-based political fact checking?
The role of contextual cues and naïve theory. Journal of
Communication 63(4): 617–637.
Gottfried JA, Hardy BW, Winneg KM, et al. (2013) Did fact
checking matter in the 2012 presidential campaign?
American Behavioral Scientist 57(11): 1558–1567.
Graves L and Glaisyer T (2012) The fact-checking universe in
spring 2012: An overview. Media Policy Initiative Research
Paper. Washington, D.C.: New America Foundation.
Holan AD (2015) All politicians lie. Some lie more than others.
New York Times, 11 December, 2015. Available at: http://
www.nytimes.com/2015/12/13/opinion/campaign-stops/all-
politicians-lie-some-lie-more-than-others.html (accessed 22
May 2016).
Kunda Z (1990) The case for motivated reasoning. Psychological Bulletin 108(3): 480–498.
Lodge M and Taber CS (2005) The automaticity of affect for
political leaders, groups, and issues: An experimental test
of the hot cognition hypothesis. Political Psychology 26(3):
455–482.
Morris J (2013) Counting the Pinocchios. The Washington Post,
May 24. Available at: https://www.washingtonpost.com/
opinions/counting-the-pinocchios/2013/05/24/82f07d38-
c2f3-11e2-9642-a56177f1cdf7_story.html (accessed 20 May
2016).
Norris RJ and Mullinix KJ (Forthcoming) Framing innocence: An
experimental test of the effects of wrongful convictions on
public opinion. Journal of Experimental Criminology.
Nyhan B, Porter E, Reifler J, et al. (Forthcoming) Taking fact-
checks literally but not seriously? The effects of journalistic
fact-checking on factual beliefs and candidate favorability.
Political Behavior.
Nyhan B and Reifler J (2015) The effect of fact-checking on
elites: A field experiment on U.S. state legislators. American
Journal of Political Science 59(3): 628–640.
Qiu L (2016) Donald Trump falsely claims Hillary Clinton
“wants to abolish the 2nd Amendment.” PolitiFact, May 11.
Available at: https://www.politifact.com/truth-o-meter/state-
ments/2016/may/11/donald-trump/donald-trump-falsely-
claims-hillary-clinton-wants-/ (accessed 8 July 2019).
Snyder JM and Strömberg D (2010) Press coverage and political
accountability. Journal of Political Economy 118(2): 355–408.
Swire-Thompson B, Ecker UKH, Lewandowsky S, et al.
(Forthcoming) They might be a liar but they’re my liar:
Source evaluation and the prevalence of misinformation.
Political Psychology.
Taber CS and Lodge M (2006) Motivated skepticism in the evalu-
ation of political beliefs. American Journal of Political
Science 50(3): 755–769.
Uscinski JE and Butler RW (2013) The epistemology of fact
checking. Critical Review 25(2): 162–180.
Walter N and Murphy ST (2018) How to unring the bell: A
meta-analytic approach to correction of misinformation.
Communication Monographs 85(3): 423–441.
Wood T and Porter E (2019) The elusive backfire effect: Mass
attitudes’ steadfast factual adherence. Political Behavior
41(1): 135–163.
Zechman MJ (1979) Dynamic models of the voter’s decision
calculus: Incorporating retrospective considerations into
rational-choice models of individual voting behavior. Public
Choice 34(3–4): 297–315.