Counting the Pinocchios: The effect of summary fact-checking data on perceived accuracy and favorability of politicians

Alexander Agadjanian (Massachusetts Institute of Technology, Cambridge, MA, USA); Nikita Bakhru, Victoria Chi, Devyn Greenberg, Byrne Hollander, Alexander Hurt, Joseph Kind, Ray Lu, Annie Ma, Brendan Nyhan, Daniel Pham, Michael Qian, Mackinley Tan, Clara Wang, Alexander Wasdahl and Alexandra Woodruff (Dartmouth College, Hanover, NH, USA)

Corresponding author: Brendan Nyhan, Dartmouth College, HB 6108, Hanover, NH 03755, USA. Email: nyhan@dartmouth.edu

Research and Politics, July-September 2019: 1–10. © The Author(s) 2019.
DOI: https://doi.org/10.1177/2053168019870351
journals.sagepub.com/home/rap

Creative Commons CC-BY-NC-ND: This article is distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivs 4.0 License (http://www.creativecommons.org/licenses/by-nc-nd/4.0/) which permits non-commercial use, reproduction and distribution of the work as published without adaptation or alteration, without further permission provided the original work is attributed as specified on the SAGE and Open Access pages (https://us.sagepub.com/en-us/nam/open-access-at-sage).

Abstract

Can the media effectively hold politicians accountable for making false claims? Journalistic fact-checking assesses the accuracy of individual public statements by public officials, but less is known about whether this process effectively imposes reputational costs on misinformation-prone politicians who repeatedly make false claims. This study therefore explores the effects of exposure to summaries of fact-check ratings, a new format that presents a more comprehensive assessment of politician statement accuracy over time. Across three survey experiments, we compared the effects of negative individual statement ratings and summary fact-checking data on favorability and perceived statement accuracy of two prominent elected officials. As predicted, summary fact-checking had a greater effect on politician perceptions than individual fact-checking. Notably, we did not observe the expected pattern of motivated reasoning: co-partisans were not consistently more resistant than supporters of the opposition party. Our findings suggest that summary fact-checking is particularly effective at holding politicians accountable for misstatements.

Keywords
Fact-checking, misinformation, accountability, political psychology
Fact-checking websites have changed how media outlets
cover politics. Sites like PolitiFact, FactCheck.org, and The
Washington Post Fact Checker seek to correct misinforma-
tion and hold politicians accountable by providing exten-
sive coverage of the accuracy of claims made by political
figures. Nevertheless, inaccurate claims made by politi-
cians continue to mar public debate. How can journalists
more effectively hold elites accountable when they spread
misinformation?
Extensive research has been conducted on the effects of
fact-checking on people’s factual beliefs (e.g., Flynn,
Nyhan and Reifler, 2017) and the effects of media scrutiny
on how the public views politicians (e.g., Snyder and
Strömberg, 2010). However, less is known about how and
under what conditions journalistic scrutiny might increase
the reputational costs to politicians for promoting misinfor-
mation (though see Nyhan and Reifler, 2015) — even
successful fact-checks that change respondents’ beliefs
about a false statement have relatively little effect on the
image of the offending politician (Nyhan et al., Forthcoming;
Swire-Thompson et al., Forthcoming).
One promising alternative approach is summary fact-
checking, which seeks to paint a more comprehensive pic-
ture of a politician’s accuracy by aggregating all existing
ratings of statements they have made. For example,
when Donald Trump said Hillary Clinton “wants to
abolish the Second Amendment,” PolitiFact conducted a
traditional (individual) fact-check of this singular state-
ment and rated it False using its “Truth-O-Meter” system
(Qiu, 2016). A summary fact-check, on the other hand,
would describe the distribution of fact-check ratings for
a given politician. For instance, PolitiFact editor Angie
Drobnic Holan wrote a New York Times op-ed in
December 2015 noting that the site had “fact-checked
more than 70 Trump statements and rated fully three-
quarters of them as Mostly False, False or ‘Pants on
Fire’” (Holan, 2015). By drawing on a larger number of
ratings, this form of fact-checking could potentially
provide stronger evidence of inaccuracy and have a
greater influence on how the public perceives politi-
cians than individual fact-checks do.
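To make the aggregation concrete, the sketch below shows one way a summary statistic like Holan's could be computed from a set of individual ratings. This is purely illustrative rather than any fact-checker's actual tooling; the labels follow PolitiFact's published Truth-O-Meter scale, and the sample ratings are hypothetical.

```python
from collections import Counter

# PolitiFact's published Truth-O-Meter labels, ordered from most to least accurate.
SCALE = ["True", "Mostly True", "Half True", "Mostly False", "False", "Pants on Fire"]
FALSE_LABELS = {"Mostly False", "False", "Pants on Fire"}

def summarize_ratings(ratings: list[str]) -> tuple[dict[str, float], float]:
    """Return the distribution of ratings and the share rated Mostly False or worse."""
    counts = Counter(ratings)
    total = len(ratings)
    distribution = {label: counts[label] / total for label in SCALE}
    share_false = sum(counts[label] for label in FALSE_LABELS) / total
    return distribution, share_false

# Hypothetical ratings for a single politician.
ratings = ["False", "Mostly False", "Half True", "Pants on Fire", "False", "Mostly True"]
distribution, share_false = summarize_ratings(ratings)
print(f"{share_false:.0%} of rated statements were Mostly False or worse.")  # 67%
```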
This study therefore compares the effects of summary
fact-checking data and individual fact-check ratings on
views of politicians who make misleading claims.
Consistent with our preregistered hypotheses, summary
fact-checking data reduced perceptions of politicians’ accu-
racy and favorability more than exposure to a negative indi-
vidual fact-check rating did. These results, which were not
consistently moderated by other factors such as partisan-
ship, political knowledge, or education, demonstrate that
fact-checking, especially when presented in a summary for-
mat, can play an important role in holding politicians
accountable for misleading statements.
Theory
Existing research offers mixed conclusions about the
effects of fact-checks and corrective information. Meta-
analyses conclude that corrections can moderately reduce
misinformation (Chan et al., 2017; Walter and Murphy,
2018). Similarly, recent work shows that individuals
update their perceptions in the direction of corrective
information (Wood and Porter, 2019). Several studies also
argue fact-checks can increase political knowledge and
affect voter behavior (e.g., Fridkin, Kenney and
Wintersieck, 2015; Gottfried et al., 2013), but others find
that fact-checks may have limited effects or be counter-
productive (e.g., Garrett, Nisbet and Lynch, 2013; Garrett
and Weeks, 2013). Fact-checks may be less effective
when a misperception is salient or invokes strong cues,
such as partisanship or outgroup membership (Flynn
et al., 2017).
However, less is known about the effects of summary
fact-checks, which aggregate fact-check ratings of politi-
cians, and how those effects compare to those of fact-
checks of individual statements by politicians. Though the
summary fact-check format is relatively uncommon, fact-
checkers and other media outlets increasingly provide these
statistics for politicians to help readers differentiate between
candidates who have made a few false statements and those
with long histories of spreading misinformation. For
instance, fact-checkers like Holan and media outlets
frequently compile multiple ratings of a given politician on
the PolitiFact’s Truth-O-Meter or the Pinocchios scale of
The Washington Post Fact Checker.
To date, most studies have focused on how fact-
checks affect belief accuracy. However, summary fact-
checking does not attempt to correct specific false or
misleading claims. We therefore assessed its effects on
perceptions of politicians (a key mechanism of demo-
cratic accountability) rather than factual beliefs. If the
images of politicians suffer as a result of getting repeat-
edly fact-checked, politicians would face a stronger rep-
utational incentive to avoid making false statements
(Nyhan and Reifler, 2015).
Our theoretical expectations were that people would be
less likely to dismiss a falsehood as an isolated incident and
would instead view a politician’s behavior as more prob-
lematic when presented with summary data, which offers
stronger evidence of a pattern of inaccuracy.[1] Exposure to
summary fact-checking might promote greater updating of
respondent views toward a candidate compared to a fact-
check of an individual statement through various mecha-
nisms. These include a memory-based “running tally” (e.g.,
Fiorina, 1981) in which candidate inaccuracy is more likely
to be registered as a negative consideration, online process-
ing of negative affect inspired by information about a sus-
tained record of inaccuracy (e.g., Lodge and Taber, 2005),
or a Bayesian process in which more information about
past inaccuracy leads to greater updating of candidate atti-
tudes (Zechman, 1979).[2]
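To illustrate the Bayesian mechanism with a stylized example (ours, not Zechman's original model), suppose a respondent's belief about a politician's accuracy rate follows a Beta prior, and each fact-check is treated as a draw that is either accurate or false. After observing k statements rated false out of n fact-checked statements, the posterior mean is

```latex
\theta \sim \mathrm{Beta}(a, b), \qquad
\mathbb{E}[\theta \mid k \text{ false of } n] = \frac{a + (n - k)}{a + b + n}.
```

Under a neutral Beta(2, 2) prior (mean 0.5), a single negative fact-check (n = 1, k = 1) moves the posterior mean only to 2/5 = 0.40, whereas a summary on the scale of Holan's (roughly n = 70, k = 53) moves it to 19/74 ≈ 0.26: more aggregated evidence produces greater updating.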
There are reasons to doubt this hypothesis, however.
First, the way in which information is presented can some-
times matter more than the strength of evidence presented.
For instance, one recent study found that a compelling nar-
rative about a single event was more important than broader
statistical information about a topic in changing public
opinion (Norris and Mullinix, Forthcoming). In addition,
Swire-Thompson et al. (Forthcoming) found that present-
ing numerous fact-checks only affected ratings of target
politicians when false statements outnumbered true ones
and even then generated very small effects. Adjudicating
between the effects of summary and individual fact-checks
thus merits scholarly attention.
We specifically proposed three hypotheses and two
research questions, all of which were preregistered. First,
drawing on the experimental literature supporting the effi-
cacy of fact-checks, we predicted that individuals exposed
to negative fact-checking of a politician in either format
would view that politician less favorably and perceive
them as less accurate (H1). For reasons discussed above,
we also predicted that summary fact-checking data would
have a larger effect on these outcome variables than an
individual fact-check rating would (H2). Finally, as per
theories of motivated reasoning (e.g., Kunda, 1990; Taber
and Lodge, 2006), we predicted that the favorability and
perceived accuracy of politicians would decrease more
when individuals viewed fact-checking of the opposition
party compared to their own (H3).
In addition, we proposed two related research questions,
asking whether participants’ political knowledge or level of
education would moderate the effects of fact-checking
(RQ1) and whether exposure to a summary or individual
fact-check would affect attitudes toward fact-checking
(RQ2). Existing evidence is limited on both points. Fact-
checking may be more effective among the politically
knowledgeable (Fridkin et al., 2015), but people who are
more sophisticated may also be more skilled at resisting
corrective information (Taber and Lodge, 2006). No pub-
lished studies examine the effects of fact-checking on atti-
tudes toward the practice.
The following sections discuss three survey experiments
that test these hypotheses and research questions. We
describe Study 1 in detail and more briefly review studies 2
and 3, which are slight variants of Study 1 that address limi-
tations in the design of prior studies.
Study 1
Methods
Prior to conducting the study, we preregistered the design,
hypotheses, and analysis plan in the EGAP archive, which is
an online platform where researchers preregister study
designs to promote scientific accountability.[3] The sample
consisted of 2825 participants recruited via Amazon
Mechanical Turk, an online marketplace frequently used to
recruit research participants (e.g., Berinsky, Huber and Lenz,
2012).[4] Data collection took place from May 7–10, 2016.
Participants were required to be US residents aged 18 or
older with at least a 95% HIT (“Human Intelligence Task”)
approval rating on Mechanical Turk. Demographically, our
sample mirrors other Mechanical Turk studies in being
younger and more liberal, educated, and white than the gen-
eral US population. Specific demographic distributions can
be found in Online Appendix C.
Experimental design
Our study used a 3 × 2 between-subjects design that
randomly varied fact-check type and politician parti-
sanship. Respondents were randomly assigned to one of
three treatments: an individual fact-check rating of a
fictitious statement about job creation, a summary fact-
check rating, or the control condition. Fact-checks used
in all three studies were negative, indicating that the
statements in question were not accurate. Participants
were also randomly assigned to a target politician:
Mitch McConnell (R-KY), the Senate majority leader
at the time, or Harry Reid (D-NV), the Senate minority
leader at the time. McConnell and Reid were chosen
because they belong to different parties but are compa-
rable figures.
The graphics in our individual fact-check rating treat-
ment were adapted from PolitiFact’s Truth-O-Meter; both
senators were presented as making the same false claim.
Participants in the summary fact-checking data condition
were exposed to a graphic adapted from The New York
Times (Holan, 2015) presenting either McConnell or Reid
as making more false statements than the average senator.
Figure 1 presents the graphics used for respondents in the
treatment conditions.[5]
Finally, participants in the control group were shown a
graphic displaying predicted weather for Des Moines,
Iowa. We included a caption for each graphic to ensure that
participants understood the information presented and to
match the format and design of the stimuli between condi-
tions as closely as possible (see Online Appendix A).
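The full design thus crosses three fact-check conditions with two target senators. The sketch below illustrates the assignment logic under our assumptions about the implementation; the study's actual survey-platform randomization may differ.

```python
import random

FACT_CHECK_TYPES = ["individual", "summary", "control"]
TARGET_POLITICIANS = ["McConnell", "Reid"]

def assign_condition(rng: random.Random) -> dict:
    """Independently randomize fact-check type and target politician (3 x 2 design)."""
    return {
        "fact_check_type": rng.choice(FACT_CHECK_TYPES),
        "target": rng.choice(TARGET_POLITICIANS),
    }

rng = random.Random(2016)  # arbitrary seed, for reproducibility only
assignments = [assign_condition(rng) for _ in range(2825)]  # Study 1 sample size
```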
Procedure
Participants were first required to provide informed consent
and their age. They then answered questions regarding their
demographics, party affiliation, and political knowledge
before the experimental manipulation. Demographic and
knowledge characteristics did not vary significantly
between our three experimental groups (see Table C1 in
Online Appendix C). After a brief task intended to conceal
the study’s purpose, participants answered questions meas-
uring three outcome variables: favorability toward
McConnell or Reid, perceived accuracy of that senator, and
favorability toward fact-checking (see Online Appendix A
for full survey text).[6] Participants were then debriefed and
compensated for their time.
Measures
Our study measured two primary outcome variables on
five-point scales: how often statements made by the sena-
tor are accurate from “never” (1) to “all of the time” (5)
and how favorable or unfavorable their views of the sena-
tor are from “very unfavorable” (1) to “very favorable”
(5). For our second research question, we asked four
questions about participants’ perceptions of fact-check-
ing (see Online Appendix A for details). We also consid-
ered several pre-treatment moderators. For H3, we
classified participants’ partisanship according to which
political party they identify with or lean toward. To
explore RQ1, we measured the education level and politi-
cal knowledge of participants. We classified those with a bachelor's degree or above as having a high level of education and those who correctly answered at least four of five questions on a standard political knowledge battery as having high knowledge (a median split).
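The moderator coding reduces to a few lines. The sketch below assumes a five-item knowledge battery scored 0/1 and a categorical education item; all column names are hypothetical rather than taken from the replication files.

```python
import pandas as pd

def code_moderators(df: pd.DataFrame) -> pd.DataFrame:
    """Code the pre-treatment moderators used for H3 and RQ1."""
    out = df.copy()
    # High knowledge: at least four of five battery items correct (a median split).
    knowledge_items = [f"know_{i}" for i in range(1, 6)]  # each scored 1 = correct
    out["high_knowledge"] = (out[knowledge_items].sum(axis=1) >= 4).astype(int)
    # High education: bachelor's degree or above.
    out["high_education"] = out["education"].isin(
        ["Bachelor's degree", "Postgraduate degree"]
    ).astype(int)
    return out
```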
Results
We analyzed the effects of our experiment using ordinary
least squares (OLS) regression with robust standard errors.[7]
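The estimator can be sketched in statsmodels as follows, run here on simulated data built from the Table 1 coefficients; the paper does not specify which heteroskedasticity-consistent variant it uses, so HC2 below is simply one common choice.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-in for the McConnell accuracy outcome; the real analysis uses survey data.
rng = np.random.default_rng(0)
df = pd.DataFrame({"condition": rng.choice(["control", "individual", "summary"], size=1435)})
df["accuracy_rating"] = (
    2.65
    - 0.16 * (df["condition"] == "individual")
    - 0.33 * (df["condition"] == "summary")
    + rng.normal(0, 0.9, size=len(df))
)

# OLS of the outcome on treatment indicators (control as reference category)
# with robust (heteroskedasticity-consistent) standard errors.
model = smf.ols(
    "accuracy_rating ~ C(condition, Treatment(reference='control'))",
    data=df,
).fit(cov_type="HC2")
print(model.params)
```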
Main effects of fact-check type
Consistent with our first preregistered hypothesis (H1),
exposure to either negative summary fact-checking data
or a negative individual fact-check rating led to signifi-
cantly lower accuracy and favorability ratings than we
observed in the control condition. These findings hold for
both outcome measures and both target politicians (see
Table 1). Consistent with H1, respondents provided with an individual fact-check rating (β = −0.16, SE = 0.05) or summary fact-checking data (β = −0.33, SE = 0.05) about McConnell rated the accuracy of his statements lower than those in the control condition. Results were substantively identical for those exposed to an individual fact-check rating (β = −0.22, SE = 0.06) or summary fact-checking data (β = −0.65, SE = 0.05) about Reid.
The pattern was the same for favorability ratings. When asked about McConnell's favorability, the individual fact-check rating (β = −0.28, SE = 0.06) and summary fact-checking data (β = −0.59, SE = 0.06) groups rated him lower than the control group did; the individual (β = −0.20, SE = 0.06) and summary (β = −0.62, SE = 0.06) groups assigned to Reid did the same.
Figure 1. Treatment graphics.
Respondents were randomly assigned to see one of the four fact-checking treatments portrayed here or to a control condition. See Online Appen-
dix A for question wording and stimulus materials.
Table 1. Effects of fact-check type on politician accuracy and favorability ratings.

                                  McConnell                 Reid
                              Accuracy  Favorability  Accuracy  Favorability
Individual fact-check rating   −0.16**    −0.28**      −0.22**    −0.20**
                               (0.05)     (0.06)       (0.06)     (0.06)
Summary fact-checking data     −0.33**    −0.59**      −0.65**    −0.62**
                               (0.05)     (0.06)       (0.05)     (0.06)
Constant (control mean)         2.65**     2.54**       2.99**     2.92**
                               (0.04)     (0.04)       (0.04)     (0.04)
Summary − individual           −0.17**    −0.31**      −0.44**    −0.42**
                               (0.05)     (0.06)       (0.05)     (0.06)
N                               1435       1435         1393       1393

*p < .05, **p < .01 (two-sided). OLS = ordinary least squares models with robust standard errors.
Our second preregistered hypothesis (H2) predicted that the summary fact-checking data group would rate politicians lower than those exposed to negative individual fact-check ratings. The results corresponded directly with our hypothesis: the summary fact-checking data group rated McConnell lower on accuracy (β = −0.17, SE = 0.05) and favorability (β = −0.31, SE = 0.06) than did the individual fact-check rating group. Similarly, respondents assigned to unfavorable summary fact-checking data about Reid rated him lower on accuracy (β = −0.44, SE = 0.05) and favorability (β = −0.43, SE = 0.06) than did those who saw a rating of an individual statement by Reid.
Figure 2 illustrates the substantive magnitude of these effects.

Figure 2. Perceived accuracy and candidate favorability by fact-check type.
Means by condition (control, individual fact-check rating, or summary fact-checking data); see Online Appendix A for question wording and stimulus materials.
As displayed in Figure 2a, control group participants
perceived the senators’ statements as more accurate
(McConnell: mean = 2.64, SE = 0.04; Reid: mean =
2.98, SE = 0.04) than did the individual fact-checking
group (McConnell: mean = 2.49, SE = 0.04; Reid: mean
= 2.77, SE = 0.04), but the summary fact-checking group
rated them as least accurate of all (McConnell: mean =
2.32, SE = 0.03; Reid: mean = 2.33, SE = 0.03). Figure
2b presents similar results for favorability ratings.
Participants in the control condition viewed the senators
more favorably (McConnell: mean = 2.54, SE = 0.04;
Reid: mean = 2.92, SE = 0.04) than those who viewed
individual fact-checks (McConnell: mean = 2.26, SE =
0.04; Reid: mean = 2.72, SE = 0.05). Those who viewed
summary fact-checking data rated the senators the least
favorably (McConnell: mean = 1.95, SE = 0.04; Reid:
mean = 2.30, SE = 0.04).
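Because the models include only treatment indicators, each Table 1 coefficient should equal the corresponding difference in condition means. A quick consistency check (our addition) against the McConnell accuracy means reported above:

```python
# Condition means for McConnell accuracy reported above.
control, individual, summary = 2.64, 2.49, 2.32

print(round(individual - control, 2))  # -0.15, vs. the -0.16 coefficient (rounding)
print(round(summary - control, 2))     # -0.32, vs. the -0.33 coefficient (rounding)
```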
Party interactions
To check for directionally motivated reasoning (H3), we estimated heterogeneous treatment effects by party in Table 2.[8] Contrary to our hypothesis, these models did not suggest a directionally motivated response to negative fact-checking information. For instance, we expected the negative effect of summary fact-checking data on perceptions of Reid's accuracy to be greater among Republicans than among independents and Democrats. Instead, the negative effect was greater among Democrats than among both independents (β = −0.54, SE = 0.15) and Republicans (β = −0.25, SE = 0.11); the latter estimate represents the difference between the two summary fact-check interaction coefficients in Table 2 (−0.54 − (−0.29) = −0.25). We therefore do not discuss H3 further.

Table 2. Effects of fact-check type by party.

                                  McConnell                 Reid
                              Accuracy  Favorability  Accuracy  Favorability
Individual fact-check rating   −0.40*     −0.52**       0.01       0.01
                               (0.16)     (0.17)       (0.16)     (0.16)
Individual FC × Democrat        0.26       0.25        −0.21      −0.22
                               (0.17)     (0.19)       (0.18)     (0.17)
Individual FC × Republican      0.26       0.29        −0.26      −0.13
                               (0.19)     (0.20)       (0.19)     (0.19)
Democrat (with leaners)        −0.13      −0.36**       0.64**     0.72**
                               (0.12)     (0.13)       (0.13)     (0.12)
Republican (with leaners)       0.30*      0.34*        0.19      −0.15
                               (0.12)     (0.14)       (0.14)     (0.13)
Summary fact-checking data     −0.40**    −0.70**      −0.24      −0.45**
                               (0.13)     (0.17)       (0.14)     (0.15)
Summary FC × Democrat           0.14       0.11        −0.54**    −0.31
                               (0.15)     (0.19)       (0.15)     (0.16)
Summary FC × Republican        −0.04       0.13        −0.29       0.09
                               (0.16)     (0.20)       (0.17)     (0.19)
Constant                        2.65**     2.67**       2.53**     2.51**
                               (0.10)     (0.12)       (0.12)     (0.11)
N                               1435       1435         1393       1393

*p < .05, **p < .01 (two-sided). OLS = ordinary least squares models with robust standard errors. FC = fact-check.
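Following Brambor, Clark and Golder (2006) (see Note 8), these models interact the treatment indicators with party while retaining all constituent terms. A statsmodels sketch on simulated stand-in data; the column names and values are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-in for the Reid responses; the real analysis uses the survey data.
rng = np.random.default_rng(1)
n = 1393
df = pd.DataFrame({
    "condition": rng.choice(["control", "individual", "summary"], size=n),
    "party3": rng.choice(["Democrat", "Republican", "Independent"], size=n),
    "accuracy_rating": rng.normal(2.5, 0.9, size=n),  # placeholder outcome
})

# Treatment x party interactions with all constituent terms, as in Table 2.
model = smf.ols(
    "accuracy_rating ~ C(condition, Treatment(reference='control'))"
    " * C(party3, Treatment(reference='Independent'))",
    data=df,
).fit(cov_type="HC2")
print(model.params)
```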
Research questions
Our first research question asked whether political knowl-
edge or education would moderate treatment effects overall
or among partisans. The results were not consistent. Both fact-check types reduced McConnell favora-
bility ratings more among people with low knowledge than
among those with high knowledge. However, summary
fact-checking data reduced Reid accuracy ratings more
among respondents with high knowledge. Our findings are
thus inconclusive. For our education results, we compared
participants with and without a bachelor’s degree. We
found only one significant difference in fact-check effects
by education. Thus, we cannot conclude that education
affects participants’ responses toward fact-checks of either
type.[9] Similarly, fact-checking exposure had no measurable
effect on perceptions of fact-checking for the four outcome
measures we examined: favorability toward fact-checking,
demand for more fact-checking, and the perceived accu-
racy and fairness of fact-checking.[10] We obtained similar
results for both research questions in Studies 2 and 3 (see
Online Appendix C for full results from all three studies)
and therefore do not discuss them further.
Discussion
The results of this study confirmed that summary fact-
checking is more effective at influencing the perceived
accuracy and favorability of selected politicians than is
individual fact-checking. These results did not vary by
party or other preregistered moderators. The design we
employed has two principal limitations. First, the summary
fact-check we tested also includes fact-check information
for an average senator; perhaps comparisons to the average
senator drove the summary fact-check’s larger effects rather
than the rating aggregation itself. Second, summary fact-
checks explicitly lay out the politician’s record of accuracy,
while an individual fact-check is centered around a single
statement. Asking about overall accuracy might result in
participants repeating back what they saw in the summary,
rather than acting on an updated belief. We conducted two
additional experiments to address these concerns.[11]
Study 2
Study 2 replicated Study 1 except for two changes. In
addition to measuring accuracy and favorability ratings of
senators Reid and McConnell, we included a new ques-
tion that tested participants’ perceived accuracy of a new
statement putatively made by the senator about whom
they saw a fact-check (“Nevada/Kentucky has more pri-
vate sector jobs than ever before”). This measure addressed
a potential concern about response bias in Study 1. By
asking respondents to rate a novel statement, we could
better test whether respondents were actively updating
their beliefs rather than merely reporting what they saw in
the fact-check graphics. In addition, we altered the sum-
mary fact-check graphic to remove the comparison to an
average senator to address a potential design confound.
This change clarified whether the aggregate information
provided by the summary fact-check accounted for its
stronger effects or whether they were the result of a con-
trast effect with an average senator.[12] Study 2 was
conducted November 4–8, 2016 among a sample that was
similar demographically to Study 1’s sample. It was also
preregistered at EGAP.
Results
Study 2 results were similar to Study 1 for the two existing outcome measures (see Table B1). As in Study 1, respondents perceived the general accuracy of statements made by McConnell and Reid as lower when exposed to summary fact-checking versus an individual fact-check rating (−0.11, p < .05 for McConnell; −0.43, p < .01 for Reid). We again found that favorability toward the target politician was reduced more by summary fact-checking information versus a fact-check of an individual statement, though the results were not statistically significant for McConnell (−0.09, p < .20 for McConnell; −0.36, p < .01 for Reid). However, our fact-checking manipulation did not have the anticipated effect on our new outcome measure, an accuracy rating of a new statement by the target politician. Participants shown a fact-check of an individual statement by McConnell (β = −0.42, SE = 0.04) actually rated the new statement about more jobs being created as significantly less accurate than those shown summary fact-checking data (β = −0.32, SE = 0.04; difference = 0.10, p < .05). In addition, the difference in the accuracy rating of the new statement by Reid was null for participants shown an individual fact-check rating (β = −0.33, SE = 0.04) and those shown summary fact-checking data (β = −0.35, SE = 0.04).
Discussion
Study 2 largely replicated the results of Study 1. Summary fact-
checking information typically had more negative effects on
the perceived accuracy of a politician and favorability toward
that figure than an individual fact-check rating did. However,
we found an anomalous result in how respondents evaluated
the accuracy of a new statement attributed to the politician in
question. This difference in accuracy rating may have been the
result of an inadvertent confound between the topic of the indi-
vidual fact-check (a claim about job creation during their ten-
ure as majority leader) and the topic of the novel statement that
participants rated afterward (private sector jobs in the majority
leader’s state). We therefore replicated our findings in Study 3
using a design that removed this confound.
Study 3
Study 3 corrected a confound in the design of Study 2. Due to
concerns about the close conceptual relationship between the
topic for the fact-check of an individual statement (job crea-
tion) and the topic of the new statement whose accuracy
respondents were asked to assess in Study 2 (private sector
jobs), respondents were instead asked in Study 3 to evaluate
the accuracy of the following statement from either McConnell
or Reid: “I haven’t switched my position on the Trans-Pacific
Partnership trade deal.” Study 3 was conducted January 12–
16, 2017, had sample demographics that were similar to those
in the previous studies, and was also preregistered with EGAP.
Results
Study 3 replicated the findings in studies 1 and 2 (see Table
B2 in Online Appendix B). The perceived accuracy of
statements made by McConnell and Reid and favorability toward them were lower when respondents were shown summary fact-checking data compared to an individual fact-check rating (p < .01 in each case). Most notably, when the topic confound in Study 2 was removed (by changing the topic of the novel statement), participants shown summary fact-checking data on McConnell (β = −0.37, SE = 0.05) rated the new statement as less accurate than those shown an individual fact-check rating of McConnell (β = −0.17, SE = 0.05; difference = −0.20, p < .01). Those shown summary fact-checking data on Reid (β = −0.46, SE = 0.05) also rated the additional statement as less accurate than participants shown an individual fact-check rating of Reid (β = −0.24, SE = 0.05; difference = −0.22, p < .01).
Discussion
The results of Study 3 helped explain the unexpected find-
ing in Study 2, where participants who saw an individual
fact-check rating viewed a new statement by that politi-
cian as less accurate than those who saw summary fact-
checking information. We hypothesized that this finding
was the result of the topic of the fact-check graphic and
the novel statement being closely related. When this con-
found was removed and we asked respondents to evaluate
a novel statement on an unrelated issue, we found the
expected relationship: participants who were shown sum-
mary fact-checking data rated the novel statement as less
accurate than those who were shown an individual fact-
check rating.
Conclusion
Summary fact-checking data had significantly greater
effects on perceptions of political figures than fact-
check ratings of an individual statement did. Compared
to respondents who saw an unfavorable or negative fact-
check rating of a single statement, those who saw unfa-
vorable summary fact-checking data viewed the
politicians in question less favorably and perceived state-
ments they made as less accurate. These effects were also
not consistently moderated by other factors, including
partisan affiliation, political knowledge, or education.
The lack of partisan heterogeneity is particularly impor-
tant given frequent concerns that directionally moti-
vated reasoning undermines fact-checking effectiveness
(e.g., Graves and Glaisyer, 2012).
These results suggest that news organizations should use
summary fact-checking to encourage responsible conduct
by political figures. However, caution is still required. First,
fact-checking individual statements is still the best way to
set the record straight about a specific claim. In addition,
reporters and editors must consider whether aggregated
fact-checks accurately represent a political figure’s overall
record or will leave a distorted impression (Uscinski and
Butler, 2013).
Future research should consider other research ques-
tions and approaches we did not evaluate. First, it would
be valuable to test fact-checks of non-quantitative claims
as well as different stimulus graphics or ratings.
Additional studies could also consider more controver-
sial targets or issues, vary the source of fact-checks, or
test the effects of positive fact-checks. Second, we did
not directly assess factual beliefs about a specific state-
ment. Third, it would be worthwhile to further investigate
the mechanisms for this effect (a difficult question under
any circumstances).
Nonetheless, these results are an important first step
toward understanding the new summary fact-checking for-
mat, which we found had greater effects on perceptions of
politicians than did an individual fact-check rating. By
increasing the reputational risk of making false claims in
this way, it may help to discourage politicians from promot-
ing misinformation in the first place.
Authors’ note
Brendan Nyhan is professor of Government at Dartmouth College,
Hanover, USA. Other co-authors are former undergraduate stu-
dents at Dartmouth. Alexander Agadjanian is currently a research
associate in the MIT Election Lab.
Acknowledgements
We thank the Dartmouth College Office of Undergraduate
Research for generous funding support. Our title comes from The
Washington Post headline on a letter to the editor about fact-
checking (Morris, 2013).
Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with
respect to the research, authorship, and/or publication of this
article.
Funding
The author(s) disclosed receipt of the following financial support
for the research, authorship, and/or publication of this article: The
authors received funding support from the Dartmouth College
Office of Undergraduate Research.
ORCID iD
Alexander Agadjanian https://orcid.org/0000-0002-6756-8472
Supplemental materials
The supplemental files are available at http://journals.sagepub.com/
doi/suppl/10.1177/2053168019870351
The replication files are available at https://dataverse.harvard.edu/
dataset.xhtml?persistentId=doi:10.7910/DVN/6B9C1N
Notes
1. Concerns about selection bias and subjectivity still apply,
however (see e.g., Uscinski and Butler, 2013).
2. Pinpointing the exact mechanism(s) is beyond the scope of
this study but is identified as a key topic for future research
in the Conclusion.
3. Anonymized preregistrations for all three studies are
attached. Deviations from the preregistered study plan are
noted in Footnote 9 below.
4. An additional 719 respondents were excluded because they
participated in a pilot study or did not consent to participate.
5. See Online Appendix A for the full survey instrument and
stimulus materials, including the graphic used in the control
conditions. The treatment graphics were designed to resem-
ble real-world stimuli as closely as possible (see Online
Appendix C for examples).
6. Respondents were also asked about their perceptions of
the other senator to whom they were not assigned, but we
excluded those responses from all analyses because of the
possibility of a contrast effect.
7. OLS allowed us to better communicate effect sizes and
confidence intervals than analysis of variance, and directly
estimate the causal quantities of interest (e.g., the average
treatment effect) with minimum assumptions. However,
equivalent ordered probit models are also provided in Online
Appendix C for all OLS models in the main text. As per our
preregistration, we analyzed the results separately by candi-
date rather than pooling because we rejected the null of no
difference in treatment effects by fact-checking target (i.e.,
politician) for at least one outcome variable in each study.
8. These preregistered models and all others testing for het-
erogeneous treatment effects in the main text and in Online
Appendix C include interactions between our treatment con-
ditions and the potential moderators in question as well as all
constituent terms as per Brambor, Clark and Golder (2006).
9. See Online Appendix C for the results of these simple
exploratory models as well as our preregistered analyses,
which instead interact the treatments with both partisan indi-
cators as well as linear or tercile measures of education or
knowledge.
10. These results did not scale together well in a factor analysis.
As per our preregistration, we therefore analyzed each out-
come measure separately.
11. In each subsequent study, we excluded respondents who had
taken part in earlier studies.
12. See Online Appendix B for the altered graphic. No changes
were made to the individual fact-check rating treatment.
Carnegie Corporation of New York Grant
The open access article processing charge (APC) for this article
was waived due to a grant awarded to Research & Politics from
Carnegie Corporation of New York under its ‘Bridging the Gap’
initiative.
References
Berinsky AJ, Huber GA and Lenz GS (2012) Evaluating online
labor markets for experimental research: Amazon.com’s
Mechanical Turk. Political Analysis 20(3): 351–368.
Brambor T, Roberts Clark W and Golder M (2006) Understanding
interaction models: Improving empirical analyses. Political
Analysis 14(1): 63–82.
Chan M-pS, Jones CR, Hall Jamieson K and Albarracín D (2017)
Debunking: A meta-analysis of the psychological efficacy of
messages countering misinformation. Psychological Science
28(11): 1531–1546.
Fiorina MP (1981) Retrospective Voting in American National
Elections. New Haven, CT: Yale University Press.
Flynn DJ, Nyhan B and Reifler J (2017) The nature and origins of
misperceptions: Understanding false and unsupported beliefs
about politics. Advances in Political Psychology 38(S1): 127–
150.
Fridkin K, Kenney PJ and Wintersieck A (2015) Liar, liar, pants
on fire: How fact-checking influences citizens’ reactions
to negative advertising. Political Communication 32(1):
127–151.
Garrett RK and Weeks BE (2013) The promise and peril of real-
time corrections to political misperceptions. In: Proceedings
of the 2013 Conference on Computer Supported Cooperative
Work (CSCW ‘13). New York, NY, USA: ACM. pp. 1047–
1058.
Garrett RK, Nisbet EC and Lynch EK (2013) Undermining the
corrective effects of media-based political fact checking?
The role of contextual cues and naïve theory. Journal of
Communication 63(4): 617–637.
Gottfried JA, Hardy BW, Winneg KM, et al. (2013) Did fact
checking matter in the 2012 presidential campaign?
American Behavioral Scientist 57(11): 1558–1567.
Graves L and Glaisyer T (2012) The fact-checking universe in
spring 2012: An overview. Media Policy Initiative Research
Paper. Washington, D.C.: New America Foundation.
Holan AD (2015) All politicians lie. Some lie more than others.
New York Times, 11 December, 2015. Available at: http://
www.nytimes.com/2015/12/13/opinion/campaign-stops/all-
politicians-lie-some-lie-more-than-others.html (accessed 22
May 2016).
Kunda Z (1990) The case for motivated reasoning. Psychological
Bulletin 108(3): 480–498.
Lodge M and Taber CS (2005) The automaticity of affect for
political leaders, groups, and issues: An experimental test
of the hot cognition hypothesis. Political Psychology 26(3):
455–482.
Morris J (2013) Counting the Pinocchios. The Washington Post,
May 24. Available at: https://www.washingtonpost.com/
opinions/counting-the-pinocchios/2013/05/24/82f07d38-
c2f3-11e2-9642-a56177f1cdf7_story.html (accessed 20 May
2016).
Norris RJ and Mullinix KJ (Forthcoming) Framing innocence: An
experimental test of the effects of wrongful convictions on
public opinion. Journal of Experimental Criminology.
Nyhan B, Porter E, Reifler J, et al. (Forthcoming) Taking fact-
checks literally but not seriously? The effects of journalistic
fact-checking on factual beliefs and candidate favorability.
Political Behavior.
Nyhan B and Reifler J (2015) The effect of fact-checking on
elites: A field experiment on U.S. state legislators. American
Journal of Political Science 59(3): 628–640.
Qiu L (2016) Donald Trump falsely claims Hillary Clinton
“wants to abolish the 2nd Amendment.” PolitiFact, May 11.
Available at: https://www.politifact.com/truth-o-meter/statements/2016/may/11/donald-trump/donald-trump-falsely-claims-hillary-clinton-wants-/ (accessed 8 July 2019).
Snyder JM and Strömberg D (2010) Press coverage and political
accountability. Journal of Political Economy 118(2): 355–408.
Swire-Thompson B, Ecker UKH, Lewandowsky S, et al.
(Forthcoming) They might be a liar but they’re my liar:
Source evaluation and the prevalence of misinformation.
Political Psychology.
Taber CS and Lodge M (2006) Motivated skepticism in the evalu-
ation of political beliefs. American Journal of Political
Science 50(3): 755–769.
Uscinski JE and Butler RW (2013) The epistemology of fact
checking. Critical Review 25(2): 162–180.
Walter N and Murphy ST (2018) How to unring the bell: A
meta-analytic approach to correction of misinformation.
Communication Monographs 85(3): 423–441.
Wood T and Porter E (2019) The elusive backfire effect: Mass
attitudes’ steadfast factual adherence. Political Behavior
41(1): 135–163.
Zechman MJ (1979) Dynamic models of the voter’s decision
calculus: Incorporating retrospective considerations into
rational-choice models of individual voting behavior. Public
Choice 34(3–4): 297–315.