When Corrections Fail:
The persistence of political misperceptions
Brendan Nyhan
Duke University
brendan.nyhan@duke.edu
Jason Reifler
Georgia State University
poljar@langate.gsu.edu
February 2, 2008
Abstract
An extensive literature addresses citizen ignorance, but very little research focuses on
misperceptions. Can these false or unsubstantiated beliefs about politics be corrected?
Previous studies have not tested the efficacy of corrections in a realistic format. We
conducted four experiments in which subjects read mock news articles that included
either a misleading claim from a politician, or a misleading claim and a correction.
Results indicate that corrections frequently fail to reduce misperceptions among the
targeted ideological group. We also document several instances of a “backfire” effect in
which corrections actually increase misperceptions among the group in question.
“It ain’t what you don’t know that gets you into trouble. It’s what you
know for sure that just ain’t so.”
-Mark Twain
A substantial amount of scholarship in political science has sought to determine whether
citizens can participate meaningfully in politics. Recent work has shown that most
citizens appear to lack factual knowledge about political matters (see, e.g., Delli Carpini
and Keeter 1996) and that this deficit affects the issue opinions that they express (Althaus
1998, Kuklinski et al 2000, Gilens 2001). Some scholars respond that citizens can
successfully use heuristics, or information shortcuts, as a substitute for detailed factual
information in some circumstances (Popkin 1991; Sniderman, Brody and Tetlock 1991;
Lupia 1994; Lupia and McCubbins 1998).1
However, as Kuklinski et al point out (2000: 792), there is an important
distinction between being uninformed and being misinformed. Advocates of heuristics
typically assume that voters know they are uninformed and respond accordingly. But
many citizens may base their policy preferences on false, misleading, or unsubstantiated
information that they believe to be true (see, e.g., Kuklinski et al 2000: 798). Frequently,
such misinformation is related to one’s political preferences. For instance, after the U.S.
invasion of Iraq, the belief that Iraq had weapons of mass destruction before the invasion
was closely associated with support for President Bush (Kull, Ramsay, and Lewis 2003).
From a normative perspective, it is especially important to determine whether
misperceptions, which distort public opinion and political debate, can be corrected.
Previous research in political science has found that it is possible to change issue
opinions by directly providing relevant facts to subjects (Kuklinski et al 2000, Gilens
1 Kuklinski and Quirk (2000) and Lau and Redlawsk (2001) make a compelling argument that citizens are
likely to fail to use heuristics correctly in even modestly complex situations.
2001). However, such authoritative statements of fact (such as those provided by a survey
interviewer to a subject) are not reflective of how citizens typically receive and process
information. Instead, people typically receive corrective information within “objective”
news reports pitting two sides of an argument against each other, which is significantly
more ambiguous than receiving a correct answer from an omniscient source. In such
cases, citizens are likely to resist or reject arguments and evidence contradicting their
opinions – a view that is consistent with a wide array of research (e.g. Lord, Ross, and
Lepper 1979; Edwards and Smith 1996; Redlawsk 2002; Taber and Lodge 2006).
In this paper, we report the results of two rounds of experiments investigating the
extent to which corrective information embedded in realistic news reports succeeds in
reducing prominent misperceptions about contemporary politics. In each of the four
experiments, which were conducted in fall 2005 and spring 2006, ideological subgroups
failed to update their beliefs when presented with corrective information that runs counter
to their predispositions. Indeed, in several cases, we find that corrections actually
strengthened misperceptions among the most strongly committed subjects.
DEFINING MISPERCEPTIONS
To date, the study of citizens’ knowledge of politics has tended to focus on questions like
veto override requirements for which answers are clearly true or false (e.g. Delli Carpini
and Keeter 1996). As such, studies have typically contrasted voters who lack factual
knowledge (i.e. the “ignorant”) with voters who possess it (e.g. Gilens 2001). But as
Kuklinski et al (2000) note, some voters may unknowingly hold incorrect beliefs,
especially on contemporary policy issues on which politicians and other political elites
may have an incentive to misrepresent factual information.
In addition, the factual matters that are the subject of contemporary political
debate are rarely as black and white as standard political knowledge questions. As Kuklinski
et al write, “Very often such factual representations [about public policy] are not prior to
or independent of the political process but arise within it. Consequently, very few factual
claims are beyond challenge; if a fact is worth thinking about in making a policy choice,
it is probably worth disputing” (1998: 148). We must therefore rely on a less stringent
standard in evaluating people’s factual knowledge about politics in a contemporary
context. One such measure is the extent to which beliefs about controversial factual
matters square with the best available evidence and expert opinion. Accordingly, we
define misperceptions as cases in which people’s beliefs about factual matters are not
supported by clear evidence and expert opinion – a definition that includes both false and
unsubstantiated beliefs about the world.
To illustrate the point, it is useful to compare our definition with Gaines et al
(2007), an observational study that analyzed how students update their beliefs about the
war in Iraq over time. They define the relevant fact concerning Iraqi WMD as knowing
that weapons were not found and describe the (unsupported) belief that Iraq hid or moved
its WMD before the U.S. invasion as an “interpretation” of that fact. Our approach is
different. Based on the evidence presented in the Duelfer Report, which was not directly
disputed by the Bush administration, we define the belief that Saddam moved or hid
WMD before the invasion as a misperception.
PREVIOUS RESEARCH ON CORRECTIONS
Surprisingly, only two major studies in political science consider the effects of attempts
to correct factual ignorance or misperceptions. First, Kuklinski et al (2000) conducted
two experiments attempting to counter misperceptions about federal welfare programs. In
the first, which was part of a telephone survey of Illinois residents, randomly selected
treatment groups were given either a set of relevant facts about welfare or a multiple-
choice quiz about the same set of facts. These groups and a control group were then
asked for their opinions about two welfare policy issues. Kuklinski and his colleagues
found that respondents had highly inaccurate beliefs about welfare generally; that the
least informed people expressed the highest confidence in their answers; and that
providing the relevant facts to respondents had no effect on their issue opinions (nor did
it in an unreported experiment about health care). In a later experiment conducted on
college students, they asked subjects how much of the national budget is spent on welfare
and how much should be spent. Immediately afterward, the experimental group was
provided with the correct answer to the first question. Unlike the first experiment, this
more blunt treatment did change their opinions about welfare policy.
Gilens (2001) also conducted an experiment in which survey interviewers
provided relevant facts to subjects before asking about their opinions on topical issues
(crime and foreign aid). Like the second Kuklinski et al experiment (but unlike the first
one), he found that this manipulation significantly changed respondents’ issue opinions.
(His study focused on factual ignorance and did not investigate misinformation as such.)
While both studies make significant contributions to our understanding of the
effect of factual corrections on issue opinions, neither considers the effectiveness of
corrective information in causing subjects to revise their factual beliefs. In addition, the
corrective information in both studies was presented directly to subjects as truth. Under
normal circumstances, however, citizens are rarely provided with such definitive
corrections. Instead, they typically receive corrective information in news reports that are
less authoritative and direct. As a result, we believe it is imperative to study the
effectiveness of corrections in news reports, particularly given the increasing demands
from press critics for a more aggressive approach to fact-checking (e.g. Cunningham
2003). While it is important to establish that preference change can happen after an
authoritative correction, we seek to investigate a more fundamental question – do citizens
revise their factual beliefs after receiving corrective information in a realistic format?
THEORETICAL EXPECTATIONS
A wide array of research indicates that the way citizens process information frequently
varies depending on their previous beliefs. In particular, information that is perceived to
be incongruent with subjects’ views is likely to be resisted. For instance, numerous
studies in psychology have shown that people display biases in evaluating arguments and
evidence, favoring those that reinforce their existing views and disparaging those that
contradict their existing views (see, e.g., Lord, Ross, and Lepper 1979; Edwards and
Smith 1996). Similarly, subjects with high levels of belief in a just world (the belief that
people get what they deserve) are more likely to see innocent victims as responsible for
bad outcomes (see Furnham 2003 for a review). Communications research on the “hostile
media effect” shows that people routinely see news reports as biased against their own
point of view (Arpan and Raney 2003; Schmitt, Gunther and Liebhart 2004; Gunther and
Schmitt 2004; Tsfati and Cohen 2005). Finally, in political science, Taber and Lodge
(2006) found that subjects tended to rate attitudinally congruent arguments as stronger
than incongruent ones and spent more time counter-arguing incongruent arguments.
In addition, numerous studies show that subjects who are exposed to information
that runs counter to their political preferences frequently come to support their original
opinion even more strongly – a “backlash” effect. For instance, in a dynamic process
tracing experiment, Redlawsk (2002) finds that subjects who were not given a memory-
based processing prime came to view their preferred candidate even more positively after
being exposed to negative information about the candidate. Peffley and Hurwitz (2007)
find that when whites are told that the death penalty is applied in a discriminatory fashion
against blacks, they actually become more supportive of it. Finally, Howell and Kriner
(n.d.) find that hearing a Democrat argue against using military force in some cases
causes Republicans to become more supportive of doing so. We expect that such a
backlash will take place on some questions of fact as well. In other words, citizens who
receive a correction that conflicts with their political views may actually shift their
factual beliefs in the wrong direction in response.2
We thus have three hypotheses about the effect of corrections on misperceptions:
Hypothesis 1: Motivated reasoning
The effect of corrections on misperceptions will be moderated by ideology.
2 It may be noted that our theory appears to correspond in some respects to that of Zaller (1992), who
proposed the Receive-Accept-Sample model of the survey response. In particular, our discussion of
resistance to contradictory evidence is analogous to his discussion of whether subjects accept a frame.
However, his model does not predict the potential backlash effect we describe above. The inclusion of a
correction after a misleading statement corresponds to what Zaller describes as a shift from a one-sided to a
two-sided information flow, but his model would predict that new information should decrease
misperceptions among the group that is ideologically favorable to the correction, not cause a backlash
among those who dislike it for ideological reasons (185-215).
Hypothesis 2a: Resistance to corrections
Corrections will fail to reduce misperceptions among the ideological subgroup
that is likely to hold the misperception.
Hypothesis 2b: Correction backfire
In some cases, the interaction between corrections and ideology will be so strong
that misperceptions will increase for the ideological subgroup in question.
To fix ideas, define our dependent variable Y as a measure of misperceptions (in
practice, a five-point Likert scale in which higher values indicate greater levels of
agreement with a statement of the misperception). We wish to estimate the effect of a
correction treatment to see if it will reduce agreement with the misperception. However,
we expect that the marginal effect of the correction will vary with ideology, which we
define as the relevant measure of predispositions in a general political context (in
practice, a seven-point Likert scale from “very liberal” to “very conservative”). Thus,
we must include an interaction between ideology and the correction in our specification.
Finally, we include a control variable for political knowledge, which is likely to be
negatively correlated with misperceptions, to improve the efficiency of our statistical
estimation. We therefore estimate the following equation:
$Y = \beta_0 + \beta_1 \cdot \text{Correction} + \beta_2 \cdot \text{Ideology} + \beta_3 \cdot \text{Correction} \times \text{Ideology} + \beta_4 \cdot \text{Knowledge}$ (1)
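To make the specification concrete, the following is a minimal sketch of how Equation 1 could be estimated as an ordered probit in Python with statsmodels. This is illustrative only, not the authors' code (the paper reports ordered probit estimates and used S-Post in Stata for post-estimation), and the data frame and column names are hypothetical placeholders.

```python
# A minimal sketch of Equation 1 as an ordered probit. Hypothetical column
# names and data; not the authors' actual estimation code.
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

def fit_correction_model(df: pd.DataFrame):
    # Five-point Likert response treated as an ordered categorical outcome.
    endog = df["misperception"].astype(
        pd.CategoricalDtype(categories=[1, 2, 3, 4, 5], ordered=True)
    )
    # Equation 1's regressors. An ordered probit has no free intercept:
    # the estimated cutpoints play the role of beta_0.
    exog = pd.DataFrame({
        "correction": df["correction"],                      # 0/1 treatment
        "ideology": df["ideology"],                          # centered, -3 to 3
        "correction_x_ideology": df["correction"] * df["ideology"],
        "knowledge": df["knowledge"],                        # 0-5 additive scale
    })
    return OrderedModel(endog, exog, distr="probit").fit(method="bfgs", disp=False)
```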
Using Equation 1, we can formalize the three hypotheses presented above.
Hypothesis 1, which predicts that the effect of the correction will be moderated by
ideology, implies that the coefficient for the interaction between correction and ideology
will not equal zero ($\beta_3 \neq 0$).3 Hypothesis 2a, which predicts that the correction will fail to
reduce misperceptions among the ideological subgroup that is likely to hold the
misperception, implies that the marginal effect of the correction will not be statistically
distinguishable from zero for the subgroup ($\beta_1 + \beta_3 \times \text{Ideology} = 0$ for liberals or
conservatives). Alternatively, Hypothesis 2b predicts that the correction will sometimes
increase misperceptions for the ideological subgroup in question, implying that the
marginal effect will be greater than zero for the subgroup ($\beta_1 + \beta_3 \times \text{Ideology} > 0$).4
All of these hypotheses are problematic from the perspective of democratic
theory, but the prospect that corrections can backfire is especially troubling. As shown
below, this threat is very real when salient issues and realistic stimuli are employed.
RESEARCH DESIGN
To evaluate the effects of corrective information, we conducted four experiments in
which subjects read mock newspaper articles containing a statement from a political
figure that reinforces a widespread misperception. Participants were randomly assigned
to read articles that either included or did not include corrective information immediately
after a false or misleading statement (see appendix for the full text of all four articles).
They were then asked to answer a series of factual and opinion questions.
3 The signs of the coefficients will vary in practice depending on whether misperceptions are more likely
among liberals or conservatives.
4 Specifically, we expect that the 95% confidence interval for the marginal effect will not include zero. Its
standard error is $\sqrt{\mathrm{var}(\hat{\beta}_1) + \text{Ideology}^2 \cdot \mathrm{var}(\hat{\beta}_3) + 2 \cdot \text{Ideology} \cdot \mathrm{cov}(\hat{\beta}_1, \hat{\beta}_3)}$ (Brambor, Clark, and Golder 2006).
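This marginal effect and its delta-method confidence interval can be computed directly from a fitted model's coefficients and covariance matrix. A sketch, reusing the hypothetical fit_correction_model() result from the earlier example (assuming statsmodels returns labeled parameters for pandas input):

```python
# A sketch of the delta-method calculation above (Brambor, Clark, and Golder
# 2006), assuming `res` is the hypothetical fitted ordered probit from before.
import numpy as np

def correction_marginal_effect(res, ideology_values):
    """Return (ideology, effect, ci_low, ci_high) tuples for beta_1 + beta_3 * z."""
    params, cov = res.params, res.cov_params()
    b1, b3 = params["correction"], params["correction_x_ideology"]
    out = []
    for z in ideology_values:
        effect = b1 + b3 * z
        se = np.sqrt(
            cov.loc["correction", "correction"]
            + z ** 2 * cov.loc["correction_x_ideology", "correction_x_ideology"]
            + 2 * z * cov.loc["correction", "correction_x_ideology"]
        )
        # 95% confidence interval for the marginal effect at ideology z
        out.append((z, effect, effect - 1.96 * se, effect + 1.96 * se))
    return out
```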
Because so little is known about the effectiveness of corrective information in
contemporary politics, we designed the experiments to maximize external validity. First,
we focus on controversial political issues from contemporary American politics (the war
in Iraq, tax cuts, and stem cell research) rather than the hypothetical stories commonly
found in psychology research (e.g. Johnson and Seifert 1994). As a result, our
experiments seek to correct pre-existing misperceptions rather than constructing them
within the experiment. While this choice is likely to make misperceptions more difficult
to change, it increases our ability to address the motivating concern of this research –
correcting misperceptions in the real world. In addition, we test the effectiveness of
corrective information in the context of news reports, one of the primary mechanisms by
which citizens acquire information. In order to maximize realism, we constructed the
mock news articles using text from actual articles whenever possible.
Given our focus on pre-existing misperceptions, it is crucial to use experiments,
which allow us to escape the endogeneity between factual beliefs and opinion that
plagues survey research on real-world misperceptions (e.g. Kull, Ramsay, and Lewis
2003). For instance, rather than simply noting that misperceptions about Iraqi WMD are
high among conservatives, we can randomize subjects across conditions (avoiding
estimation problems due to pre-existing individual differences in knowledge, ideology,
etc.) and test the effectiveness of corrections for that group and for subjects as a whole.
A final research design choice was to use a between-subjects design in which we
compared misperceptions across otherwise identical subjects who were randomly
assigned to different experimental conditions. This decision was made to maximize the
effect of the corrections. A within-subjects design in which we compared beliefs in
misperceptions before and after a correction would anchor subjects’ responses on their
initial response, weakening the potential for an effective correction or a backlash.
The experiments we present in this paper were all conducted in the Viewsflash
online survey environment with undergraduates at a Catholic university in the Midwest.5
Study 1, conducted in the Fall 2005 semester, tests the effect of a correction on the
misperception that Iraq had WMD immediately before the war in Iraq. Study 2, which
was conducted in the Spring 2006 semester, includes a second version of the Iraq WMD
experiment as well as experiments attempting to correct misperceptions about the effect
of tax cuts on revenue and federal policy toward stem cell research.
As noted above, we define misperceptions to include both false and
unsubstantiated beliefs about the world. We therefore consider two issues (the existence
of Iraqi WMD and the effect of tax cuts on revenue) in which misperceptions are
contradicted by the best available evidence, plus a third case (the belief that President
Bush “banned” stem cell research) in which the misperception is demonstrably incorrect.
STUDY 1: FALL 2005
The first experiment we conducted, which took place in fall 2005, tested the effect of a
correction embedded in a news report on beliefs that Iraq had weapons of mass
destruction immediately before the U.S. invasion. One of the primary rationales for war
5 Participants, who received course credit for participation, signed up via an online subject pool
management system for students in psychology courses and were provided with a link that randomly
assigned them to treatment conditions. Standard caveats about generalizing from a convenience sample
apply. In terms of external validity, college students are more educated than average and may thus be more
able to resist corrections (Zaller 1992). However, college students are also known to have relatively weak
self-definition, poorly formed attitudes, and to be relatively easily influenced (Sears 1986) – all
characteristics that would seem to reduce the likelihood of resistance and backfire effects. In addition, as
Druckman and Nelson note (2003: 733), the related literatures on framing, priming and agenda-setting have
found causal processes that operate consistently in student and non-student samples (Kühberger 1998: 36,
Miller and Krosnick 2000: 313).
offered by the Bush administration was Iraq’s alleged possession of biological and
chemical weapons. Perhaps as a result, many Americans failed to accept or did not find
out that WMD were never found inside the country. This misperception, which persisted
long after the evidence against it had become overwhelming, was closely linked to
support for President Bush (Kull, Ramsay, and Lewis 2003).6 One possible explanation
for the prevalence of the WMD misperception is that journalists failed to adequately fact-
check Bush administration statements suggesting the U.S. had found WMD in Iraq (e.g.
Allen 2003). As such, we test a correction condition (described below) in which a news
report on a statement by President Bush that could be interpreted to suggest that Iraq did
have WMD is followed by a clarification that WMD had not been found.
Another plausible explanation for why Americans were failing to update their
beliefs about Iraqi WMD is fear of death in the wake of the September 11, 2001 terrorist
attacks. To test this possibility, we drew on terror management theory (TMT), which
researchers have suggested may help explain responses to 9/11 (Pyszczynski, Solomon,
and Greenberg 2003). TMT research shows that reminders of death create existential
anxiety that subjects manage by becoming more defensive of their cultural worldview
and hostile toward outsiders. Previous studies have found that increasing the salience of
subjects’ mortality increased support for President Bush and for U.S. military
interventions abroad among conservatives (Cohen et al 2005, Landau et al 2004,
Pyszczynski et al 2006) and increased aggressiveness toward people with
differing political views (McGregor et al 1998), but the effect of mortality salience on
both support for misperceptions about Iraq and the correction of them has not been tested.
6 Evidence on WMD did not change appreciably after the October 2004 release of the Duelfer Report. No
other relevant developments took place until June 2006, when two members of Congress promoted the
discovery of inactive chemical shells from the Iran-Iraq War as evidence of WMD (see footnote 2).
We therefore employed a mortality salience manipulation to see if it increased WMD
misperceptions or reduced the effectiveness of the correction treatment.
Method
130 participants7 were randomly assigned to one of four treatments in a 2 (correction
condition) x 2 (mortality salience) design.8 The appendix provides the full text of the
article that was used in the experiment. Subjects in the mortality salience condition are
asked to “Please briefly describe the emotions that the thought of your own death arouses
in you” and to “Jot down, as specifically as you can, what you think will happen to you as
you physically die and once you are physically dead.” (Controls were asked versions of
the same questions in which watching television is substituted for death.)
After a distracter task, subjects were then asked to read a mock news article
attributed to the Associated Press that reports on a Bush campaign stop in Wilkes-Barre,
PA during October 2004. The article describes Bush’s remarks as “a rousing, no-retreat
defense of the Iraq war” and quotes a line from the speech he actually gave in Wilkes-
Barre on the day the Duelfer Report was released (Priest and Pincus 2004): “There was a
risk, a real risk, that Saddam Hussein would pass weapons or materials or information to
terrorist networks, and in the world after September the 11th, that was a risk we could not
afford to take.” Such wording may falsely suggest to listeners that Saddam Hussein did
have WMD that he could have passed to terrorists after September 11, 2001. In the
7 68 percent of respondents in Study 1 were female; 62 percent were white; 56 percent were Catholic. For a
convenience sample, respondents were reasonably balanced on both ideology (48 percent left of center, 27
percent centrist, 25 percent right of center) and partisanship (27 percent Republican or lean Republican, 25
percent independent, 48 percent Democrat or lean Democrat).
8 The experiment was technically a 3 x 2 design with two types of corrections, but we omit the alternative
correction condition here for ease of exposition. Future research will present the “causal” correction
approach we have developed based on Johnson and Seifert (1994, 1998). Excluding these data does not
substantively affect the key results presented in this paper.
correction condition, the story then discusses the release of the Duelfer Report, which
documents the lack of Iraqi WMD stockpiles or an active production program
immediately prior to the US invasion.9
After reading the article, subjects were asked to state whether they agreed with
this statement: “Immediately before the U.S. invasion, Iraq had an active weapons of
mass destruction program, the ability to produce these weapons, and large stockpiles of
WMD, but Saddam Hussein was able to hide or destroy these weapons right before U.S.
forces arrived.” Responses were measured on a five-point Likert scale ranging from
“strongly disagree” (1) to “strongly agree” (5).
Results
The results from Study 1 largely support the backfire hypothesis, as shown by two
ordered probit models that are presented in Table 1.
[Table 1]
Model 1 estimates the effect of the correction treatment; a centered seven-point ideology
scale ranging from strongly liberal (-3) to strongly conservative (3); an additive five-
question scale measuring political knowledge using conventional factual questions (Delli
Carpini and Keeter 1996); and the mortality salience manipulation. As expected, more
knowledgeable subjects were less likely to agree that Iraq had WMD (p < .01) and
conservatives were more likely to agree with the statement (p < .01). We also find that
9 While President Bush argued that the report showed that Saddam “retained the knowledge, the materials,
the means and the intent to produce” WMD, he and his administration did not dispute its conclusion that
Iraq did not have WMD or an active weapons program at the time of the U.S. invasion (Balz 2004).
the correction treatment did not reduce overall misperceptions and that the mortality salience
manipulation was statistically insignificant.10
In Model 2, we test whether the effect of the correction is moderated by subjects’
political views by including an interaction between ideology and the treatment condition.
As stated earlier, our hypothesis is that the correction will be increasingly ineffective as
subjects become more conservative (and thus more sympathetic to the claim that Iraq had
WMD). When we estimate the model, the interaction term is significant (p < .01),
suggesting that the effect of the correction does vary by ideology.
Because interaction terms are often difficult to interpret, we follow Brambor,
Clark, and Golder (2006) and plot the estimated marginal effect of the correction and the
95% confidence interval over the range of ideology in Figure 1.
[Figure 1]
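A plot in the style of Figure 1 can be produced from the hypothetical correction_marginal_effect() helper sketched earlier; again, this is illustrative rather than the authors' plotting code:

```python
# A sketch of a Figure 1-style marginal-effect plot, reusing the hypothetical
# correction_marginal_effect() helper and fitted model `res`. Illustrative only.
import matplotlib.pyplot as plt
import numpy as np

ideology = np.linspace(-3, 3, 61)
z, effect, lo, hi = map(np.array, zip(*correction_marginal_effect(res, ideology)))

plt.plot(z, effect, label="Marginal effect of correction")
plt.fill_between(z, lo, hi, alpha=0.3, label="95% confidence interval")
plt.axhline(0, linestyle="--", linewidth=1)  # zero line: no effect
plt.xlabel("Ideology (-3 = very liberal, 3 = very conservative)")
plt.ylabel("Marginal effect on misperception")
plt.legend()
plt.show()
```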
For very liberal subjects, the correction worked as expected, making them more likely to
disagree with the statement that Iraq had WMD compared with controls. The correction
did not have a statistically significant effect on individuals who described themselves as
liberal, somewhat left of center, or centrist. But most importantly, the effect of the
correction for individuals who placed themselves to the right of center ideologically is
statistically significant and positive. In other words, the correction backfired –
conservatives who received a correction telling them that Iraq did not have WMD were
more likely to believe that Iraq had WMD than those in the control condition. (The
interpretation of other variables does not change in Model 2.)
10 In addition, interactions between mortality salience and the correction condition were not statistically
significant (results available upon request). As such, we do not discuss it further.
To illustrate the substantive effects of the correction/ideology interaction, Figure
2 plots predicted response probabilities across the dependent variable for four groups:
self-identified liberals (ideology = -2) and conservatives (ideology = 2) who received the
correction and those who did not.11
[Figure 2]
It is clear that responses to the correction differed dramatically by ideology. For liberals,
the correction increased the predicted probability that subjects would “strongly disagree”
that Saddam had WMD from .46 to .67 (p < .10). By contrast, the predicted probability
that conservatives would “somewhat agree” with the misperception increased from .30 to
.52 (p < .01) and the predicted probability that they would “somewhat disagree”
decreased from .22 to .08 (p < .01).
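Predicted response probabilities of this kind follow directly from the ordered probit's cutpoints: P(Y = k) is the difference of the normal CDF, evaluated at successive cutpoints minus the linear predictor. A minimal sketch with hypothetical coefficient and cutpoint values (the paper's own calculations used S-Post; see footnote 11):

```python
# A sketch of ordered-probit predicted response probabilities, as plotted in
# Figure 2. Hypothetical inputs; not the paper's estimates (which used S-Post).
import numpy as np
from scipy.stats import norm

def ordered_probit_probs(x, beta, tau):
    """P(Y = k) for each of the len(tau) + 1 ordered response categories."""
    xb = x @ beta
    # CDF at each cutpoint (with -inf/+inf endpoints), shifted by x'beta;
    # successive differences give the category probabilities.
    cdf = norm.cdf(np.concatenate(([-np.inf], tau, [np.inf])) - xb)
    return np.diff(cdf)

# Hypothetical example: a conservative (ideology = 2) in the correction
# condition with mid-range knowledge; covariate order matches the earlier sketch.
beta = np.array([0.1, 0.3, 0.25, -0.2])    # correction, ideology, interaction, knowledge
x = np.array([1.0, 2.0, 2.0, 3.0])         # correction=1, ideology=2, 1*2=2, knowledge=3
tau = np.array([-1.5, -0.5, 0.5, 1.5])     # four cutpoints -> five Likert categories
print(ordered_probit_probs(x, beta, tau))  # probabilities for responses 1 through 5
```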
STUDY 2: SPRING 2006
In spring 2006, we conducted a series of additional experiments designed to extend our
findings and test the generality of the backfire effect found in Study 1. We sought to
assess whether it generalizes to other issues as well as other ideological subgroups
(namely, liberals). The latter question is especially important for the debate over whether
conservatism is uniquely characterized by dogmatism and rigidity (Greenberg and Jonas
2003; Jost et al 2003a, 2003b).
Another goal was to test whether the backfire effect was the result of perceived
hostility on the part of the news source. Though we chose the Associated Press as the
11 Subjects are assumed to have mean knowledge levels and to not have received the mortality salience
manipulation. All predicted probabilities are calculated using S-Post (Long 1997). Confidence intervals on
changes in predicted probabilities are estimated using the delta method in S-Post (Xu and Long 2005).
source for Study 1 due to its perceived neutrality, it is possible that conservatives felt that
the correction was a reflection of media bias. There is an extensive literature showing that
partisans and ideologues tend to view identical content as biased against them (Arpan and
Raney 2003; Christen, Kannaovakun and Gunther 2002; Gunther and Chia 2001; Gunther
and Schmitt 2004; Gussin and Baum 2004, 2005; Lee 2005; Vallone, Ross, and Lepper
1985). Perceptions of liberal media bias are especially widespread in the U.S., where 50
percent of the public recently described the media as liberal (Pew 2005). As such, we
manipulated the news source as described below.
In Study 2, we used a 2 (correction) x 2 (media source) design to test corrections
of three possible misperceptions: the beliefs that Iraq had WMD when the U.S. invaded,
that tax cuts increase government revenue, and that President Bush banned stem cell
research. (The appendix presents the wording of all three experiments.) By design, the
first two tested misperceptions held predominantly by conservatives and the third tested a
possible liberal misperception.12 In addition, we varied the source of the news articles,
attributing them to either the New York Times (a source many conservatives perceive as
biased) or FoxNews.com (a source many conservatives perceive as favorable). 196
respondents participated in Study 2.13
12 We also conducted an experiment correcting a claim made by Michael Moore in the movie “Fahrenheit
9/11” that the war in Afghanistan was motivated by Unocal’s desire to build a natural gas pipeline through
the country. All results of substantive importance to this paper were insignificant. The full wording and
results of this experiment are available upon request.
13 62 percent of respondents to Study 2 were women; 59 percent were Catholic; and 65 percent were white.
The sample was again reasonably balanced for a convenience sample on both ideology (52 percent left of
center, 17 percent centrist, 31 percent right of center) and partisanship (46 percent Democrat or lean
Democrat, 20 percent independent, 33 percent Republican or lean Republican).
Method – Iraq WMD
In our second round of data collection, we conducted a modified version of the
experiment from Study 1 to verify and extend our previous results. For the sake of clarity,
we simplified the stimulus and manipulation for the Iraq WMD article, changed the
context from a 2004 campaign speech to a 2005 statement about Iraq, and used a simpler
question as the dependent variable (see appendix for exact wording).
Results – Iraq WMD
Ordered probit analyses for the second version of the Iraq WMD experiment, which are
presented in Table 2, differ substantially from the previous iteration.
[Table 2]
Interestingly, we could not reject the null hypothesis that the news source did not change
the effect of the correction in this or the two following experiments (results available
upon request). As such, it is excluded from all reported results.14
Model 1 indicates that the WMD correction again fails to reduce overall
misperceptions. However, we again add an interaction between the correction and
ideology in Model 2 and find a statistically significant result. This time, however, the
interaction term is negative – the opposite of the result from Study 1. Figure 3 plots the
marginal effect of the correction over the range of ideology.
[Figure 3]
14 Three-way interactions between news source, the correction, and ideology were also insignificant (results
available upon request).
Unlike the previous experiment, the marginal effect of the correction is negative for
individuals who placed themselves to the right of center, meaning that the correction
made conservatives more likely to believe that Iraq did not have WMD.15
It is unclear why the correction was effective for conservatives in this experiment.
One possibility is that conservatives may have shifted their grounds for supporting the
war in tandem with the Bush administration, which sought to distance itself over time
from the WMD rationale for war. The correlation between belief that George Bush “did
the right thing” in invading Iraq and belief in Iraqi WMD among conservatives declined
from .68 in Study 1 to .35 in Study 2. This was driven by the reaction to the correction;
the correlation increased in Study 1 from .41 among controls to .72 in the correction
condition, whereas in Study 2 it decreased from .54 to .10.16 The second possibility is that
the shift in the context of the article from the 2004 campaign to a 2005 statement by Bush
(which is reflected in the wording of the manipulation) made ideology less salient in
answering the question about Iraqi WMD. Finally, it is possible that the simpler wording
of the dependent variable reduced ambiguity that previously allowed for counter-arguing.
Even though a backfire effect did not take place among conservatives, we
conducted a post hoc analysis to see if conservatives who are the most intensely
committed to Iraq would still persist in resisting the correction. Model 3 therefore
includes a dummy variable for those respondents who rated Iraq as the most important
problem facing the country today as well as the associated two- and three-way
interactions with ideology and the correction condition. This model pushes the data to the
15 Figure 3 suggests that the correction slightly increased misperceptions among individuals who rated
themselves as very liberal, but this appears to be an anomaly – all four “very liberal” subjects who received
the correction strongly disagreed with the claim that Iraq had WMD before the invasion.
16 65 subjects from Study 1 were asked this question, which was added to the instrument partway through
its administration.
limit since only 34 respondents rated Iraq the most important issue (including eight who
placed themselves to the right of center ideologically). However, the results are consistent
with our expectations – there is a positive, statistically significant interaction between
ideology, the correction, and issue importance (p < .02), indicating that the correction
failed for conservatives who viewed Iraq as most important. Thus, even an effective
correction may be resisted by highly committed subgroups.
Figure 4 illustrates this finding using predicted response probabilities from Model
3 for liberals and conservatives with mean knowledge levels.
[Figure 4]
The predicted probability that conservatives who chose other issues as most important
would “somewhat agree” with the misperception that Iraq had WMD before the invasion
decreased from .46 to .25 (p < .05). However, the predicted probabilities of responding
“somewhat agree” among those who viewed Iraq as most important increased from .25 to
.47 (p < .01) – another backfire effect. Thus, while the correction was more effective than
in Study 1, its effects were reversed for the most strongly committed subjects.
Method – Tax cuts
The second experiment in Study 2 tests subjects’ responses to the claim that cutting taxes
stimulates so much economic growth that it actually has the effect of increasing
government revenue over what it would otherwise be. The claim, which originates in
supply-side economics and is frequently made by Bush administration officials,
Republican members of Congress, and conservative elites, implies that tax cuts literally
pay for themselves. However, the overwhelming consensus among professional
economists – including current and former Bush administration officials – is that this
claim is implausible in the U.S. context (Hill 2006, Mankiw 2003, Milbank 2003).
Subjects read an article on the tax cut debate attributed to either the New York
Times or FoxNews.com (see appendix for text). In all conditions, it included a passage in
which President Bush said “The tax relief stimulated economic vitality and growth and it
has helped increase revenues to the Treasury.” As in Study 1, this quote – which implies
that tax cuts increase revenue over what would have otherwise been received – is taken
from an actual Bush speech. Subjects in the correction condition received an additional
paragraph clarifying that tax revenues declined sharply as a proportion of GDP between
2001 and 2005 (Bush passed major tax cuts in 2001 and 2003). The dependent variable is
agreement with the claim that “President Bush's tax cuts have increased government
revenue” on a Likert scale ranging from strongly disagree (1) to strongly agree (5).
Results – tax cuts
The two ordered probit models in Table 3 indicate that the tax cut correction generated
another backfire effect.
[Table 3]
In Model 1, we find (as expected) that conservatives are more likely to believe that tax
cuts increase government revenue (p < .01) and more knowledgeable subjects are less
likely to do so (p < .05). More importantly, the correction again fails to cause a
statistically significant decline in overall misperceptions. As before, we again estimate an
interaction between the treatment and ideology in Model 2. The effect is positive and
statistically significant (p < .05), indicating that conservatives who received the treatment
were significantly more likely to agree with the statement that tax cuts increased revenue
than conservatives in the non-correction condition.
Figure 5 displays how the marginal effect of the correction varies by ideology.
[Figure 5]
As in the first Iraq experiment, the correction increases misperceptions among
conservatives, with a positive and statistically significant marginal effect for self-
described conservative and very conservative subjects (p < .05). Figure 6 illustrates this
effect by plotting the predicted response probabilities for liberals and conservatives with
mean knowledge levels.
[Figure 6]
The predicted probabilities are virtually identical for liberals across the control and
correction conditions, while the predicted probability that conservatives will “somewhat
agree” that tax cuts increase revenue increases from .35 to .48 (p < .01). This finding
provides additional evidence that efforts to correct misperceptions can backfire.
Conservatives presented with evidence that tax cuts do not increase government revenues
ended up believing this claim more fervently than those who did not receive a correction.
Method – Stem cell research
While previous experiments considered issues on which conservatives are more likely to
be misinformed, our expectation was that many liberals hold a misperception about the
existence of a “ban” on stem cell research, a claim that both Senator John Kerry and
Senator John Edwards made during the 2004 presidential campaign (Weiss and
Fitzgerald 2004). In fact, while federal funding of stem cell research is limited to stem
cell lines that had been created before August 2001, no limitations have been placed on
privately funded research (Fournier 2004).
In the experiment, subjects read a mock news article attributed to either the New
York Times or FoxNews.com that reported statements by Edwards and Kerry suggesting
the existence of a stem cell research “ban.” In the treatment condition, a corrective
paragraph was added to the end of the news story explaining that Bush’s policy does not
limit privately funded stem cell research. The dependent variable is agreement that
“President Bush has banned stem cell research in the United States” on a scale ranging
from “strongly disagree” (1) to “strongly agree” (5). (See appendix for wording.)
Results – Stem cell research
Table 4 reports results from two ordered probit models that offer support for the
resistance hypothesis.
[Table 4]
In Model 1, we find a negative overall correction effect (p < .07), indicating that
subjects who received the correction were less likely to believe that Bush banned stem
cell research. We also find that subjects with more political knowledge were less likely to
agree that a ban existed (p < .07). In Model 2, we again interact the correction treatment
with ideology. The interaction is in the expected direction (negative) but just misses
statistical significance (p < .16). However, as Brambor, Clark, and Golder point out
(2006: 74), it is not sufficient to consider the significance of an interaction term on its
own. The marginal effects of the relevant independent variable need to be calculated for
substantively meaningful values of the modifying variable in an interaction. Thus, as
before, we estimate the marginal effect of the correction by ideology in Figure 7.
[Figure 7]
The figure shows that the stem cell correction has a negative and statistically significant
marginal effect on misperceptions among centrists and individuals to the right of center,
but fails to significantly reduce misperceptions among those to the left of center. Thus,
the correction works for conservatives and moderates, but not for liberals.
In addition, we plot the substantive effects of the correction in Figure 8, which
plots the predicted responses for liberals and conservatives with mean knowledge levels.
[Figure 8]
We find that the predicted probability that subjects “strongly disagree” with the stem cell
misperception increases from .09 to .23 for conservatives (p < .05), but predicted
responses do not change appreciably for liberals. While in this case we do not find a
backfire effect, the effect of the correction is again neutralized for the relevant
ideological subgroup. This finding provides additional evidence that the effect of
corrections is likely to be conditional on one’s political predispositions.
CONCLUSION
The experiments reported in this paper help us understand why factual misperceptions
about politics are so persistent. We find that responses to corrections in mock news
articles differ significantly according to subjects’ ideological views. As a result, the
corrections fail to reduce misperceptions for the most committed participants. Even
worse, they actually strengthen misperceptions among ideological subgroups in several
cases. Additional results indicate that these conclusions are not specific to the Iraq war;
not related to the salience of death; and not a reaction to the source of the correction.
Our results thus contribute to the literature on correcting misperceptions in three
important respects. First, we provide the first direct test of corrections on factual beliefs
about politics. Second, we show that corrective information in news reports may fail to
reduce misperceptions and can sometimes even increase them. Finally, we establish these
findings in the context of contemporary political issues that are salient to ordinary voters.
These findings seem to provide further support for the growing literature showing
that citizens engage in motivated reasoning. While our experiments focused on assessing
the effectiveness of corrections, the results show that ideological commitments can
override direct factual contradictions – an empirical finding with important theoretical
implications. Previous research on motivated reasoning has largely focused on the
evaluation and usage of factual evidence in constructing opinions and evaluating
arguments (e.g. Taber and Lodge 2006). By contrast, our research – the first to directly
measure the effectiveness of corrections in a realistic context – suggests that it would be
valuable to directly study the cognitive and affective processes that take place when
subjects are confronted with discordant factual information. Gaines et al (2007) take an
important first step in this direction by highlighting the construction of interpretations of
relevant facts, including those that may be otherwise discomforting, as a coping strategy.
It would also be helpful to test additional corrections of liberal misperceptions.
Currently, all of our backfire results come from conservatives – a finding that may
provide support for the hypothesis that conservatives are especially dogmatic (Greenberg
and Jonas 2003; Jost et al 2003a, 2003b). However, without conducting more studies, it is
impossible to determine if the results we observe are systematic or the result of the
specific misperceptions tested.
In addition, it would be valuable to replicate these findings with non-college
students or a representative sample of the general population. Testing the effectiveness of
corrections using a within-subjects design would also be worthwhile, though achieving
meaningful results may be difficult for reasons described above. In either case,
researchers must be wary of changing political conditions. Unlike other research topics,
contemporary misperceptions about politics are a moving target that can change quickly
(as the difference between the Iraq WMD experiments in Study 1 and Study 2 suggests).
Most importantly, however, future work should seek to distinguish the conditions
under which corrections reduce misperceptions from those under which they fail or
backfire. Many citizens seem unable or unwilling to revise their beliefs in the face of corrective
information, and attempts to correct those mistaken beliefs may only make matters worse.
Determining the best way to provide corrective information will advance understanding
of how citizens process information and help to strengthen democratic debate and public
understanding of the political process.
Works cited
Allen, Mike. 2003. “Bush: 'We Found' Banned Weapons.” Washington Post. May 31,
2003. Page A1.
Althaus, Scott L. 1998. “Information Effects in Collective Preferences.” American
Political Science Review, 92(3): 545-558.
Arpan, Laura M., and Arthur A. Raney. 2003. “An experimental investigation of news
source and the Hostile Media Effect.” Journalism and Mass Communication Quarterly, 80(2):
265-281.
Balz, Dan. 2004. “Candidates Use Arms Report to Make Case.” Washington Post, October 8, 2004.
Brambor, Thomas, William Roberts Clark, and Matt Golder. 2006. “Understanding
Interaction Models: Improving Empirical Analyses.” Political Analysis, 14: 63-82.
Christen, Cindy T., Prathana Kannaovakun, and Albert C. Gunther. 2002. “Hostile Media
Perceptions: Partisan Assessments of Press and Public during the 1997 United Parcel Service
Strike.” Political Communication, 19(4): 423-436.
Cohen, Florette, Daniel M. Ogilvie, Sheldon Solomon, Jeff Greenberg, and Tom
Pyszczynski. 2005. “American Roulette: The Effect of Reminders of Death
on Support for George W. Bush in the 2004 Presidential Election.” Analyses of Social Issues and Public Policy, 5(1):
177-187.
Cunningham, Brent. 2003. “Re-thinking objectivity.” Columbia Journalism Review,
July/August 2003, 24-32.
Delli Carpini, Michael X. and Scott Keeter. 1996. What Americans Know about Politics
and Why It Matters. New Haven: Yale University Press.
Druckman, James N. and Kjersten R Nelson. 2003. “Framing and Deliberation: How Citizens'
Conversations Limit Elite Influence.” American Journal of Political Science, 47(4): 729–745.
Edwards, Kari, and Edward E. Smith. 1996. “A Disconfirmation Bias in the Evaluation of
Arguments.” Journal of Personality and Social Psychology, 71(1): 5-24.
Fournier, Ron. 2004. “First Lady Bashes Kerry Stem Cell Stance.” Associated Press,
August 9, 2004.
Furnham, Adrian. 2003. “Belief in a just world: research progress over the past decade.” Personality and
Individual Differences, 34(5):795-817.
Gaines, Brian J., James H. Kuklinski, Paul J. Quirk, Buddy Peyton and Jay Verkuilen. 2007. “Interpreting
Iraq: Partisanship and the Meaning of Facts.” Journal of Politics, 69(4): 957-974.
Gilens, Martin. 2001. “Political Ignorance and Collective Policy Preferences.” American
Political Science Review, 95(2):379-396.
Greenberg, Jeff, and Eva Jonas. 2003. “Psychological Motives and Political
Orientation – The Left, the Right, and the Rigid: Comment on Jost et al. (2003).” Psychological
Bulletin, 129(3): 376-382.
Gunther, Albert C. and Stella Chih-Yun Chia. 2001. “Predicting Pluralistic Ignorance:
The Hostile Media Perception and Its Consequences.” Journalism and Mass Communication
Quarterly, 78(4): 688-701.
Gunther, Albert C. and Kathleen Schmitt. 2004. “Mapping Boundaries of the Hostile
Media Effect.” Journal of Communication, 54(1): 55-70.
Gussin, Phil and Matthew A. Baum. 2004 “In the Eye of the Beholder: An Experimental
Investigation into the Foundations of the Hostile Media Phenomenon.” Paper presented at 2004
Meeting of the American Political Science Association, Chicago, IL.
Gussin, Phil and Matthew A. Baum. 2005. “Issue Bias: How Issue Coverage and Media Bias
Affect Voter Perceptions of Elections.” Paper presented at 2005 Meeting of the
American Political Science Association, Washington D.C.
Harris Poll. 2006. “Belief that Iraq Had Weapons of Mass Destruction Has Increased
Substantially.” Poll conducted July 5-11 and released July 21, 2006. Downloaded June 12, 2007
from http://www.harrisinteractive.com/harris_poll/index.asp?PID=684
Hill, Patrice. 2006. “House or Senate shake-up likely to end tax cuts.” Washington Times,
October 5, 2006.
Howell, William and Douglas Kriner. N.d. “Political Elites and Public Support for War.” Unpublished
manuscript.
Johnson, Holly M. and Colleen M. Seifert. 1994. “Sources of the Continued Influence
Effect: When Misinformation in Memory Affects Later Inferences.” Journal of
Experimental Psychology: Learning, Memory, and Cognition, 20(6): 1420-1436.
Johnson, Holly M. and Colleen M. Seifert. 1998. “Updating Accounts Following a
Correction of Misinformation.” Journal of Experimental Psychology: Learning, Memory, and
Cognition, 24(6): 1483-1494.
Jost, John T., Jack Glaser, Arie W. Kruglanski, and Frank J. Sulloway. 2003a. “Political
conservatism as motivated social cognition.” Psychological Bulletin, 129(3): 339-375.
Jost, John T., Jack Glaser, Arie W. Kruglanski, and Frank J. Sulloway. 2003b.
“Exceptions That Prove the Rule-Using a Theory of Motivated Social Cognition to Account for
Ideological Incongruities and Political Anomalies: Reply to Greenberg and Jonas (2003).”
Psychological Bulletin, 129(3): 383-393.
Koriat, Asher, Morris Goldsmith, and Ainat Pansky. 2000. “Toward a Psychology of
Memory Accuracy.” Annual Review of Psychology, 51:481-537.
Kühberger, Anton. 1998. “The Influence of Framing on Risky Decisions: A Meta-analysis.”
Organizational Behavior and Human Decision Processes, 75(1): 23-55.
Kuklinski, James H. and Paul J. Quirk. 2000. “Reconsidering the rational public:
cognition, heuristics, and mass opinion.” In Arthur Lupia, Mathew D. McCubbins, and Samuel L.
Popkin, eds., Elements of Reason: Understanding and Expanding the Limits of Political
Rationality. London: Cambridge University Press.
Kuklinski, James H., Paul J. Quirk, David Schweider, and Robert F. Rich. 1998. “‘Just the
Facts, Ma'am’: Political Facts and Public Opinion.” Annals of the American Academy of Political
and Social Science, 560: 143—154.
Kuklinski, James H., Paul J. Quirk, Jennifer Jerit, David Schweider, and Robert F. Rich.
2000. “Misinformation and the Currency of Democratic Citizenship.” The Journal of Politics,
62(3):790-816.
Kull, Steven, Clay Ramsay, and Evan Lewis. 2003. “Misperceptions, the Media, and the
Iraq War.” Political Science Quarterly, 118(4):569-598.
Landau, Mark J., Sheldon Solomon, Jeff Greenberg, Florette Cohen, Tom Pyszczynski,
Jamie Arndt, Claude H. Miller, Daniel M. Ogilvie and Alison Cook. 2004. “Deliver Us From Evil: The
Effects of Mortality Salience and Reminders of 9/11 on Support for President George W. Bush.”
Personality and Social Psychology Bulletin, 30(9): 1136-1150.
Lau, Richard R. and David Redlawsk. 2001. “Advantages and Disadvantages of
Cognitive Heuristics in Political Decision Making.” American Journal of Political Science.
45(4):951-971.
Lee, Tien-Tsung. 2005. “The Liberal Media Myth Revisited: An Examination of Factors
Influencing Perceptions of Media Bias.” Journal of Broadcasting & Electronic Media, 49(1): 43-
64.
Lodge, Milton, and Charles S. Taber. 2000. “Three Steps Toward a Theory of Motivated
Political Reasoning.” In Arthur Lupia, Mathew D. McCubbins, and Samuel L. Popkin, eds.,
Elements of Reason: Understanding and Expanding the Limits of Political Rationality. London:
Cambridge University Press.
Lodge, Milton, and Charles S. Taber. 2005. “The Automaticity of Affect for Political
Leaders, Groups, and Issues: An Experimental Test of the Hot Cognition Hypothesis.” Political
Psychology, 26 (3): 455-482.
Long, J. Scott. 1997. Regression Models for Categorical and Limited Dependent
Variables. Thousand Oaks, CA: Sage Publications.
Lord, Charles G., Lee Ross, and Mark R. Lepper. 1979. “Biased Assimilation and
Attitude Polarization: The Effects of Prior Theories on Subsequently Considered Evidence.”
Journal of Personality and Social Psychology, 37(11): 2098-2109.
Lupia, Arthur. 1994. “Shortcuts Versus Encyclopedias: Information and Voting
Behavior in California Insurance Reform Elections.” American Political Science Review, 88(1),
63-76.
Lupia, Arthur and Matthew D. McCubbins. 1998. The Democratic Dilemma:
Can Citizens Learn What They Need to Know? New York: Cambridge
University Press.
Mankiw, Greg. 2003. Testimony before the U.S. Senate Committee on Banking,
Housing, and Urban Affairs. May 13, 2003.
Mayo, Ruth, Yaacov Schul and Eugene Burnstein. 2004. “‘I am not guilty’ vs ‘I am innocent’: Successful
negation may depend on the schema used for its encoding.” Journal of Experimental Social
Psychology, 40(4): 433-449.
McGregor, Holly A., Joel D. Lieberman, Jeff Greenberg, Sheldon Solomon, Jamie Arndt, Linda Simon,
and Tom Pyszczynski. 1998. “Terror Management and Aggression: Evidence That Mortality
Salience Motivates Aggression Against Worldview-Threatening Others.” Journal of Personality
& Social Psychology, 74(3): 590-605.
Milbank, Dana. 2003. “For Bush Tax Plan, A Little Inner Dissent.” Washington Post,
February 16, 2003. Page A4.
Miller, Joanne M. and Jon A. Krosnick. 2000. “News Media Impact on the Ingredients of Presidential
Evaluations: Politically Knowledgeable Citizens Are Guided by a Trusted Source.” American
Journal of Political Science, 44 (2): 301-315.
Peffley, Mark and Jon Hurwitz. 2007. “Persuasion and Resistance: Race and the Death Penalty in
America.” American Journal of Political Science, 51(4): 996-1012.
Pew Research Center for the People & the Press. 2005. “Public More Critical of Press, But Goodwill
Persists: Online Newspaper Readership Countering Print Losses.” Poll conducted June 8-12, 2005
and released June 26, 2005. Results downloaded June 8, 2007 from
http://people-press.org/reports/print.php3?PageID=972
Popkin, Samuel. 1991. The Reasoning Voter. Chicago: University of Chicago Press.
Priest, Dana, and Walter Pincus. 2004. “U.S. 'Almost All Wrong' on Weapons.” Washington Post,
October 7, 2004.
Pyszczynski, Tom, Abdolhossein Abdollahi, Sheldon Solomon, Jeff Greenberg, Florette
Cohen, and David Weise. 2006. “Mortality Salience, Martyrdom, and Military Might: The Great
Satan Versus the Axis of Evil.” Personality and Social Psychology Bulletin, 32(4): 525-537.
Pyszczynski, Tom, Sheldon Solomon, and Jeff Greenberg. 2003. In the Wake of 9/11:
The Psychology of Terror. Washington, DC: American Psychological Association.
Redlawsk, David. 2002. “Hot Cognition or Cool Consideration? Testing the Effects of Motivated
Reasoning on Political Decision Making.” Journal of Politics, 64(4):1021-1044.
Schacter, Daniel L. 1999. “The Seven Sins of Memory: Insights From Psychology and
Cognitive Neuroscience.” American Psychologist, 54(3): 182-203.
Schmitt, Kathleen M., Albert C. Gunther, and Janice L. Liebhart. 2004. “Why Partisans See Mass Media
as Biased.” Communication Research, 31(6): 623-641.
Sears, David O. 1986. “College Sophomores in the Laboratory: Influences of a Narrow Data Base on
Social Psychology's View of Human Nature.” Journal of Personality and Social Psychology,
51(3): 515-530.
Sniderman, Paul M., Richard A. Brody, and Philip E. Tetlock. 1991. Reasoning and
Choice: Explorations in Political Psychology. New York: Cambridge University
Press.
Taber, Charles, Milton Lodge, and Jill Glathar. 2001. “The Motivated Construction of
Political Judgments.” In James H. Kuklinski, ed., Citizens and Politics: Perspectives from
Political Psychology. New York: Cambridge University Press.
Taber, Charles S. and Milton Lodge. 2006. “Motivated Skepticism in the Evaluation of
Political Beliefs.” American Journal of Political Science, 50(3): 755-769.
Task Force on Campaign Reform. 1998. “Campaign Reform: Insights and Evidence.”
Tsfati, Yariv and Jonathan Cohen. 2005. “Democratic Consequences of Hostile Media Perceptions: The
Case of Gaza Settlers.” The Harvard International Journal of Press/Politics, 10(4): 28-51.
Vallone, Robert P., Lee Ross, and Mark R. Lepper. 1985. “The Hostile Media
Phenomenon: Biased Perception and Perceptions of Media Bias in Coverage of the ‘Beirut
Massacre.’” Journal of Personality and Social Psychology, 49(3): 577-585.
Weiss, Rick, and Mary Fitzgerald. 2004. “Edwards, First Lady at Odds on Stem Cells.”
Washington Post, August 10, 2004.
Wilkes, A.L., and M. Leatherbarrow. 1988. “Editing Episodic Memory Following the Identification of
Error.” Quarterly Journal of Experimental Psychology: Human Experimental Psychology, 40A:
361-387.
Wilkes, A.L. and D.J. Reynolds. 1999. “On Certain Limitations Accompanying Readers'
Interpretations of Corrections in Episodic Text.” Quarterly Journal of Experimental Psychology:
Human Experimental Psychology, 52A(1): 165-183.
Xu, Jun and J. Scott Long. 2005. “Confidence Intervals for Predicted Outcomes in
Regression Models for Categorical Outcomes.” The Stata Journal, 5(4): 537-559.
Zaller, John R. 1992. The Nature and Origins of Mass Opinion. New York, NY: Cambridge University
Press.
Appendix
Study 1 (WMD): News text
Wilkes-Barre, PA, October 7, 2004 (AP) -- President Bush delivered a hard-hitting
speech here today that made his strategy for the remainder of the campaign crystal clear:
a rousing, no-retreat defense of the Iraq war.
Bush maintained Wednesday that the war in Iraq was the right thing to do and that Iraq
stood out as a place where terrorists might get weapons of mass destruction.
“There was a risk, a real risk, that Saddam Hussein would pass weapons or materials or
information to terrorist networks, and in the world after September the 11th, that was a
risk we could not afford to take,” Bush said.
[Correction]
While Bush was making campaign stops in Pennsylvania, the Central Intelligence
Agency released a report that concludes that Saddam Hussein did not possess stockpiles
of illicit weapons at the time of the U.S. invasion in March 2003, nor was any program to
produce them under way at the time. The report, authored by Charles Duelfer, who
advises the director of central intelligence on Iraqi weapons, says Saddam made a
decision sometime in the 1990s to destroy known stockpiles of chemical weapons.
Duelfer also said that inspectors destroyed the nuclear program sometime after 1991.
[All subjects]
The President travels to Ohio tomorrow for more campaign stops.
Study 1 (WMD): Dependent variable
Immediately before the U.S. invasion, Iraq had an active weapons of mass destruction
program, the ability to produce these weapons, and large stockpiles of WMD, but
Saddam Hussein was able to hide or destroy these weapons right before U.S. forces
arrived.
-Strongly disagree [1]
-Somewhat disagree [2]
-Neither agree nor disagree [3]
-Somewhat agree [4]
-Strongly agree [5]
Study 2, Experiment 1 (WMD): News text
[New York Times/FoxNews.com]
December 14, 2005
During a speech in Washington, DC on Wednesday, President Bush maintained that the
war in Iraq was the right thing to do and that Iraq stood out as a place where terrorists
might get weapons of mass destruction.
“There was a risk, a real risk, that Saddam Hussein would pass weapons or materials or
information to terrorist networks, and in the world after September the 11th, that was a
risk we could not afford to take,” Bush said.
[Correction]
In 2004, the Central Intelligence Agency released a report that concludes that Saddam
Hussein did not possess stockpiles of illicit weapons at the time of the U.S. invasion in
March 2003, nor was any program to produce them under way at the time.
[All subjects]
The President travels to Ohio tomorrow to give another speech about Iraq.
Study 2, Experiment 1 (WMD): Dependent variable
Immediately before the U.S. invasion, Iraq had an active weapons of mass destruction
program and large stockpiles of WMD.
-Strongly disagree [1]
-Somewhat disagree [2]
-Neither agree nor disagree [3]
-Somewhat agree [4]
-Strongly agree [5]
Study 2, Experiment 2 (Tax cuts): News text
[New York Times/FoxNews.com]
August 6, 2005
President George W. Bush urged Congress to make permanent the tax cuts enacted
during his first term and draft legislation to bolster the Social Security program, after the
lawmakers return from their August break.
“The tax relief stimulated economic vitality and growth and it has helped increase
revenues to the Treasury,” Bush said in his weekly radio address. “The increased
revenues and our spending restraint have led to good progress in reducing the federal
deficit.”
The expanding economy is helping reduce the amount of money the U.S. government
plans to borrow from July through September, the Treasury Department said on
Wednesday. The government will borrow a net $59 billion in the current quarter, $44
billion less than it originally predicted, as a surge in tax revenue cut the forecast for the
federal budget deficit.
The White House’s Office of Management and Budget last month forecast a $333 billion
budget gap for the fiscal year that ends Sept. 30, down from a record $412 billion last
year.
[Correction]
However, even with the recent increases, revenues in 2005 will remain well below
previous projections from the Congressional Budget Office. The major tax cut of 2001
and further cuts in each of the last three years were followed by an unprecedented three-
year decline in nominal tax revenues, from $2 trillion in 2000 to $1.8 trillion in 2003.
Last year, revenues rebounded slightly to $1.9 trillion. But at 16.3 percent of the gross
domestic product, last year’s revenue total, measured against the size of the economy,
was the lowest level since 1959.
Study 2, Experiment 2 (Tax cuts): Dependent variable
President Bush’s tax cuts have increased government revenue.
-Strongly disagree [1]
-Somewhat disagree [2]
-Neither agree nor disagree [3]
-Somewhat agree [4]
-Strongly agree [5]
Study 2, Experiment 3 (Stem cell research): News text
[New York Times/FoxNews.com]
August 10, 2004
Sen. John Edwards (D-N.C.) yesterday slammed President Bush and promised that a
Kerry administration would support the promising young field of embryonic stem cell
research.
The vice presidential contender's comments came on the third anniversary of President
Bush's televised address to the nation announcing a funding policy for the controversial
research, which relies on human embryos as a source of cells.
The much-debated but still experimental field of study has become an unanticipated
wedge issue in this fall’s election. Edwards’s running mate on the Democratic ticket, Sen.
John F. Kerry (Mass.), mentioned the topic in a number of speeches last week. Kerry also
devoted a large chunk of the Democrats’ weekly radio address Saturday to it, saying that
science should not be sacrificed for ideology.
“We’re going to lift the ban on stem cell research,” Kerry said. “We’re going to listen to
our scientists and stand up for science. We’re going to say yes to knowledge, yes to
discovery and yes to a new era of hope for all Americans.”
[Correction]
However, experts pointed out that Bush's action does not limit private funding of stem
cell research. He is actually the first president to allow the use of federal funds to study
human embryonic stem cells, but his policy limits federal support of such research to
colonies derived from embryos already destroyed by August 2001.
Study 2, Experiment 3 (Stem cell research): Dependent variable
President Bush has banned stem cell research in the United States.
-Strongly disagree [1]
-Somewhat disagree [2]
-Neither agree nor disagree [3]
-Somewhat agree [4]
-Strongly agree [5]
Figure 1. Effect of correction on WMD misperception: estimated marginal effect by ideology, fall 2005.
[Figure omitted: plots the ordered probit coefficient for the WMD correction, with a 95% confidence
interval, across ideology from very liberal to very conservative.]
Figure 2. Predicted opinion, fall 2005: Did Iraq have WMD?
[Figure omitted: two panels, liberal and conservative, each plotting predicted response probabilities
from strongly disagree to strongly agree under the no-correction and correction conditions.]
Figure 3. Effect of correction on WMD misperception: estimated marginal effect by ideology, spring 2006.
[Figure omitted: plots the ordered probit coefficient for the WMD correction, with a 95% confidence
interval, across ideology from very liberal to very conservative.]
Figure 4. Predicted opinion, spring 2006: Did Iraq have WMD?
[Figure omitted: four panels crossing ideology (liberal, conservative) with whether the respondent
named Iraq the most important issue; each plots predicted response probabilities from strongly
disagree to strongly agree under the no-correction and correction conditions.]
Figure 5. Effect of correction on tax/revenue misperception: estimated marginal effect by ideology, spring 2006.
[Figure omitted: plots the ordered probit coefficient for the tax/revenue correction, with a 95%
confidence interval, across ideology from very liberal to very conservative.]
Figure 6. Predicted opinion, spring 2006: Do tax cuts increase revenue?
[Figure omitted: two panels, liberal and conservative, each plotting predicted response probabilities
from strongly disagree to strongly agree under the no-correction and correction conditions.]
Figure 7. Effect of correction on stem cell ban misperception: estimated marginal effect by ideology, spring 2006.
[Figure omitted: plots the ordered probit coefficient for the stem cell correction, with a 95%
confidence interval, across ideology from very liberal to very conservative.]
Figure 8. Predicted opinion, spring 2006: Is stem cell research banned?
[Figure omitted: two panels, liberal and conservative, each plotting predicted response probabilities
from strongly disagree to strongly agree under the no-correction and correction conditions.]
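For readers reconstructing these plots, a sketch of the model underlying them may help; the exact covariate set for each study appears in Tables 1-4 below, and the variable names here are illustrative. In latent-variable form, the ordered probit is

\[ y_i^* = \beta_1 \mathrm{Correction}_i + \beta_2 \mathrm{Ideology}_i + \beta_3 (\mathrm{Correction}_i \times \mathrm{Ideology}_i) + \mathbf{x}_i'\boldsymbol{\gamma} + \varepsilon_i, \qquad \varepsilon_i \sim N(0,1), \]

with the observed five-point response determined by cutpoints \( \tau_1 < \tau_2 < \tau_3 < \tau_4 \):

\[ y_i = j \quad \text{if} \quad \tau_{j-1} < y_i^* \le \tau_j, \qquad j = 1, \dots, 5. \]

The marginal-effect plots (Figures 1, 3, 5, 7) show the effect of the correction evaluated at ideology \( z \), namely \( \beta_1 + \beta_3 z \), with its 95% confidence interval; the predicted-opinion plots (Figures 2, 4, 6, 8) show the implied category probabilities \( \Phi(\tau_j - \mathbf{x}'\boldsymbol{\beta}) - \Phi(\tau_{j-1} - \mathbf{x}'\boldsymbol{\beta}) \) with the correction switched off and on (cf. Xu and Long 2005).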
Table 1 – Ordered probit models of WMD misperception (fall 2005)

                          Model 1     Model 2
Correction                 0.050       0.199
                          (0.193)     (0.201)
Ideology                   0.358***    0.221***
                          (0.068)     (0.084)
Political knowledge       -1.138***   -1.122***
                          (0.376)     (0.377)
Mortality salience         0.278       0.275
                          (0.194)     (0.195)
Correction * ideology                  0.367***
                                      (0.136)
(Cutpoint 1)              -1.392***   -1.373***
                          (0.346)     (0.347)
(Cutpoint 2)              -0.739**    -0.699**
                          (0.336)     (0.338)
(Cutpoint 3)              -0.029       0.048
                          (0.334)     (0.337)
(Cutpoint 4)               1.377***    1.509***
                          (0.386)     (0.393)
Log-likelihood           -172.24     -168.59
N                            130         130

* p < .10, ** p < .05, *** p < .01 (two-sided)
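To make the backfire concrete with Model 2's estimates: at ideology \( z \), the correction shifts the latent misperception index by

\[ \widehat{\Delta y^*}(z) = 0.199 + 0.367\, z. \]

Assuming ideology is scored with conservatives at positive values (consistent with its positive main effect on this misperception), the estimated effect of the correction is negative for liberals but positive, i.e., misperception-increasing, for conservatives, which is the pattern shown in Figure 1.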
Table 2 – Ordered probit models of WMD misperception (spring 2006)

                                        Model 1     Model 2     Model 3
Correction                              -0.069      -0.141      -0.159
                                        (0.159)     (0.162)     (0.177)
Ideology                                 0.356***    0.487***    0.525***
                                        (0.053)     (0.074)     (0.078)
Political knowledge                     -1.287***   -1.247***   -1.290***
                                        (0.325)     (0.327)     (0.335)
Correction * ideology                               -0.270***   -0.389***
                                                    (0.104)     (0.112)
Iraq most important                                             -0.346
                                                                (0.324)
Correction * most important                                      0.405
                                                                (0.455)
Ideology * most important                                       -0.304
                                                                (0.237)
Correction * ideology * most important                           0.797**
                                                                (0.321)
(Cutpoint 1)                            -1.628***   -1.659***   -1.767***
                                        (0.296)     (0.297)     (0.318)
(Cutpoint 2)                            -0.961***   -0.985***   -1.074***
                                        (0.285)     (0.286)     (0.307)
(Cutpoint 3)                            -0.269      -0.278      -0.344
                                        (0.280)     (0.281)     (0.301)
(Cutpoint 4)                             0.910***    0.973***    0.935***
                                        (0.306)     (0.311)     (0.327)
Log-likelihood                         -255.59     -252.17     -248.06
N                                          195         195         195

* p < .10, ** p < .05, *** p < .01 (two-sided)
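One way to read the three-way interaction in Model 3: differentiating the latent index with respect to the correction gives

\[ \frac{\partial y^*}{\partial \mathrm{Correction}} = -0.159 - 0.389\,\mathrm{Ideology} + 0.405\,\mathrm{MostImp} + 0.797\,(\mathrm{Ideology} \times \mathrm{MostImp}). \]

For respondents who did not name Iraq the most important issue (MostImp = 0), this reduces to \( -0.159 - 0.389\,\mathrm{Ideology} \), so the correction lowers the misperception, and increasingly so as ideology moves rightward. For those who did (MostImp = 1), it becomes \( (-0.159 + 0.405) + (-0.389 + 0.797)\,\mathrm{Ideology} = 0.246 + 0.408\,\mathrm{Ideology} \), which turns positive, a backfire, among conservatives. This is the contrast displayed in Figure 4. (As above, this reading assumes conservatives take positive ideology scores.)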
Table 3 – Ordered probit models of tax cut/revenue misperception (spring 2006)

                          Model 1     Model 2
Correction                 0.096       0.176
                          (0.152)     (0.157)
Ideology                   0.180***    0.070
                          (0.049)     (0.071)
Political knowledge       -0.634**    -0.596*
                          (0.317)     (0.318)
Correction * ideology                  0.210**
                                      (0.097)
(Cutpoint 1)              -1.959***   -1.919***
                          (0.305)     (0.306)
(Cutpoint 2)              -1.128***   -1.076***
                          (0.288)     (0.290)
(Cutpoint 3)              -0.141      -0.071
                          (0.282)     (0.284)
(Cutpoint 4)               1.230***    1.312***
                          (0.299)     (0.301)
Log-likelihood           -265.5      -263.18
N                            195         195

* p < .10, ** p < .05, *** p < .01 (two-sided)
Table 4 – Ordered probit models of stem cell ban misperception (spring 2006)

                          Model 1     Model 2
Correction                -0.276*     -0.329**
                          (0.151)     (0.156)
Ideology                   0.027       0.091
                          (0.048)     (0.066)
Political knowledge       -0.578*     -0.555*
                          (0.311)     (0.312)
Correction * ideology                 -0.138
                                      (0.096)
(Cutpoint 1)              -1.590***   -1.607***
                          (0.288)     (0.288)
(Cutpoint 2)              -0.712***   -0.728***
                          (0.275)     (0.275)
(Cutpoint 3)              -0.112      -0.123
                          (0.272)     (0.272)
(Cutpoint 4)               0.810***    0.809***
                          (0.282)     (0.282)
Log-likelihood           -296.65     -295.62
N                            195         195

* p < .10, ** p < .05, *** p < .01 (two-sided)
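Finally, for readers who want to reproduce this style of analysis, the sketch below shows how models like those in Tables 1-4, and the predicted probabilities in the figures, could be estimated. The paper cites Xu and Long (2005), which suggests the original computations were done in Stata; this Python equivalent is a minimal sketch under assumed data, with hypothetical column names (misperception, correction, ideology, knowledge) standing in for the study variables.

# Minimal sketch, not the authors' code: ordered probit with a
# correction x ideology interaction, on a hypothetical DataFrame `df`
# whose column names are assumptions standing in for the study variables.
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

def fit_interaction_model(df: pd.DataFrame):
    """Fit an ordered probit of the 1-5 misperception item on the
    correction treatment, ideology, knowledge, and the interaction."""
    X = pd.DataFrame({
        "correction": df["correction"],                       # 0/1 treatment
        "ideology": df["ideology"],                           # centered scale
        "knowledge": df["knowledge"],                         # 0-1 scale
        "correction_x_ideology": df["correction"] * df["ideology"],
    })
    # OrderedModel estimates the cutpoints itself, so no constant is included.
    model = OrderedModel(df["misperception"], X, distr="probit")
    return model.fit(method="bfgs", disp=False)

def predicted_probabilities(result, ideology, knowledge):
    """Predicted probabilities for each response category (cf. the
    predicted-opinion figures), with the correction off and on,
    holding ideology and knowledge at the supplied values."""
    rows = pd.DataFrame({
        "correction": [0, 1],
        "ideology": [ideology, ideology],
        "knowledge": [knowledge, knowledge],
        "correction_x_ideology": [0, ideology],
    })
    return result.predict(rows)  # shape (2, 5): one row per condition

The marginal effect of the correction at ideology z, as plotted in the coefficient figures, is then the linear combination of the correction and interaction coefficients, whose standard error can be recovered from the estimated covariance matrix (e.g., via the results object's t_test method).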