Psychological Science
24(6), 939–946
© The Author(s) 2013
DOI: 10.1177/0956797612464058

Research Article

Political Extremism Is Supported by an Illusion of Understanding

Philip M. Fernbach¹, Todd Rogers², Craig R. Fox³,⁴, and Steven A. Sloman⁵

¹Leeds School of Business, University of Colorado, Boulder; ²Center for Public Leadership, Harvard Kennedy School; ³Anderson School of Management, University of California, Los Angeles; ⁴Department of Psychology, University of California, Los Angeles; and ⁵Department of Cognitive, Linguistic, and Psychological Sciences, Brown University

Corresponding Author: Philip M. Fernbach, University of Colorado, Leeds School of Business, 419 UCB, Boulder, CO 80309-0419

Abstract

People often hold extreme political attitudes about complex policies. We hypothesized that people typically know less about such policies than they think they do (the illusion of explanatory depth) and that polarized attitudes are enabled by simplistic causal models. Asking people to explain policies in detail both undermined the illusion of explanatory depth and led to attitudes that were more moderate (Experiments 1 and 2). Although these effects occurred when people were asked to generate a mechanistic explanation, they did not occur when people were instead asked to enumerate reasons for their policy preferences (Experiment 2). Finally, generating mechanistic explanations reduced donations to relevant political advocacy groups (Experiment 3). The evidence suggests that people's mistaken sense that they understand the causal processes underlying policies contributes to political polarization.

Keywords: explanation, illusion of explanatory depth, political psychology, attitudes, polarization, extremism, moderation, public policy, causal models, mechanism, causality, policymaking, decision making, judgment

Received 6/11/12; Revision accepted 9/17/12

The opinions that are held with passion are always those for which no good ground exists.
—Bertrand Russell (1928/1996, p. 3)

Extremism is so easy. You've got your position and that's it. It doesn't take much thought.
—Clint Eastwood (quoted in Schickel, 2005)

Many of the most important issues facing society—from climate change to health care to poverty—require complex policy solutions about which citizens hold polarized political preferences. A central puzzle of modern American politics is how so many voters can maintain strong political views concerning complex policies yet remain relatively uninformed about how such policies would bring about desired outcomes (for a review, see Delli Carpini & Keeter, 1996).

One possible cause of this apparent paradox is that voters believe that they understand how policies work better than they actually do. In the research reported here, we explored two questions. First, do people really have unjustified confidence in their understanding of how complex policies work? Second, does this illusion of understanding contribute to attitude polarization? We predicted that asking people to explain how a policy works would make them aware of how poorly they understood the policy, which would cause them to subsequently express more moderate attitudes and behave accordingly.

Rozenblit and Keil (2002) have demonstrated that people tend to be overconfident in how well they understand how everyday objects, such as toilets and combination locks, work; asking people to generate a mechanistic explanation shatters this sense of understanding (see also Alter, Oppenheimer, & Zemla, 2010; Keil, 2003). The
attempt to explain makes the complexity of the causal sys-
tem more apparent, leading to a reduction in judges’
assessments of their own understanding. Prior research on
the illusion of explanatory depth has focused primarily on
feelings of understanding, but this phenomenon is likely
to have downstream effects on preferences and behaviors.
For instance, consumers’ willingness to pay for products is
influenced by their perceived understanding of how those
products work (Fernbach, Sloman, St. Louis, & Shube,
2013). Moreover, people are more likely to change their
attitudes about a policy when they have less confidence in
their knowledge about it (Krosnick & Petty, 1995). We con-
jectured, therefore, that extreme policy preferences often
rely on people’s overestimation of their mechanistic under-
standing of the complex systems those policies are
intended to influence. If this is true, then merely asking
people to generate an explanation of relevant mechanisms
should decrease their sense of understanding and subsequently lead them to express more moderate political attitudes.
Our prediction is consistent with research on how the
complexity with which people think about an object
affects the extremity of their evaluation of that object. For
instance, Linville (1982) asked participants to evaluate
either six or two dimensions of a chocolate-chip cookie
(e.g., chewiness, butteriness, number of chocolate chips).
Participants who were induced to think about the cookie
complexly by rating six dimensions reported less extreme
evaluations of the cookie than did participants who were
induced to think about the cookie simply by rating only
two dimensions. Related work has shown that more com-
plex representations of the self lead to smaller affective
swings in the face of stressful events (Linville, 1985)
and less vulnerability to depression and illness (Linville, 1987).
On its surface, our prediction appears to contradict
research suggesting that people’s attitudes become more
extreme when they are asked to justify or deliberate
about a position (Hirt & Markman, 1995; Ross, Lepper,
Strack, & Steinmetz, 1977; Tesser, 1978). Moreover, polit-
ical discussions among like-minded people typically
lead them to become more extreme in their views
(Schkade, Sunstein, & Hastie, 2010). We reconcile these
opposing predictions by suggesting that the nature of
the elaboration is critical in determining whether it will
lead to polarization or moderation. For instance, whereas
asking people to provide reasons for their position on
a policy may cause them to selectively access a support-
ive rationale, thereby increasing their commitment to
the position, asking them to explain the mechanisms
by which the policy works may force them to confront
their lack of understanding, thereby decreasing their commitment to it.
Experiment 1: Effect of Explanation on
Understanding and Position Extremity
In our first study, we asked participants to rate how well
they understand six political policies. After participants
judged their understanding of each issue, we asked them
to explain how two of the policies work and then to
rerate their level of understanding. We expected that ask-
ing participants to explain the mechanisms underlying
the policies would expose the illusion of explanatory
depth and lead to lower ratings of understanding, extend-
ing prior findings (e.g., Alter et al., 2010; Rozenblit & Keil,
2002) to the domain of political attitudes.
We predicted further that exposing the illusion of
explanatory depth would lead people to express more
moderate support for policies. We tested this prediction
in two ways. First, we had one group of participants pro-
vide ratings of their positions both before and after they
generated mechanistic explanations. We examined how
their degree of support changed and how this change
was associated with self-rated understanding of relevant
mechanisms. Recognizing that this within-subjects com-
parison could give rise to a demand effect, such that
some participants may have felt obliged to report a less
extreme judgment after providing a poor explanation,
we asked a second group to rate their policy support
only after generating explanations. This allowed for a
between-participant comparison in which we compared
the postexplanation ratings of this second group with the
preexplanation ratings of the first group.
Method

Participants and design. One hundred ninety-eight
U.S. residents were recruited using Amazon’s Mechanical
Turk and participated in return for a small payment. Par-
ticipants were 52% male and 48% female, with an aver-
age age of 33.3 years. Participants’ reported political
affiliations were 40% Democrat, 20% Republican, 36%
independent, and 4% other.
In the preexplanation-rating conditions (n = 87), par-
ticipants rated their position on policies both before and
after generating mechanistic explanations for them. In the
no-preexplanation-rating conditions (n = 111), participants
rated their position only after generating explanations.
Each participant generated mechanistic explanations for
two of the six policies. The six policies were blocked into
three groups of two each so that there were a total of six
conditions to which participants were randomly assigned
(three preexplanation-rating conditions and three no-pre-
explanation-rating conditions).
Materials and procedure. After answering demographic questions, participants in the preexplanation-rating
conditions were asked to state their position on six political
policies; responses were made using 7-point scales from 1,
strongly against, to 7, strongly in favor. The policies were
(a) imposing unilateral sanctions on Iran for its nuclear
program, (b) raising the retirement age for Social Security,
(c) transitioning to a single-payer health care system, (d)
establishing a cap-and-trade system for carbon emissions,
(e) instituting a national flat tax, and (f) implementing
merit-based pay for teachers. Participants in the no-preexplanation-rating conditions skipped these initial position ratings.
Next, all participants were trained to use a rating scale
to quantify their level of understanding of the policies.
Instructions were modeled on instructions used in
Rozenblit and Keil (2002), but rather than describing dif-
ferent levels of understanding for an object, our instruc-
tions described different levels of understanding for a
political issue (immigration reform) that was not included
as one of the issues in our experiment (see Rating-Scale
Instructions in the Supplemental Material available
online). After reading the instructions, participants were
asked to judge their level of understanding of the six
policies (e.g., “How well do you understand the impact
of imposing unilateral sanctions on Iran for its nuclear
program?”). Responses were made using a 7-point scale,
with higher scores indicating greater understanding.
After judging their understanding of all six policies,
participants were asked to provide a mechanistic expla-
nation for one of the six policies. Instructions for this
measure were also adapted from Rozenblit and Keil
(2002; see Example Instructions for Explanation- and
Reason-Generation Tasks in the Supplemental Material).
Participants were then asked to rerate their understand-
ing of the policy; to rate or rerate their position on the
policy; and to rate how certain they were of their posi-
tion, using a 5-point scale from 1, not at all certain, to 5,
extremely certain.
After completing these measures, participants repeated
the process for a second issue. The policies were blocked
such that participants explained either (a) the Iran issue
followed by the merit-pay issue, (b) the health care issue
followed by the Social Security issue, or (c) the cap-and-
trade issue followed by the flat-tax issue.
Results

Understanding. We analyzed judgments of understanding using a repeated measures analysis of variance
(ANOVA) with timing of judgment (preexplanation vs.
postexplanation) and issue number (first issue vs. second
issue) as within-subjects factors. All participants provided
both preexplanation and postexplanation ratings of
understanding, so these analyses used the full data set.
Our first prediction was that we would observe a decrease
in understanding judgments following mechanistic expla-
nation. This prediction was confirmed by a significant
main effect of judgment timing: Postexplanation ratings
of understanding (M = 3.45, SE = 0.12) were lower than
preexplanation ratings (M = 3.82, SE = 0.11), F(1, 197) =
34.69, p < .001, ηₚ² = .15. We found the same pattern
across all six policies. To test whether the effect general-
ized across stimuli, we collapsed over participants and
compared average change in understanding due to
explanation across the six policies. This effect was also
significant, t(5) = 5.74, p < .01. There was also an unex-
pected main effect of issue number, such that partici-
pants reported having a better understanding of the
first issue than of the second, F(1, 197) = 76.18, p < .001,
ηₚ² = .28. However, issue number did not interact with
judgment timing, F(1, 197) = 1.45, p > .23.
Position extremity. We transformed raw ratings of
positions on policies into a measure of position extremity
by subtracting the midpoint of the scale (4) and taking
the absolute value. We first compared position-extremity
scores before and after explanation for participants in
the preexplanation-rating conditions. We conducted a
repeated measures ANOVA with timing of judgment (pre-
explanation vs. postexplanation) and issue number (first
issue vs. second issue) as within-subjects factors. We pre-
dicted that positions would become more moderate fol-
lowing explanation. This prediction was confirmed, with
the main effect of judgment timing significant (preexpla-
nation-rating conditions: M = 1.41, SE = 0.07; postexpla-
nation-rating conditions: M = 1.28, SE = 0.08), F(1, 86) =
6.10, p = .016, ηₚ² = .066. As with understanding, the
pattern for position extremity was the same across all six
policies, and the test of the moderation effect over the six
policies was significant, t(5) = 3.93, p = .011. However,
two of the policies (merit pay and Social Security) showed
very small differences. Also consistent with our findings
regarding judgments of understanding, results revealed
an unexpected main effect of issue number, such that
extremity scores for the first issue were lower than those
for the second, F(1, 86) = 10.10, p < .01, ηₚ² = .11. Again,
issue number did not interact with judgment timing, F(1,
86) = 0.21, p > .64.
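The extremity transform described above is simple enough to sketch in code. This is a minimal illustration with hypothetical ratings, not the study's data:

```python
# Position extremity = |rating - scale midpoint|, so scores on the paper's
# 7-point scale run from 0 (neutral) to 3 (strongly for or against).

def extremity(rating, midpoint=4):
    """Distance of a 1-7 position rating from the scale midpoint."""
    return abs(rating - midpoint)

# Hypothetical pre- and post-explanation position ratings for one participant:
pre = [7, 1, 6, 2]    # strongly held positions on four policies
post = [6, 2, 5, 3]   # same directions, but moderated after explaining

pre_ext = [extremity(r) for r in pre]    # [3, 3, 2, 2]
post_ext = [extremity(r) for r in post]  # [2, 2, 1, 1]

# Moderation = mean drop in extremity following explanation
drop = sum(p - q for p, q in zip(pre_ext, post_ext)) / len(pre)  # 1.0
```

Note that the transform discards the direction of support, which is why a rating that moves from 7 toward the midpoint and one that moves from 1 toward the midpoint both count as moderation.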
We also conducted a between-subjects comparison of
extremity of policy support by comparing initial position
ratings made by the preexplanation-rating group with the
postexplanation ratings made by the no-preexplanation-
rating group. We conducted an ANOVA with issue num-
ber as a within-subjects factor and judgment timing
(before explanation vs. after explanation) as a between-
subjects factor. As predicted, there was a significant effect
of judgment timing: Judgments made after explanations
were less extreme than were judgments made before
explanations (preexplanation-rating condition: M = 1.41,
SE = 0.07; postexplanation-rating condition: M = 1.19,
SE = 0.08), F(1, 196) = 3.97, p < .05, ηₚ² = .020, a result
that replicated the moderation effect observed for participants who did not give preexplanation ratings of their positions.
Relation between understanding and position extremity. Finally, we assessed correlations between
postexplanation position extremity and change in
reported understanding to provide evidence that reduc-
ing the illusion of depth led participants to express more
moderate views. Indeed, an analysis of participant-item
pairs revealed a significant negative correlation between
the average magnitude of the change in reported under-
standing and the extremity of the position after explana-
tion, r = −.19, p < .01. We also examined participants’
judgments of how certain they were of their positions
after explanation. Uncertainty (i.e., reverse-coded cer-
tainty) was negatively correlated with position extremity,
r = −.75, p < .001, and positively correlated with the mag-
nitude of change in understanding, r = .31, p < .001. Our
interpretation of this pattern is that attempting to explain
policies made people feel uncertain about them, which
in turn made them express more moderate views. This
interpretation was supported by mediation analysis
(Preacher & Hayes, 2008), which revealed that the effect
of change in understanding on extremity was mediated
by a significant indirect effect of uncertainty, with a 95%
confidence interval excluding 0 [.113, .309].
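The bootstrap logic behind this kind of indirect-effect test can be sketched in the spirit of Preacher and Hayes (2008). The data, effect sizes, and variable names below are invented for illustration and do not reproduce the authors' analysis:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for the three measures (not the authors' data):
# X = magnitude of change in understanding, M = uncertainty,
# Y = postexplanation position extremity.
n = 200
X = rng.normal(size=n)
M = 0.5 * X + rng.normal(size=n)              # mediator driven by X
Y = -0.4 * M + 0.05 * X + rng.normal(size=n)  # outcome driven mostly by M

def indirect_effect(X, M, Y):
    """a*b from the two mediation regressions: M ~ X and Y ~ X + M."""
    a = np.linalg.lstsq(np.column_stack([np.ones_like(X), X]), M, rcond=None)[0][1]
    b = np.linalg.lstsq(np.column_stack([np.ones_like(X), X, M]), Y, rcond=None)[0][2]
    return a * b

# Percentile bootstrap of the indirect effect
boots = []
for _ in range(2000):
    idx = rng.integers(0, n, n)  # resample participant-item pairs
    boots.append(indirect_effect(X[idx], M[idx], Y[idx]))
lo, hi = np.percentile(boots, [2.5, 97.5])

# Mediation is indicated when the 95% CI [lo, hi] excludes 0.
```

The percentile bootstrap avoids assuming that the product a*b is normally distributed, which is why it is preferred over the Sobel test for indirect effects.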
Discussion

As predicted, asking people to explain how policies work
decreased their reported understanding of those policies
and led them to report more moderate attitudes toward
those policies. We observed these effects both within and
between participants. Change in understanding correlated
with position extremity, such that participants who exhib-
ited greater decreases in reported understanding also
tended to exhibit greater moderation of their positions.
Results from a mediation analysis suggested that this rela-
tionship was mediated by position uncertainty. Taken
together, these results suggest that initial extreme posi-
tions were supported by unjustified confidence in under-
standing and that asking participants to explain how
policies worked decreased their sense of understanding,
leading them to endorse more moderate positions.
Experiment 2: Generating Mechanistic Explanations Versus Enumerating Reasons

The goal of Experiment 2 was to examine whether the
attitude-moderation effect observed in Experiment 1 was
driven specifically by an attempt to explain mechanisms
or merely by deeper engagement and consideration of
the policies. To induce some participants to deliberate
without explaining mechanisms, we asked one group to
enumerate reasons why they held the policy attitude they
did. Listing reasons why one supports or opposes a pol-
icy does not necessarily entail explaining how that policy
works; for instance, a reason can appeal to a rule, a
value, or a feeling. Prior research has suggested that
when people think about why they hold a position, their
attitudes tend to become more extreme (for a review, see
Tesser, Martin, & Mendolia, 1995), in contrast to the
results observed in Experiment 1. Thus, we predicted that
asking people to list reasons for their attitudes would
lead to less attitude moderation than would asking them
to articulate mechanisms.
Method

For participants in the mechanism conditions, methods
were almost identical to those used for the preexplana-
tion-rating conditions of Experiment 1. For participants in
the reasons conditions, we modified instructions for the
explanation task so that participants were asked to enu-
merate reasons for their position rather than generate a
mechanistic explanation of it (see Example Instructions
for Explanation- and Reason-Generation Tasks in the
Supplemental Material). We made two additional changes
from Experiment 1, omitting the measure of certainty and
adding an attention filter to the end of the questionnaire.
We also dropped conditions that involved the Iran and
merit-pay issues because of a programming error.
One hundred forty-one individuals were recruited
using Amazon’s Mechanical Turk and participated in
return for a small payment. Participants were assigned to
the remaining four conditions (two reasons and two
mechanism conditions covering either the Social Security
and health care issues or the flat-tax and cap-and-trade
issues); 112 of these passed the attention filter (mecha-
nism conditions: n = 47; reasons conditions: n = 65) and
were included in the analyses. These participants were
50% male and 50% female, and their average age was
33.9 years. Participants’ reported political affiliations were
43% Democrat, 19% Republican, 36% independent, and
4% other.
Results

Replication of Experiment 1. To examine whether
results from the mechanism conditions replicated our
results from Experiment 1, we submitted judgments of
understanding to the same repeated measures ANOVA,
which yielded similar results. There was a significant main
effect of judgment timing on reported understanding,
such that reported understanding decreased following
mechanistic explanation, F(1, 46) = 20.39, p < .001, ηₚ² = .31. As in Experiment 1, we found the same pattern across
all policies. The unexpected main effect of issue number
was also significant and, again, there was no significant
interaction. Also replicating Experiment 1, results revealed
that participants endorsed more moderate positions fol-
lowing mechanistic explanations, F(1, 46) = 7.32, p < .01, ηₚ² = .14. Finally, change in understanding and extremity
change were again significantly correlated, r = .34,
p < .05, which suggests that larger reductions in rated
understanding following explanation led to less extreme positions.
Mechanistic explanations versus reasons. We next
compared the magnitude of change in reported under-
standing and position extremity across the mechanism
and reasons conditions (see Figs. 1a and 1b). We observed
a small effect on judgments of understanding in the
reasons conditions: Reported understanding slightly de-
creased after participants enumerated reasons, F(1, 64) =
7.51, p < .01, ηₚ² = .11. Analysis of the individual reasons
given by participants showed that this trend was driven
by participants who could provide no reason for their
position (see Analysis of Reasons Given in Experiment 2
in the Supplemental Material for further details). More
important, and as predicted, the decrement in under-
standing after enumerating reasons was smaller than the
decrement following mechanistic explanation, as reflected
by a significant interaction between judgment timing and
condition, F(1, 110) = 6.64, p < .01, ηₚ² = .057. With regard
to extremity of positions, there was no change after enu-
merating reasons, F(1, 64) < 1, n.s. Moreover, as predicted,
the change in position in the reasons conditions was
smaller than in the mechanism conditions, as reflected by
a significant interaction between judgment timing and
condition on extremity scores, F(1, 110) = 3.90, p < .05, ηₚ² = .034.
Discussion

Experiment 2 replicated the results of Experiment 1 and
showed further that reductions in rated understanding of
policies were less pronounced among participants who
enumerated reasons for their positions than among par-
ticipants who generated causal explanations for them.
Moreover, enumerating reasons did not lead to any
change in position extremity. Contrary to findings from
some previous studies, the results showed that reason
generation did not increase overall attitude extremity,
although an analysis of individual reasons suggested that
it did increase overall attitude extremity when partici-
pants provided a reason that was an evaluation of the
policy. Other types of reasons led to no change (see
Analysis of Reasons Given in Experiment 2 in the
Supplemental Material).
Experiment 3: Decision Making
In Experiment 3, we examined whether the moderating
effect of mechanistic explanations on political attitudes
demonstrated in Experiments 1 and 2 would extend to
political decisions. As in Experiment 2, participants first
rated their position on a given policy and then provided
either a mechanistic explanation of it or reasons why
they supported or opposed it. Next, they chose whether
or not to donate a bonus payment to a relevant advocacy
group. We predicted that participants’ initial level of sup-
port for the policy would be more weakly associated
with their subsequent likelihood of donating in the
mechanism condition than in the reasons condition
because articulating mechanisms attenuates attitude extremity more than does listing reasons. Thus, we predicted an interaction between the extremity of initial policy support and condition (reasons vs. mechanism) on likelihood of donation.

Fig. 1. Results from Experiment 2: (a) judged understanding of policies and (b) extremity of positions on policies as a function of condition (mechanism vs. reasons) and timing of judgment (preexplanation vs. postexplanation). Understanding was rated on scales from 1 to 7, with higher scores indicating greater understanding. Extremity scores could range from 0 to 3, with higher scores reflecting stronger attitudes in favor of or against a given policy. Error bars represent ±1 SE.
Method

We recruited 101 U.S. residents (59% male, 41% female;
average age = 37.3 years) using the same methods used
for participant recruitment in Experiment 1. Nine partici-
pants did not pass the attention filter and were excluded
from subsequent analysis. Participants first provided their
position on the six policies, as in the two previous exper-
iments. They were then assigned to one of four condi-
tions and asked to elaborate on one of two policies:
cap and trade or flat tax. Depending on condition, par-
ticipants were asked either to generate a mechanistic
explanation (n = 45) or to enumerate reasons for their
position (n = 47), following the same instructions used in
Experiment 2. Next, participants were told that they
would receive a bonus payment (20 cents; equal to 20%
of their compensation for completing the experiment)
and that they had four options for what they could do
with this bonus payment. They could (a) donate it to
a group that advocated in favor of the issue in question,
(b) donate it to a group that advocated against the issue,
(c) keep the money for themselves (after answering a few
additional questions), or (d) turn it down.
Results and discussion
Figure 2 illustrates the likelihood of donating as a function
of initial level of policy support for the mechanism and
reasons conditions (no participants chose to donate to a
group that advocated against their stated position). Our
key prediction was that there would be an interaction
between initial extremity of policy support and condition,
such that greater extremity would lead to a greater likeli-
hood of donation among participants in the reasons con-
dition but that this tendency would be attenuated in the
mechanism condition. We tested this prediction using
logistic regression. The dependent variable was whether
the participant chose to donate. The independent vari-
ables were initial extremity of policy support, condition
(reasons vs. mechanism), and their interaction. As pre-
dicted, there was a significant interaction between initial
extremity of policy support and condition, Wald χ²(1) = 6.05, p = .014.
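A logistic regression with this interaction term can be sketched on synthetic data. Everything below, including the data-generating assumptions and coefficient values, is hypothetical and chosen only to mimic the predicted pattern, not taken from the authors' data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic donation data: donation becomes more likely with initial
# extremity under "reasons" but stays flat under "mechanism".
n = 400
extremity_init = rng.integers(0, 4, n).astype(float)  # initial extremity, 0-3
mechanism = rng.integers(0, 2, n).astype(float)       # 1 = mechanism condition
true_logit = -1.0 + 1.2 * extremity_init - 1.2 * extremity_init * mechanism
donate = (rng.random(n) < 1 / (1 + np.exp(-true_logit))).astype(float)

# Design matrix: intercept, extremity, condition, extremity x condition
X = np.column_stack([np.ones(n), extremity_init, mechanism,
                     extremity_init * mechanism])

# Fit the logistic regression by Newton-Raphson
beta = np.zeros(4)
for _ in range(25):
    p = 1 / (1 + np.exp(-X @ beta))
    grad = X.T @ (donate - p)                     # score vector
    hess = (X * (p * (1 - p))[:, None]).T @ X     # observed information
    beta += np.linalg.solve(hess, grad)

# beta[3] estimates the key interaction: a negative value means initial
# extremity predicts donation more weakly in the mechanism condition.
```

A spotlight test of the kind reported next amounts to recentering `extremity_init` at its lowest or highest value before fitting, so that the condition coefficient is evaluated at that level of the moderator.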
To interpret this interaction, we used spotlight tests
(Irwin & McClelland, 2001) at the high and low levels
of initial extremity. At the lowest level of initial support,
there was no difference in likelihood of donating between
the mechanism and reasons conditions, Wald χ²(1) =
1.78, p > .18, but at the highest level of initial support,
participants in the reasons condition were more likely to
donate than were those in the mechanism condition,
Wald χ²(1) = 6.74, p < .01.
The results of Experiment 3 suggest that among par-
ticipants who initially held a strong position, attempting
to generate a mechanistic explanation attenuated their
positions, thereby making them less likely to donate.
Consistent with our findings showing a lack of attitude
moderation in the reasons condition of Experiment 2,
results revealed that initial position extremity was corre-
lated with likelihood of donation in the reasons condi-
tion of Experiment 3, which suggests that enumerating
reasons did not have the same moderating effect as
mechanistic explanation.
General Discussion
Across three studies, we found that people have unjusti-
fied confidence in their understanding of policies.
Attempting to generate a mechanistic explanation under-
mines this illusion of understanding and leads people
to endorse more moderate positions. Mechanistic-
explanation generation also influences political behavior,
making people less likely to donate to relevant advocacy
groups. These moderation effects on judgment and deci-
sion making do not occur when people are asked to
enumerate reasons for their position. We propose that
generating mechanistic explanations leads people to
endorse more moderate positions by forcing them to
confront their ignorance. In contrast, reasons can draw
on values, hearsay, and general principles that do not
require much knowledge.
Previous research has shown that intensively educat-
ing citizens can improve the quality of democratic deci-
sions following collective deliberation and negotiation
(Fishkin, 1991). One reason for the effectiveness of this
strategy may be that educating citizens on how policies
work moderates their attitudes, increasing their willingness to explore opposing views and to compromise.

Fig. 2. Results from Experiment 3: likelihood of donating to an advocacy group as a function of condition (mechanism vs. reasons) and initial extremity of position toward a policy.
More generally, the present results suggest that political
debate might be more productive if partisans first engaged
in a substantive and mechanistic discussion of policies
before engaging in the more customary discussion of
preferences and positions. However, fostering productive
discourse among people who have different political
stances faces obstacles and can have consequences that
fall outside the scope of the current research. Future
research should explore the benefits of mechanistic
explanation in more ecologically valid civil-discourse settings.
Our results suggest a corrective for several psychologi-
cal phenomena that make polarization self-reinforcing.
People often are unaware of their own ignorance (Kruger
& Dunning, 1999), seek out information that supports
their current preferences (Nickerson, 1998), process new
information in biased ways that strengthen their current
preferences (Lord, Ross, & Lepper, 1979), affiliate with
other people who have similar preferences (Lazarsfeld &
Merton, 1954), and assume that other people’s views are
as extreme as their own (Van Boven, Judd, & Sherman,
2012). In sum, several psychological factors increase
extremism, and attitude polarization is therefore hard to
avoid. Explanation generation will by no means eliminate
extremism, but our data suggest that it offers a means of
counteracting a tendency supported by multiple psycho-
logical factors. In that sense, it promises to be an effective
debiasing procedure.
Acknowledgments

The authors thank Julia Kamin, Julia Shube, and Jacob Cohen
for help with data collection and John Lynch, Jake Westfall,
Donnie Lichtenstein, Pete McGraw, Bart De Langhe, Meg
Campbell, and Ji Hoon Jhang for helpful conversations.
Declaration of Conflicting Interests
The authors declared that they had no conflicts of interest with
respect to their authorship or the publication of this article.
Supplemental Material
Additional supporting information may be found at http://pss
References

Alter, A. L., Oppenheimer, D. M., & Zemla, J. C. (2010). Missing
the trees for the forest: A construal level account of the
illusion of explanatory depth. Journal of Personality and
Social Psychology, 99, 436–451.
Delli Carpini, M. X., & Keeter, S. (1996). What Americans know
about politics and why it matters. New Haven, CT: Yale
University Press.
Fernbach, P. M., Sloman, S. A., St. Louis, R., & Shube, J. N.
(2013). Explanation fiends and foes: How mechanistic
detail determines understanding and preference. Journal of
Consumer Research, 39, 1115–1131.
Fishkin, J. S. (1991). Democracy and deliberation: New direc-
tions for democratic reform (Vol. 217). New Haven, CT:
Yale University Press.
Hirt, E. R., & Markman, K. D. (1995). Multiple explanation: A
consider-an-alternative strategy for debiasing judgments.
Journal of Personality and Social Psychology, 69, 1069–
Irwin, J. R., & McClelland, G. H. (2001). Misleading heuristics
and moderated multiple regression models. Journal of
Marketing Research, 38, 100–109.
Keil, F. C. (2003). Folkscience: Coarse interpretations of a com-
plex reality. Trends in Cognitive Sciences, 7, 368–373.
Krosnick, J. A., & Petty, R. E. (1995). Attitude strength: An over-
view. In R. E. Petty & J. A. Krosnick (Eds.), Attitude strength:
Antecedents and consequences (pp. 1–24). Mahwah, NJ:
Kruger, J., & Dunning, D. (1999). Unskilled and unaware of it:
How difficulties in recognizing one’s own incompetence
lead to inflated self-assessments. Journal of Personality and
Social Psychology, 77, 1121–1134.
Lazarsfeld, P. F., & Merton, R. K. (1954). Friendship as social
process: A substantive and methodological analysis. In M.
Berger, T. Abel, & C. H. Page (Eds.), Freedom and control
in modern society (pp. 18–66). New York, NY: Octagon
Linville, P. W. (1982). The complexity-extremity effect and
age-based stereotyping. Journal of Personality and Social
Psychology, 42, 193–211.
Linville, P. W. (1985). Self-complexity and affective extremity:
Don’t put all of your eggs in one cognitive basket. Social
Cognition, 3, 94–120.
Linville, P. W. (1987). Self-complexity as cognitive buffer against
stress-related illness and depression. Journal of Personality
and Social Psychology, 52, 663–676.
Lord, C. G., Ross, L., & Lepper, M. R. (1979). Biased assimilation
and attitude polarization: The effects of prior theories on
subsequently considered evidence. Journal of Personality
and Social Psychology, 37, 2098–2109.
Nickerson, R. S. (1998). Confirmation bias: A ubiquitous phe-
nomenon in many guises. Review of General Psychology,
2, 175–220.
Preacher, K. J., & Hayes, A. F. (2008). Asymptotic and resam-
pling strategies for assessing and comparing indirect effects
in multiple mediator models. Behavior Research Methods,
40, 879–891.
Ross, L., Lepper, M. R., Strack, F., & Steinmetz, J. (1977). Social
explanation and social expectation: Effects of real and
hypothetical explanations on subjective likelihood. Journal
of Personality and Social Psychology, 35, 817–829.
Rozenblit, L., & Keil, F. C. (2002). The misunderstood limits of
folk science: An illusion of explanatory depth. Cognitive
Science, 26, 521–562.
Russell, B. (1928/1996). Sceptical essays. New York, NY:
Routledge Classics.
Schickel, R. (2005, Feb. 20). Clint Eastwood on “Baby.” Time
Magazine. Retrieved from
Schkade, D., Sunstein, C. R., & Hastie, R. (2010). When delib-
eration produces extremism. Critical Review: A Journal of
Politics and Society, 22, 227–252.
Tesser, A. (1978). Self-generated attitude change. In L. Berkowitz
(Ed.), Advances in experimental social psychology (Vol. 11,
pp. 289–338). New York, NY: Academic Press.
Tesser, A., Martin, L., & Mendolia, M. (1995). The impact of thought on attitude extremity and attitude-behavior consistency. In R. E. Petty & J. A. Krosnick (Eds.), Attitude strength: Antecedents and consequences (pp. 73–92). Mahwah, NJ:
Van Boven, L., Judd, C. M., & Sherman, D. K. (2012). Political
polarization projection: Social projection of partisan attitude
extremity and attitudinal processes. Journal of Personality
and Social Psychology, 103, 84–100.