Judgment and Decision Making, Vol. 15, No. 6, November 2020, pp. 909–925
Inducing feelings of ignorance makes people more receptive to expert
(economist) opinion
Ethan A. Meyers, Martin H. Turpin, Michał Białek, Jonathan A. Fugelsang and Derek J. Koehler
University of Waterloo; University of Wrocław. Email: emeyers@uwaterloo.ca.
Copyright: © 2020. The authors license this article under the terms of the Creative Commons Attribution 3.0 License.
Abstract
While they usually should, people do not revise their beliefs more to expert (economist) opinion than to lay opinion.
The present research sought to better understand the factors that make it more likely for an individual to change their mind
when faced with the opinions of expert economists versus the general public. Across five studies we examined the role that
overestimation of knowledge plays in this behavior. We replicated the finding that people fail to privilege the opinion of experts
over the public across two different (Study 1) and five different (Study 5) economic issues. We further find that undermining
an illusion of both topic-relevant (Studies 2–4) and -irrelevant knowledge (Studies 3 and 4) leads to greater normative belief
revision in response to expert rather than lay opinion. We suggest that one reason people fail to revise their beliefs more in
response to experts is that they think they know more than they really do.
Keywords: belief revision, expertise, overestimation, explaining, ignorance
1 Introduction
The whole problem of the world is that fools and
fanatics are always so certain of themselves, but
wiser people so full of doubts. (attributed to
Bertrand Russell)
Are wiser people more doubtful, or does experiencing
doubt make one wiser? This is an old debate, and thinkers as
far back as those in ancient Greece have weighed in on this
fundamental question. In the opinion of arguably the wisest
man in Greece, Socrates, the feature which makes one wise is
recognizing the limits of one’s knowledge: “I am wiser than
this man, for neither of us appears to know anything great
and good; but he fancies he knows something, although he
knows nothing; whereas I do not know anything, so I do not
fancy I do.” (Apology, 21d). The key feature of wisdom, in the
opinion of one great ancient thinker, is to recognize what one
knows and does not know, and to adapt behavior in line with
these limitations. Experts in a topic may provide a useful
measuring stick against which non-experts can compare their
understanding. Indeed, the degree to which we are willing
to defer to the opinion of experts demonstrates the wisdom
that comes with understanding the limits of our knowledge.
Nevertheless, people often disregard the opinion of ex-
perts in favor of their own unlearned intuition, or the opinion
of people similarly unknowledgeable to themselves. That is,
people should defer more to experts than to lay opinion but,
puzzlingly, they often do not (Johnston & Ballard, 2016).
What underlies this behavior, and more pressingly, how can
we help people weight the opinion of those with demon-
strated expertise more heavily when making decisions?
The highly specialized world of today should dictate that
decisions of epistemic authority, choosing when to think
for oneself versus deferring to experts, would usually favor
deferring (Pierson, 1994), especially when we lack the neces-
sary background to understand the information we receive (Keil,
2010). People tend to behave, however, in a manner that sug-
gests that experts possess an authority on decisions (i.e., how
to do things), but not necessarily on beliefs and values (i.e.,
which things to do) (Zagzebski, 2012). For example, peo-
ple might defer to experts on how to efficiently trade with a
foreign country, but not on whether that country should or
should not be traded with. In the latter case people tend to
be influenced by the opinions of the general public as much
as if not more than the opinions of professional economists
(Johnston & Ballard, 2016). That is, people appear to find
the views of their peers just as convincing as those of experts
when considering how to adjust their normative beliefs¹ in
response to new information.
Why are experts not more influential than the average cit-
izen when it comes to normative belief adjustment? One
idea is that humans have an intuitive tendency to conflate
the knowledge of others with their own (Rabb et al., 2019).
¹ Throughout this paper we refer to normative beliefs as beliefs about the
optimal course of action when presented with a choice.
People explicitly recognize the division of cognitive labor
(Bromme et al., 2010; Kitcher, 1990); that is, they under-
stand, even at a young age, that people differ in their lev-
els of obtained expertise (Keil et al., 2008; Landrum &
Mills, 2015). When tasked to judge their own understand-
ing of a complex phenomenon, people judge it to be greater
when also instructed that experts fully understand the phe-
nomenon, compared to when experts do not fully understand
the phenomenon (Sloman & Rabb, 2016). This effect may
arise because people tend to implicitly conflate their mark-
ers of who possesses such technical knowledge with
their own actual knowledge of the topic (Rabb et al., 2019).
So, even though humans can identify economists as having
privileged knowledge about an intricate process (e.g., the
effects of international trade on citizenry), the economists’
very possession of such knowledge leads many non-experts
to mistakenly believe they understand it too (at least in
some part). An economist (expert) in this case then has little
to offer to non-experts in terms of specialized knowledge,
because they already feel as if they possess the knowledge.
There is no shortage of cases in which people overestimate
how much they know about a particular topic (Dunning et
al., 2003; Fernbach, Rogers, et al., 2013; Fernbach, Sloman,
et al., 2013; Keil, 2003; Kruger & Dunning, 1999; Moore
& Healy, 2008; Rozenblit & Keil, 2002) even in the context
of economics (Ortoleva & Snowberg, 2015). A collection
of these cases is reflected in the tendency to believe that one
understands (and is capable of explaining in detail) both in-
herently complex as well as ostensibly simple phenomena.
This has been called the “Illusion of Explanatory Depth”
(Rozenblit & Keil, 2002). In these instances, one reason
people may mistake their superficial knowledge for in-depth
knowledge of a phenomenon is because they internally fail
to distinguish between their markers for that knowledge and
the exact knowledge it marks (Rabb et al., 2019). So, in
cases where people hold an illusion of explanatory depth,
they are unlikely to credit experts with possessing privileged
information that they themselves do not possess. One con-
sequence could be that people fail to revise their normative
beliefs more to opinion from experts than opinion from ran-
dom members of the public.
Importantly, the illusion of explanatory depth can be eas-
ily exposed. When asked to explain the mechanics of a
process in detail, people become aware of the gaps in their
knowledge of the causal structure and are then confronted
with the actual limits of their expertise. (This effect may
be analogous to making people aware of known unknowns;
see Walters et al., 2016.) This leads to a recalibration of
their perceived knowledge, as people tend to adjust their
understanding claim downward. Such a process has been
demonstrated to apply to everyday objects like the mechan-
ics underpinning the function of toilets and toasters (e.g.,
Rozenblit & Keil, 2002), and to complex social policies like
immigration and trade (e.g., Fernbach et al., 2013; Vitriol
& Marsh, 2018). Moreover, people may not even have to
explicitly generate an explanation for the illusion of knowl-
edge to be exposed, as simply reflecting on how well one can
explain the mechanistic process of how something works re-
duces overestimation of knowledge (Johnson et al., 2016).
This result suggests that having to provide a causal expla-
nation for some phenomenon reveals the gaps in one’s own
knowledge. Recognizing the limits of one’s own knowledge
can have downstream effects such as reducing political ex-
tremism (Fernbach et al., 2013) or, speculatively, increasing
the tendency to privilege expert consensus when given an
opportunity to change our opinion on a matter of economic
policy.
In this research we tested whether overestimation of
knowledge can explain how people revise their normative
beliefs given expert and lay consensus. We hypothe-
sized that inducing a feeling of ignorance might be an effi-
cient method for getting people to rely on more valid sources
of information (i.e., from experts) over less valid ones (i.e.,
from the public). In particular, if people believe they al-
ready understand something to a much greater extent than
they really do, they may not appreciate the vast difference in
expertise between laypeople and experts. Thus, it is possible
that participants will find the utility of expert opinion to be
equivalent to that of members of the public unless their illu-
sions of explanatory depth have been exposed. We propose
that lowering confidence in perceived understanding by ex-
posing an illusion of explanatory depth would increase the
perceived utility of experts by making people aware that their
markers for the knowledge (e.g., economists know X) were
not representative of their actual knowledge (e.g., I know
X), making them more willing to credit people who are likely
to possess that specialized knowledge of X. We would then
expect this to lead to greater normative belief revision in
response to expert rather than lay opinion.
We devised five studies to test the claim that exposing
an illusion of knowledge will increase the influence of ex-
perts. The first study replicated the main finding of Johnston
and Ballard (2016) that people fail to adjust their normative
beliefs more to expert rather than lay consensus. The sec-
ond study introduced the explanation paradigm: participants
were asked to provide a mechanistic, step-by-step explana-
tion for exactly how something worked. This procedure led
to greater normative belief revision in response to experts,
as opposed to lay opinion. The third study replicated the
results of the second and provided evidence for the claim that
undermining an illusion of even topic-irrelevant knowledge
can lead to greater normative belief revision in response
to expert rather than lay opinion. The fourth study repli-
cated this finding across five different economic issues. The
fifth study, also using the expanded set of issues, included a
control condition that again replicated the main finding that
people do not revise more to experts than to laypeople.
2 Study 1
We first attempted to replicate the original finding of John-
ston and Ballard (2016) that people fail to revise normative
beliefs to be more in line with expert opinion, instead pre-
ferring the opinions of lay people. However, we made an
important design change from the original work: we im-
plemented a pre-post design to test whether the findings are
consistent across a within-subjects manipulation. Further,
participants in our study responded to more than one eco-
nomic issue.
2.1 Method
The materials and data for each study can be found on the
Open Science Framework here: https://osf.io/2pzbe/.
2.1.1 Participants
We recruited 204² participants via Mechanical Turk who
were required to be United States citizens above the age of
18 and have a HIT approval rating of at least 90%. No other
recruitment restrictions were applied. All studies reported in
this paper followed these restriction criteria. Participants were
mostly white (79%), male (60%), had obtained at least some
level of post-secondary education (83%), and were between
the ages of 18 to 69 (M = 33.12, SD = 9.54).
2.1.2 Procedure
A brief overview of the procedure of this study (and all
studies reported in the paper) can be found in Table 1 be-
low. The design of the study was a 2 Time (Pre-Consensus
Judgment/ Post-Consensus Judgment) by 2 Source of Con-
sensus (Economists/ General Public) mixed design. Time
was a within-subjects factor while Source of Consensus
was a between-subjects factor. In the study, participants
were asked to rate their agreement with an economic is-
sue statement twice each for two separate issues. For the
first agreement rating, participants were presented the state-
ment plainly (e.g., “Trade with China makes most Ameri-
cans better off”) and asked to rate their agreement on a 1
(strongly disagree) to 5 (strongly agree) scale with 3 rep-
resenting uncertainty. This judgment was labelled the Pre-
Consensus Judgment. The participants were then presented
with the “consensus information” (described below, also see
Table 2) said to be from their assigned source (either Pro-
fessional Economists or General Public). With this informa-
tion present, the participants then re-rated their agreement
with the statement. This judgment was labelled the Post-
Consensus Judgment. After providing the agreement judg-
ments for one statement, they repeated this process for the
other statement. Finally, each participant responded to two
“trust in economists” questions.

² This sample size was determined via a power calculation affording us
80% power to detect an effect of d = .2. We based our effect size estimate
on previous research (e.g., Coppock, 2018; Johnston & Ballard, 2016) and
on a reasonable smallest effect size of interest (Lakens, 2017).
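As a rough illustration of how a power calculation like the one in footnote 2 can be run, the sketch below uses the statsmodels library; it assumes a two-sided paired t-test on the pre-post change, which is our reading of the design rather than a detail stated in the paper.

```python
# A sketch of the power calculation described in footnote 2 (80% power to
# detect d = .2). The choice of a two-sided paired t-test is an assumption;
# the paper does not state the exact test the calculation assumed.
from math import ceil
from statsmodels.stats.power import TTestPower

n = TTestPower().solve_power(effect_size=0.2, alpha=0.05, power=0.80)
print(ceil(n))  # roughly 199, in line with the recruited sample of 204
```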
The economic statements used in this experiment were
two selected from Johnston and Ballard (2016): a “gold
standard” statement and a “trade with China” statement. Ta-
ble 2 contains all the economic issues used throughout the
presented work and their accompanying consensus informa-
tion. As suggested by the original authors, the key distinction
between these two statements is the prior beliefs held by par-
ticipants. Johnston and Ballard (2016) found that most of
their sample had an opinion regarding the benefit of trading
with China on the US economy, but few had prior opinions on
whether the US should or should not be on a gold standard.
The consensus information was provided in terms of re-
sponses to the same statement participants were judging, but
said to have been made by 100 members of their assigned
source (either Professional Economists or General Public)
with varying political preferences. The economic statements
and the levels of consensus used by Johnston and Ballard
(2016) (and adapted for the current work) were taken from
the Initiative on Global Markets’ (IGM) panel of economists;
thus, the consensus on each issue represents the opinions of
actual economists, and the diversity of opinions for each issue
is unique (Table 2).
Of the two “trust in economists” questions, the first assessed
the extent to which the participant trusted the opinions of
professional economists when thinking about economic pol-
icy issues. The second assessed the extent to which the
participant thought that members of Congress should rely
on the opinions of professional economists when crafting
public policy on economic issues.
2.2 Results
To test whether participants’ normative beliefs were revised
in accordance with consensus information and whether the
source of the information mattered, we assessed the differ-
ence in agreement judgments across Time. In other words,
we examined the change in agreement with the economic
statement from the Pre-Consensus judgment to the Post-
Consensus judgment. For both statements, we found a main
effect of Time such that there was a shift in agreement with
the statement (consistent with consensus information) af-
ter having been provided consensus information (both ps
< .001). However, we did not find a Source of Consensus
by Time interaction for either statement (both ps > .135),
indicating that participants did not exhibit greater change in
agreement to the opinion of experts compared to the opinion
of laypeople. Table 3 contains the descriptive statistics and
inferential test statistics for this study. Together these results
demonstrate that while people do revise their normative be-
liefs in accordance with consensus information, the source
of the consensus (Professional Economists or General Pub-
lic) appears to play no role in such revision, replicating the
results of Johnston and Ballard (2016).
Table 1: Overview of the procedural steps for each study.

Writing Task (Illusion of Explanatory Depth Paradigm); components, in order:
  1. Understanding Judgment 1: Participants rate their understanding of [Topic] on a 1 (little understanding) to 7 (thorough understanding) scale.
  2. Explanation Generation: Participants generate an explanation of [How] [Topic] works, [Why] they hold their position on [Topic], or copy a block of text (control condition).
  3. Understanding Judgment 2: Participants re-rate their understanding.

Agreement Rating Task; components, in order:
  1. Pre-Consensus Judgment: Participants judge agreement with an economic issue on a 1 (strongly disagree) to 5 (strongly agree) scale.
  2. Consensus Information Provided: Consensus information said to be from [professional economists / members of the general public] is provided.
  3. Post-Consensus Judgment: Participants judge agreement with the economic issue again.

Steps by study:
  Study 1: Step 1: Agreement Rating Task. Step 2: Agreement Rating Task for a second economic issue.
  Study 2: Step 1: Pre-Writing agreement judgment (for half of sample only). Step 2: Writing Task (Related How). Step 3: Agreement Rating Task.
  Study 3: Step 1: Pre-Writing agreement judgment. Step 2: Writing Task (Related How or Unrelated How). Step 3: Agreement Rating Task.
  Study 4: Step 1: Pre-Writing agreement judgment. Step 2: Writing Task (Related How, Unrelated How, or Related Why). Step 3: Agreement Rating Task.
  Study 5: Step 1: Pre-Writing agreement judgment. Step 2: Writing Task (Related How or Control Task). Step 3: Agreement Rating Task.
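To make the shape of this analysis concrete, the following is a minimal sketch of the 2 (Time: within) by 2 (Source of Consensus: between) mixed ANOVA using the pingouin library. The dataframe, ratings, and column names (pid, time, source, agreement) are hypothetical placeholders, not the study data.

```python
# Sketch of a 2 (Time: within) x 2 (Source of Consensus: between) mixed
# ANOVA like the one reported for Study 1. All data here are made up.
import pandas as pd
import pingouin as pg

# Long format: one row per participant per time point.
df = pd.DataFrame({
    "pid":       [1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6],
    "source":    ["economists"] * 6 + ["public"] * 6,
    "time":      ["pre", "post"] * 6,
    "agreement": [3, 4, 2, 3, 3, 4, 3, 3, 4, 4, 2, 3],
})

aov = pg.mixed_anova(data=df, dv="agreement", within="time",
                     subject="pid", between="source")
print(aov)  # rows for the source effect, the time effect, and their interaction
```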
We next analyzed the role of trust. Just over half of respon-
dents (55%) stated that when thinking about economic policy
issues they trust the opinions of professional economists (to
varying degrees). Similarly, 67% of respondents agreed that
members of Congress should rely on the opinions of profes-
sional economists when crafting public policy. Importantly,
each form of stated trust in experts was not related to change
in agreement for either issue (all ps > .348), nor was it
predictive of change in agreement in response to (expert or
non-expert) consensus information for either issue (all ps >
.394).
2.3 Discussion
Study 1 replicated the finding of Johnston and Ballard (2016)
that people fail to revise their normative beliefs more to
expert than lay opinion. Moreover, we found no evidence that
trust in the opinion of economists influences such updating
behavior. In Study 2 we turn to the question of why people
behaviorally fail to privilege the opinion of experts over the
public. One possible reason is
that people overestimate how much they know and therefore
undervalue the opinions of experts. People might revise
more to the opinion of experts if they were less confident in
how much they think they know.
The data of Study 1 suggest that people understand the
value of experts in an abstract sense (as a majority of the
participants reported trusting the opinions of professional
economists when making their own economic decisions);
however, this was not reflected in behavior, where they would
be expected to give expert opinion greater weight than the
opinion of non-experts. Here, people may be implicitly
failing to disqualify themselves – and, by extension, their
fellow members of the public – as experts. They may be
aware of the value that experts provide but unaware that they
are conflating the expert’s knowledge with their own. That is
to say, on these economic issues people implicitly consider
themselves to be experts. Participants could understand that
an expert is an expert, but in this case may not believe that the
experts’ specialized knowledge exceeds their own or that of
other members of the public. Exposing the illusion of explana-
tory depth could increase the salience of the difference between
the topical knowledge of ordinary individuals and experts,
and could thus increase normative belief revision in response
to expert opinion to a greater degree than to public opinion.
We conducted a second study to test this prediction.

Table 2: Economic issue statements and corresponding consensus information.
Consensus responses are listed in the order: Strongly Disagree / Disagree / Uncertain / Agree / Strongly Agree.

  Gold Standard: “If the US replaced its discretionary monetary policy regime with a gold standard, defining a ‘dollar’ as a specific number of ounces of gold, the price-stability and employment outcomes would be better for the average American.” Consensus: 66 / 34 / 0 / 0 / 0.

  Immigration: “The average US citizen would be better off if a larger number of highly educated foreign workers were legally allowed to immigrate to the US each year.” Consensus: 0 / 0 / 0 / 46 / 49.

  Medicare/Medicaid: “Long run fiscal sustainability in the US will require cuts in currently promised Medicare and Medicaid benefits and/or tax increases that include higher taxes on households with incomes below $250,000.” Consensus: 0 / 0 / 0 / 35 / 56.

  Taxes: “A cut in federal income tax rates in the US right now would raise taxable income enough so that the annual total tax revenue would be higher within five years than without the cut.” Consensus: 57 / 39 / 4 / 0 / 0.

  Trade With China: “Trade with China makes most Americans better off.” Consensus: 0 / 0 / 0 / 41 / 59.
3 Study 2
People tend to fail to correctly assess how much they really
know about how the world works. Often, we think we can ex-
plain even ordinary phenomena (e.g., how recycling works)
in more detail than we really can. When asked to mechanis-
tically explain how something works in full detail, however,
we become aware of our apparent lack of knowledge, and
often experience humility at our overconfident assessment
of our knowledge (Rozenblit & Keil, 2002). Importantly,
recognition of our lack of knowledge happens without being
provided any external feedback on the explanations provided.
That is, without being told we do not know as much as we
think we do, we realize it entirely by ourselves. Known as
the illusion of explanatory depth, this paradigm reveals the
false beliefs that many of us have regarding our knowledge
of a topic.
It is this paradigm that was implemented in Study 2 in
an attempt to make participants more aware of the discrep-
ancy between the knowledge they possess and that of an
expert in economics.
3.1 Method
3.1.1 Participants
Three hundred and ninety-nine participants were recruited
via Mechanical Turk and were mostly white (77%), male
(56%), had obtained at least some level of post-secondary
education (86%), and were between the ages of 18 to 68 (M =
35.36, SD = 10.51). In addition to the recruitment restrictions
outlined in Study 1, potential recruits were also barred if
they had participated in the previous study. Furthermore, we
limited participation to unique IP addresses such that only
one participant per IP address could complete the study.
3.1.2 Procedure
The procedure of this study expanded upon the procedure of
Study 1. Table 1 provides a brief overview of the procedural
steps of the study.

Table 3: Descriptive statistics and inferential tests of main analyses of Study 1.

  Trade with China (consensus information agreed with the issue):
    Public: Pre-Consensus 3.40 (1.07); Post-Consensus 3.51 (1.15)
    Economists: Pre-Consensus 3.44 (1.03); Post-Consensus 3.61 (1.05)
    Time main effect: F(1, 202) = 15.74, p < .001; Source of Consensus x Time interaction: F(1, 202) < 1, p = .358

  Gold Standard (consensus information disagreed with the issue):
    Public: Pre-Consensus 2.97 (1.02); Post-Consensus 3.10 (1.11)
    Economists: Pre-Consensus 3.02 (1.03); Post-Consensus 3.33 (1.14)
    Time main effect: F(1, 202) = 13.14, p < .001; Source of Consensus x Time interaction: F(1, 202) = 2.25, p = .135

Note. Means and (standard deviations) are shown. Each agreement judgment was made on a 1 (Strongly Disagree) to 5 (Strongly Agree) scale with 3 representing uncertainty.

In this study, participants rated their agree-
ment with a single economic issue twice. Prior to providing
their agreement judgments, each participant completed an
illusion of explanatory depth exercise analogous to that of
Rozenblit and Keil (2002). Here, participants were given a
topic (i.e., the impact that trading with China has on the US
economy). First, participants were asked to rate how well
they thought they understood the topic. This rating was made
on a 1 (little understanding) to 7 (thorough understanding)
scale that participants were provided instructions on how to
use. Second, they were asked to explain in as much detail as
possible how their topic worked (i.e., how trading with China
affects the US economy). Finally, each participant rated their
understanding of the topic again.
Once each participant finished the Writing Task (rate un-
derstanding, generate explanation, re-rate understanding),
they would proceed to provide their agreement with the eco-
nomic statement. Similar to Study 1, participants would
rate their agreement with the economic issue. Then they
would provide this judgment once again, although the second
time featured consensus information from their randomly
assigned source (see Table 2 for the issue statement and its
corresponding consensus). Unlike Study 1, all participants
provided only their judgment for the “Trade With China”
issue. This also meant that each participant’s Writing Task
asked them to rate their understanding of and explain the
impact of trading with China on the US economy.
One key detail was that half of the participants were asked
to make a third agreement judgment with the economic issue
statement. However, this judgment occurred before com-
pleting the Writing Task (this judgment is referred to as the
“Pre-Writing” judgment), as opposed to after, like the other
two agreement judgments. Following the procedure used
by Fernbach et al. (2013), who demonstrated that expos-
ing an illusion of knowledge can reduce position extremism
on political issues (although this has not been consistently
demonstrated; see Voelkel et al., 2018), this additional judg-
ment allowed us to examine whether exposing an illusion
of knowledge would reduce position extremism on economic
issues.
3.2 Results
We first assessed whether exposing an illusion of knowledge
would decrease participants’ position extremity in their pre-
viously held economic beliefs. To this end, we tested whether
the extremity of the Pre-Writing judgment (that half, n = 198,
of the sample provided) was greater than the extremity of the
Pre-Consensus judgment. To conduct this analysis we cre-
ated an index of Polarity, expressed as the absolute
distance of one’s opinion from the “uncertain” response (the
middle of the scale), for the Pre-Writing (M = 0.93, SD = 0.59)
and Pre-Consensus judgments (M = 0.88, SD = 0.62). We
found no significant decrease in Polarity after participants
were asked to generate a mechanistic explanation (t(197) = 1.12,
p = .132).³
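Concretely, the Polarity index is the absolute distance of a 1–5 agreement rating from the scale midpoint of 3; a minimal sketch with made-up ratings:

```python
# Polarity index: absolute distance of a 1-5 agreement rating from the
# "uncertain" midpoint of 3. The ratings below are made up for illustration.
ratings = [1, 2, 3, 4, 5]
polarity = [abs(r - 3) for r in ratings]
print(polarity)  # [2, 1, 0, 1, 2]
```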
Next, we tested the effect of exposing an illusion of ex-
planatory depth on normative belief revision in response
to consensus information. Table 4 contains the descriptive
statistics for each agreement judgment. We found a main
effect of Time, such that people changed their agreement in
response to receiving consensus information regardless of
the source (F(1, 396) = 59.82, p < .001).⁴ We also found
a Source of Consensus by Time interaction (F(1, 396) =
14.12, p < .001). Further analysis revealed that participants
still changed their agreement in response to the opinion of
laypeople (t(206) = 3.11, p = .002, d = 0.22), but exhibited
far greater change in response to the opinion of professional
economists (t(190) = 7.40, p < .001, d = 0.54). Figure 1
contains graphical depictions of this analysis as well as com-
parable analyses for Studies 3–5.
³ Where applicable, the statistical tests in this paper were conducted as
one-tailed tests at the α = .05 significance level.
⁴ The reported analyses collapse across the Pre- and no-Pre-Writing
judgment conditions. These reported effects remain significant if tested
within each of these conditions.
Figure 1: Main analyses graphs for Studies 2–5, demonstrating change in agreement with the economic issue when
consensus information was provided after completion of the Writing Task. In each case, greater agreement with the issue
reflected the opinion of the consensus information. In Studies 2 and 3 only the Trade With China issue was provided. In
Studies 4 and 5 one of five possible economic issues was provided (the corresponding figures collapse across issue). In
Study 3, the Unrelated How writing task asked participants to explain how modern recycling works in a U.S. city. The
Related How writing task asked participants to explain how trading with China affects the U.S. economy. In Study 4, the
Unrelated How writing task asked participants to explain how a helicopter takes flight. The Related How writing task asked
participants to explain how their assigned economic issue worked. The Related Why writing task asked participants to
explain why they held their position on the economic issue. In Study 5, the Control writing task had participants reproduce
a block of text that was displayed as an image. The Related How writing task asked participants to explain how their
assigned economic issue worked. Error bars in each graph represent ±1 standard error of the mean.
Table 4: Descriptive statistics for the agreement judgments of Study 2.

  Economists: Pre-Writing 3.42 (1.01); Pre-Consensus 3.17 (1.06); Post-Consensus 3.50 (1.17)
  Public: Pre-Writing 3.47 (1.01); Pre-Consensus 3.27 (1.05); Post-Consensus 3.38 (1.07)

Note. Means and (standard deviations) are shown. Each agreement judgment was made on a 1 (Strongly Disagree) to 5 (Strongly Agree) scale with 3 representing uncertainty. The Pre-Writing judgment contains only half of the sample (n = 198) while the Pre-Consensus and Post-Consensus judgments contain the entire sample. In this case, only the Trade With China issue was rated and the consensus information agreed with the issue.
3.3 Discussion
In Study 2, we found that, after being asked to explain the
mechanisms of foreign trade, people became far more influ-
enced by the opinions of economists than those of laypeople.
While the participants still adjusted their normative beliefs
to both sources of consensus, they did so to a far greater
extent when presented with economist opinion than with lay
opinion.
We next sought to explore why exposing an illusion of
knowledge led to an increase in receptivity to expert opinion
with a third study. To this end we generated two competing
explanations. The first hypothesis suggests that exposure
made participants aware of how little they know about the
particular economic issue (the effects of foreign trade). As
such, they were more willing to revise their beliefs to be in
line with experts who likely possessed topic-relevant knowl-
edge. The second hypothesis suggests that exposing the il-
lusion of knowing induced a general feeling of ignorance in
participants, and in turn, made them less convinced of their
general expertise in any topic. Thus, they would be more
influenced by the opinions of experts than by the opinions of
their peers. If the second explanation is true (an induction
of ignorance), failing to explain any issue would produce a
similar willingness to revise their normative beliefs. If the
first explanation is true (lack of topic-relevant knowledge),
however, we should observe no such effect after failing to ex-
plain an irrelevant issue (e.g., how modern recycling works).
The next study aimed to replicate the findings of Study 2
while testing these two competing explanations.
4 Study 3
Study 3 attempted to replicate the previous study’s findings
and to further test whether the content of the to-be-explained
material in the explanation paradigm mattered. The question
was whether it is necessary to make a participant experi-
ence a feeling of ignorance on a specific topic (in this case
an economics topic), or whether failing to explain
a complicated procedure on any topic is enough for partic-
ipants to privilege the opinion of experts. To do so, we
added a writing condition where participants would explain
the recycling process of a modern U.S. city rather than the
mechanisms of foreign trade. We believed that recycling is
a topic that would be familiar enough to subjects to appear
superficially simple while being complex in nature. As such,
we deemed it a likely candidate to produce an overestimation
of knowledge.⁵
4.1 Method
4.1.1 Participants
We recruited 401 participants via Mechanical Turk with the
same restrictions previously used in Study 2. Participants
must not have participated in either Study 1 or 2 to enter this
study. Respondents were mostly white (77%), male (55%),
had obtained at least some level of post-secondary education
(88%), and were between the ages of 18 to 77 (M = 35.96,
SD = 11.31).
4.1.2 Procedure
The procedure followed that of Study 2 except for two
changes. (Table 1 provides a brief overview of the proce-
dural steps of all studies.) First, every participant (rather than
half, as in Study 2) made a Pre-Writing agreement judgment.
That is, each participant made three agreement judgments to-
tal, one before the Writing Task and two after. Second, in
an unrelated-content writing condition, half of the partici-
pants rated their understanding of recycling and explained how
it works in a modern US city, instead of writing about the
impact of trading with China on the US economy. So, not
only were participants randomly assigned as to which Source
of Consensus they would receive (economists or the public),
they were also randomly assigned, orthogonally, a Writing
Task (related or unrelated).
To summarize the procedure: participants first rated their
agreement with the Trade with China economic issue (Pre-
Writing Judgment). They then completed the Writing Task
regarding a related or unrelated topic. Then they pro-
vided their agreement with the economic issue again (Pre-
Consensus Judgment). Finally, they provided their agree-
ment rating for a third time, except this time they did so
with the consensus information from their randomly assigned
source present (Post-Consensus Judgment).
⁵ We also thought that recycling would be relatively unrelated to the
expertise of economists. However, an anonymous reviewer pointed out
that expert economists could hold knowledgeable viewpoints on this issue.
Table 5: Descriptive statistics for the main judgments of Study 3 by cell.

  Economists, Unrelated How: Pre-Writing 3.52 (0.97); Pre-Consensus 3.47 (0.94); Post-Consensus 3.70 (0.91)
  Economists, Related How: Pre-Writing 3.41 (0.97); Pre-Consensus 3.37 (1.05); Post-Consensus 3.71 (1.02)
  Public, Unrelated How: Pre-Writing 3.39 (1.03); Pre-Consensus 3.44 (0.96); Post-Consensus 3.46 (1.02)
  Public, Related How: Pre-Writing 3.50 (1.05); Pre-Consensus 3.38 (1.00); Post-Consensus 3.49 (1.08)

Note. Means and (standard deviations) are shown. Each agreement judgment was made on a 1 (Strongly Disagree) to 5 (Strongly Agree) scale with 3 representing uncertainty. The Unrelated How writing task asked participants to explain how modern recycling works in a U.S. city. The Related How writing task asked participants to explain how trading with China affects the U.S. economy.
4.2 Results
We first tested whether each Writing Task reduced position
extremity. To do so we again created a Polarity index to
measure the average degree of distance from uncertainty in
the Pre-Writing (M = 0.92, SD = 0.61) and Pre-Consensus
judgments (M = 0.87, SD = 0.63). We found a significant
reduction in Polarity after completing the writing task (F(1,
398) = 6.33, p = .012). Further, the Writing Task by Polarity
interaction was not significant (F(1, 398) < 1). It thus seems
that both writing topics (related and unrelated to trade with
China) had a similar effect on reducing position extremity.
We then tested whether people revised their normative be-
liefs differentially, dependent on both the source of the con-
sensus information and what topic they explained. That is,
we examined if people gave additional weight to the opinion
of experts when put through the explanation paradigm, as in
Study 2, and whether this paradigm was required to be topic-
relevant or not. We did not find a significant Time by Writing
Task by Source of Consensus interaction (F(1, 397) < 1; see
Figure 1), indicating that the pattern of agreement change to
the source of the information was not significantly different
across explanations. In other words, regardless of whether
participants explained how trading with China impacts the
US economy or how recycling in a modern US city works,
their subsequent normative belief revision to consensus in-
formation was similar. As such, we collapsed across these
explanation conditions to provide a higher-powered analysis
of whether participants revised their normative beliefs more
to experts than to laypeople.
Consistent with Study 2, we found a significant Time by
Source of Consensus interaction, such that people changed
their agreement more in response to expert opinion than to
public opinion (F(1, 397) = 12.96, p < .001; see Table 5 for
the descriptive statistics of the agreement judgments). Fur-
ther analyses revealed that participants significantly changed
their agreement to expert opinion (t(203) = 6.97, p < .001,
d = 0.98), while they did not do so to public opinion (t(196)
= 1.65, p = .102). After having an illusion of knowledge
exposed, participants revised their normative beliefs on an
economic issue to a far greater extent when presented with
the opinions of professional economists than with the (same)
opinions of the general public.
4.3 Discussion
This study replicated the finding that after attempting to
explain an economic issue mechanistically, people revise
their opinion of that economic issue more when they re-
ceive consensus information from economists (experts) than
when they receive the consensus of the general public. We
also tested competing hypotheses targeting whether the illu-
sion of knowledge exposed needs to be topic-relevant or not.
We found that the effect of explaining on normative belief
revision occurred regardless of whether the written expla-
nation was about the exact issue (trading with China) or an
unrelated issue (recycling in a U.S. city). We also found
that position extremity was decreased after explaining how
something works, regardless of the topic of that explanation,
consistent with the findings of Fernbach et al. (2013).
One interpretation of the results of Study 3 is that, when
individuals are presented with the opinions of experts and
given the chance to update their normative beliefs, they do
not credit the experts with possessing privileged information
(or at least possessing information that the general public
does not). Instead, they may believe that since the experts
possess that knowledge, they do too. When made aware of
their lack of both topic-relevant and -irrelevant knowledge,
people change their minds to a greater extent to expert than
to public opinion. So, we suggest that exposing an illusion
of knowledge shifts an individual’s mental model of what
knowledge an expert possesses relative to themselves, lead-
ing them to revise their normative beliefs more in response
to consensus from experts than from random members of
the general public. We refer to this as inducing a feeling
of ignorance. People may ordinarily maintain a feeling that
they are generally more knowledgeable than they truly are on
all topics, which the exposure to the explanation paradigm
undermines by making their ignorance directly salient to
them.
A shortcoming of the past two studies employing the writ-
ing paradigm is a lack of a true control group without in-
duction of a feeling of ignorance. Based on Studies 2 and
3, we cannot claim that exposing ignorance led to greater
belief revision than a group without the feeling of ignorance
induced. To partially address this limitation, we conducted
a cross-study analysis to test whether there was significantly
more adjustment to experts than laypeople in the second and
third studies compared to the first, treating the first study as
a control condition. We found a significant Time by Source
of Consensus by Study interaction, F(2, 997) = 4.99, p =
.007. Further probing of this interaction revealed that Stud-
ies 2 and 3 each featured significantly greater agreement
change in response to expert opinion than to public opin-
ion in comparison to Study 1. Moreover, Studies 2 and 3
were not significantly different from each other in this man-
ner.⁶ However, as this test was an internal meta-analysis of
non-pre-registered studies, caution should be applied when
interpreting this result (see Vosgerau et al., 2019). As a re-
sult, we introduced control conditions for the following two
studies.

⁶ The three-way interaction was further probed using multiple compar-
isons. Comparing Study 1 and Study 2, we found a significant Time by
Condition by Study interaction (F(1, 598) = 8.78, p = .003). Comparing
Study 1 and Study 3, we found a significant Time by Condition by Study
interaction (F(1, 601) = 8.15, p = .004). Comparing Study 2 and Study 3,
we did not observe a significant Time by Condition by Study interaction
(F(1, 795) = 0.07, p = .799). Together, these indicate that the patterns of
belief revision to expert opinion found in Studies 2 and 3, while not sig-
nificantly different from each other, were both individually different from
Study 1. Thus, the effect in question appears robust when compared to a
pseudo-control condition.
Another valid criticism of the studies conducted so far is
the lack of variability in economic issue stimuli. We are
unable to rule out the possibility that our results depend
on something idiosyncratic to this specific issue, trade with
China. To make a broader claim we need to demonstrate
the effect across multiple economic issues. In addition to a
control condition, the next two studies attempt to address this
problem of stimulus sampling (Wells & Windschitl, 1999).
5 Study 4
To address the concern of stimulus sampling, this study at-
tempted to replicate Study 3’s findings across several eco-
nomic issues. The issues selected were the five used in the
original work by Johnston and Ballard (2016). Further, because
the “unrelated” explanation condition in the previous study
(how recycling works in a US city) may not have been unrelated
enough, that is, it is an issue on which a professional economist
could have a knowledgeable opinion, we changed the topic to
be explained. The new unrelated
writing task would feature a topic used in early explanatory
depth research: how a helicopter takes flight (Keil, 2003).
Also, in an attempt to address the issue of having no control
condition in the previous two studies, we included a condi-
tion we hypothesized would work as a control: explaining
why you hold the belief you do about the issue, rather than
how it works. This method is based on the condition imple-
mented by Fernbach et al. (2013), who used it to demonstrate
that explaining how rather than why leads to a decrease in
political extremism. If we find that economic extremism is
reduced by how but not why, then this could represent a valid
control condition. If explaining why also reduces position
extremism, then it is very unlikely it would produce a belief
revision effect discrepant from the how conditions. In sum,
participants would be writing about one of: how their one
economic issue works, why they hold the opinion they do of
that economic issue, or how a helicopter takes flight.
5.1 Method
5.1.1 Participants
We recruited 1000 participants via Mechanical Turk for this
study. Participants must not have completed any of the pre-
vious studies to participate. In addition to the recruitment
restrictions applied for Studies 2 and 3, we also blocked re-
sponding from suspicious geolocations associated with bot
farms via the Turk Prime feature (Litman, Robinson & Ab-
berbock, 2017). Respondents were mostly white (70%), of
evenly mixed gender (49% male), had obtained at least some
level of post-secondary education (71%), and were between
the ages of 18 to 77 (M = 36.92, SD = 11.86). Prior to anal-
ysis we excluded 12 participants for either failing to write
anything and/or failing to respond to any of the three agree-
ment judgments. This left 284 participants in the related how
condition, 376 participants in the unrelated how condition,
and 329 in the related why condition.
5.1.2 Procedure
Table 1 provides a brief overview of the procedural steps
of the study. This study’s procedure was nearly identical to
Study 3, as each participant provided an agreement judg-
ment, completed the Writing Task, and then provided their
agreement two more times. Two changes were made to the
Writing Task. First, a Related Why condition was added.
In this condition participants explained why they held the
position on the economic issue that they did (e.g., why they
agreed that trading with China makes most Americans bet-
ter off). Second, the Unrelated How writing condition had
its topic changed from how recycling works in a modern
US city to how a helicopter takes flight. So, participants
in this experiment would either rate their understanding of
and explain how their economic issue worked, why they held
the position on the economic issue that they did, or how an
unrelated issue works.
The pool of possible economic statements was expanded
from 1 to 5, but each participant was randomly assigned to
respond to only a single issue (three times). As the opin-
ions on each economic issue were provided by professional
economists, the diversity and levels of agreement (and dis-
agreement) for each issue are unique (Table 2). However, for
the purposes of analyses, each issue was coded such that a
higher score reflected greater agreement with the consensus
information.

Table 6: Descriptive statistics for the main judgments of Study 4 by cell.

  Economists, Related How: Pre-Writing 3.21 (1.13); Pre-Consensus 3.13 (1.14); Post-Consensus 3.43 (1.25)
  Economists, Related Why: Pre-Writing 3.17 (1.04); Pre-Consensus 3.10 (1.00); Post-Consensus 3.41 (1.14)
  Economists, Unrelated How: Pre-Writing 3.28 (1.01); Pre-Consensus 3.27 (0.97); Post-Consensus 3.56 (1.05)
  Public, Related How: Pre-Writing 3.30 (1.00); Pre-Consensus 3.18 (1.01); Post-Consensus 3.32 (1.06)
  Public, Related Why: Pre-Writing 3.13 (1.05); Pre-Consensus 3.15 (1.03); Post-Consensus 3.23 (1.09)
  Public, Unrelated How: Pre-Writing 3.10 (1.09); Pre-Consensus 3.11 (1.08); Post-Consensus 3.24 (1.12)

Note. Means and (standard deviations) are shown. Each agreement judgment was made on a 1 (Strongly Disagree) to 5 (Strongly Agree) scale with 3 representing uncertainty. The Unrelated How writing task asked participants to explain how a helicopter takes flight. The Related How writing task asked participants to explain how their assigned economic issue worked. The Related Why writing task asked participants to explain why they held their position on the economic issue.
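The coding rule described just before Table 6 can be sketched as follows. Per Table 2, the consensus disagreed with the Gold Standard and Taxes statements, so ratings on those issues are reverse-scored; the issue labels themselves are hypothetical.

```python
# Recode 1-5 agreement ratings so that a higher score always reflects
# greater agreement with the consensus information. Issues where the
# consensus disagreed with the statement (per Table 2) are reverse-scored.
CONSENSUS_DISAGREES = {"gold_standard", "taxes"}  # hypothetical labels

def toward_consensus(issue: str, rating: int) -> int:
    return 6 - rating if issue in CONSENSUS_DISAGREES else rating

print(toward_consensus("taxes", 5))             # 1: agreement, far from consensus
print(toward_consensus("trade_with_china", 5))  # 5: agreement matching consensus
```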
5.2 Results
We first tested whether explaining how but not why reduced
position extremity. In other words, we examined whether
Polarity was reduced from the Pre-Writing to Pre-Consensus
agreement judgments and whether this varied as a function
of Writing Task (why vs. how). We found a main effect of
Polarity such that position extremity was reduced from
the Pre-Writing judgment (M = 0.84, SD = 0.66) to the Pre-
Consensus judgment (M = 0.80, SD = 0.68). Importantly, we
did not find a significant Writing Task by Polarity interaction,
F(2, 988) < 1, suggesting that the observed reduction in
position extremity did not differ across the various Writing
Task conditions.⁷ As a result, one should not expect there
to be a difference in belief revision based on these writing
conditions.
Next, we tested whether generating a written explanation
would lead to greater revision to the opinion of experts com-
pared to laypeople. Table 6 contains the descriptive statistics
pertinent to this analysis. We found a Source of Consensus
by Time interaction (F(1, 985) = 12.69, p < .001)⁸, suggest-
ing that after generating a written explanation, participants
revised more to the opinion of experts than the opinion of
laypeople. Consistent with the previously described Polarity
results, we found no evidence that the observed effect var-
ied as a function of Writing Task. The three-way Source of
Consensus by Time by Writing Task interaction was not sig-
nificant (F(2, 985) < 1; Figure 1). This result demonstrates
that after writing, regardless of what participants wrote, they
proceeded to revise their normative beliefs to be more con-
sistent with the opinions of experts than the opinions of
laypeople.

⁷ The Writing Task by Polarity by Economic Issue three-way interaction
was also not significant (F(8, 988) = 1.35, p = .215). Consequently, the
results reported in this paragraph are collapsed across Economic Issue.
⁸ This interaction did not vary as a function of Economic Issue, as the
Source of Consensus by Time by Economic Issue three-way interaction was
not significant, F(4, 981) < 1.
The lack of difference between the originally planned writ-
ten control condition (writing about why they hold their be-
lief) and the written experimental conditions (writing about
how the economic issue works or how a helicopter takes
flight) is potentially problematic for the account we are pre-
senting. So, we decided to further explore whether the con-
dition we intended to serve as a control condition truly did.
With the benefit of hindsight, we realized that when queried
for reasons why they hold a position on an economic
issue, people may start attempting to explain how it works in-
stead. For complex and technical economic issues like the
ones presented to participants here, it may be the case that a
consideration of “why” will tend to reduce to an explanation
of “how”. For instance, it would be difficult to answer the
question “why do you believe a ship floats
on the water” without necessarily appealing to its underlying
mechanisms. As a result, this may be why the written why
condition reduced position extremity: participants
were writing about how the issue works in their natural explana-
tion of why they believed what they did. To explore this
possibility we had two independent, hypothesis-blind coders
read each participant’s explanation and categorize it as:
an attempt to explain how something works, an attempt to
explain why they believed something, or a response that
stated “I don’t know” or was nonsense. The results of the
coding analysis revealed that the distribution of responses
between the how and why
writing conditions was nearly identical. That is, for one
coder, 38% of participants in the related how condition ex-
plained “why” they believed what they believed, and 48% of
them explained “how” it worked. This is compared to the
42% who wrote “why” they believed it and 43% who wrote
“how” it worked in the related why condition.⁹ This is direct
evidence that participants found it difficult to specifically
write about how something worked or why they believed it.
In addition to the main effect of Polarity (position extrem-
ity) reduction, the results of the independent coders suggest
that the substance of what was being written about in the
Related How and Related Why Writing Task conditions was
essentially the same thing. Therefore, we believe it is ap-
propriate to treat the why condition in this study as a further
experimental condition.
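The inter-coder reliability reported in footnote 9 (κ = .42) is Cohen’s kappa, which can be computed with a standard implementation; the label sequences below are hypothetical stand-ins for the actual codings.

```python
# Cohen's kappa for the two coders' explanation categories. The label
# sequences are hypothetical stand-ins for the real codings.
from sklearn.metrics import cohen_kappa_score

coder1 = ["how", "why", "why", "dont_know", "how", "why"]
coder2 = ["how", "why", "how", "dont_know", "why", "why"]
print(round(cohen_kappa_score(coder1, coder2), 2))
```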
5.3 Discussion
Across five different economic issues, each with a unique
level of consensus, we replicated the finding that puncturing
an illusion of knowledge (inducing a feeling of ignorance)
leads to greater normative belief revision in response to the
opinion of experts than the opinion of laypeople. In addition,
we found further evidence for the generality of the effect
that inducing a feeling of ignorance has on normative belief
updating, as even generating a written explanation about an
irrelevant topic (i.e., how a helicopter takes flight) led to the
downstream revision effect.
While Study 4 helped address the concern of stimulus
sampling, the lack of a true control condition remained an
issue. As a result, we decided to run one more study that
would contain a dedicated control condition.
6 Study 5
6.1 Method
6.1.1 Participants
We recruited 653 participants via Mechanical Turk for this
study. In addition to the recruitment restrictions imple-
mented in Study 4, potential participants could not have
previously completed any of Studies 1–4. Respondents were
mostly white (75%), an even mix of gender (51% men),
had obtained at least some level of post-secondary education
(67%), and were between the ages of 18 to 75 (M = 36.33,
SD = 11.06). Prior to analysis we removed all participants
who wrote nothing (1% of the sample). This left 246 partic-
ipants in the experimental condition and 403 participants in
the control condition.¹⁰

⁹ For the other coder, while the percentage results are slightly different,
the distribution again remains nearly identical across the related how and
related why conditions. For example, the other coder’s numbers were 51%
explaining why and 37% explaining how in the related how condition and
51% explaining why and 36% explaining how in the related why condition.
The ratings of the two coders were moderately reliable (κ = .42).
6.1.2 Procedure
This study’s procedure was nearly identical to that of Study 4, as each
participant provided an agreement judgment, completed the
Writing Task, and then provided their agreement two more
times (see Table 1 for a brief overview of the procedural steps
of this study). The only modifications to the procedure were
to the Writing Task, which was reduced to two
conditions. The Related How condition, in which participants
explained how their economic issue worked, remained
unchanged. The Unrelated How and Related Why conditions
were removed and replaced with a Control condition. In this
condition, participants would copy the text from a descriptive
passage that was in image form (to prevent copying and pasting).
The length of the descriptive passage was approximately
equivalent to the amount of writing entered for an average
written explanation in the previous studies.
6.2 Results
We first tested whether explaining how an economic issue
worked led to a greater reduction in position extremity across
the Pre-Writing to Pre-Consensus judgments, compared to
writing out text displayed in an image. We did not find a
significant reduction in Polarity from the Pre-Writing (M =
0.84, SD = 0.67) to Pre-Consensus judgments (M = 0.84,
SD = 0.70; F(1, 648) < 1). Furthermore, we did not find a
significant Writing Task by Polarity interaction (F(1, 648)
= 1.82, p = .177). Thus the experimental condition did
not exhibit a greater reduction in position extremity across
judgments compared to the control condition (as overall, no
reduction in position extremity was observed).
We then tested whether participants in the control con-
dition changed their agreement more to expert versus lay
opinion and whether this difference was distinguishable from
the explanation condition. Table 7 contains the descriptive
statistics pertinent to this analysis. The Time by Source of
Consensus by Writing Task interaction was not significant
(F(1, 646) < 1; see Figure 1).11 As indicated by the position
extremity results, this result suggests that those in the expla-
nation condition did not revise significantly more to experts
versus the public compared to those in the control condition.
However, we found a main effect of Source of Consensus such that people changed their agreement with the economic statement more to expert opinion than public opinion (F(1, 646) = 4.60, p = .032).12

11 This result does not vary as a function of Economic Issue, as the four-way interaction was not significant, F(4, 630) < 1.

12 This result does not vary as a function of Economic Issue, as the three-way Time by Source of Consensus by Economic Issue interaction was not significant (F(4, 630) = 1.01, p = .400).
Table 7: Descriptive statistics for the main judgments of Study 5 by cell.
Source of Consensus Writing Task Pre-Writing Judgment Pre-Consensus Judgment Post-Consensus Judgment
Economists Control 3.12 (1.06) 3.15 (1.07) 3.43 (1.29)
Related How 3.28 (1.05) 3.28 (1.01) 3.60 (1.08)
Public Control 3.16 (1.08) 3.08 (1.14) 3.24 (1.17)
Related How 3.17 (1.06) 3.11 (1.07) 3.26 (1.09)
Note. Means (with standard deviations in parentheses) are provided in the table. Each agreement judgment was made on a 1 (Strongly Disagree) to 5 (Strongly Agree) scale, with 3 representing uncertainty. The Control writing task had participants reproduce a block of text that was displayed as an image. The Related How writing task asked participants to explain how their assigned economic issue worked.
In an attempt to determine whether the control condition
in this study replicated the results of Study 1 and of John-
ston and Ballard (2016), we examined whether those in the
control condition exhibited greater change in agreement in
response to expert versus lay consensus after having com-
pleted the copying-text Writing Task. Consistent with these
previous findings, the Time by Source of Consensus interaction was not significant for this group (F(1, 402) = 1.75, p = .187).
This result suggests that participants in the control condition
did not privilege the opinion of experts over laypeople when
provided the opportunity for normative belief revision.
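As a concrete illustration (again with hypothetical column names, not our actual analysis script), note that with a single within-subject factor, this Time by Source of Consensus interaction is equivalent to comparing agreement-change scores between the two source conditions:

```python
import pandas as pd
from scipy import stats

def source_by_time_interaction(control: pd.DataFrame):
    """Test whether control participants moved more toward expert than lay
    consensus. `control` is assumed to have one row per participant, with
    'pre' and 'post' agreement ratings (1-5) and a 'source' column coded
    'economists' or 'public'. With one within factor (Time), this
    independent-samples test on the change score is equivalent to the
    Time x Source of Consensus interaction."""
    change = control["post"] - control["pre"]
    experts = change[control["source"] == "economists"]
    public = change[control["source"] == "public"]
    # A non-significant result mirrors the null interaction reported above.
    return stats.ttest_ind(experts, public)
```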
6.3 Discussion
Contrary to Studies 2–4, Study 5 failed to replicate the effect whereby, after generating a mechanistic explanation of how something works, people revise their normative beliefs more to expert consensus than to public consensus. However, when looking only at the control condition, we replicated the finding of Study 1 and of Johnston and Ballard (2016) that people fail to privilege the opinion of experts over the opinion of laypeople. On its face, Study 5 represents a case of the experimental manipulation failing to work. Nevertheless, we aimed to provide the most comprehensive test of our claim that exposing an illusion of knowledge leads to greater normative belief revision to experts than when no illusion of knowledge is punctured. To do this we compiled all the data from our experiments and computed the main analyses of interest.
7 Internal Meta-analysis
The compilation of Studies 1–5 produced a dataset that contained responses from 2,862 unique participants. For the purposes of analysis, each participant was assigned to either the Experimental (n = 2,050) or Control (n = 812) condition.
The Experimental condition consisted of every participant in Studies 2–4, all of whom completed an experimental writing condition. This meant that participants who explained how their economic issue worked (Related How), how recycling in a modern US city worked (Unrelated How), how a helicopter takes flight (Unrelated How), or why they held their stance on the economic issue (Related Why) were compiled into the same group. The Experimental condition also included the participants from Study 5 who completed the Related How Writing Task. The Control condition comprised the participants from Study 1 and those in the copying-text condition of Study 5. This meant that the Control condition in this dataset represented participants in either an "active" (Study 5) or "passive" (Study 1) control condition. Table 8 shows the main results.
With this compiled dataset we tested our main hypoth-
esis: whether exposing an illusion of knowledge leads to
greater normative belief revision in response to expert ver-
sus public consensus than when an illusion of knowledge is
not punctured. We conducted this analysis first by looking
at respondents who received the Trade With China issue, but
the results reported below are robust when accounting for all
issues.
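To make the logic of this pooled test concrete, the following is a minimal sketch under hypothetical column names (an illustration, not our original analysis code). Because Time is the only within-subject factor, the three-way interaction can equivalently be tested as a two-way interaction on agreement-change scores:

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

def pooled_interaction_test(studies: list[pd.DataFrame]) -> pd.DataFrame:
    """Pool per-study frames and test the key three-way interaction.

    Assumed (hypothetical) columns: 'pre' and 'post' = 1-5 agreement
    judgments, 'source' = 'economists'/'public', and 'condition' =
    'experimental'/'control'. Because Time (pre vs. post) is the only
    within-subject factor, the Time x Source x Writing Task interaction
    reduces to the Source x Condition interaction on the change score."""
    pooled = pd.concat(studies, ignore_index=True)
    pooled["change"] = pooled["post"] - pooled["pre"]
    model = smf.ols("change ~ C(source) * C(condition)", data=pooled).fit()
    return anova_lm(model, typ=2)  # Type II ANOVA table
```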
We found a significant Time by Source of Consensus by Writing Task interaction (F(1, 1335) = 9.45, p = .002; see Figure 2).13 14 To unpack this interaction we tested whether there was greater agreement change when provided expert consensus compared to public consensus within each Writing Task condition (Control and Experimental). When examining the Control condition, we did not find a significant Source of Consensus by Time interaction (F(1, 810) < 1), demonstrating that these participants did not change their agreement more to the consensus of experts than to the consensus of laypeople. When examining the Experimental condition, we found a significant Source of Consensus by Time interaction (F(1, 1052) = 34.74, p < .001). Further analyses revealed that after having an illusion of knowledge exposed, participants changed their agreement in response to the opinion of laypeople (t(527) = 5.62, p < .001, d = 0.15), but changed far more in response to the opinion of experts (t(525) = 11.78, p < .001, d = 0.51). Collectively, these results support the conclusion that, in the absence of any manipulation exposing gaps in knowledge, people do not revise their normative beliefs more to expert opinion than to lay opinion. However, when an illusion of knowledge is exposed, people revise far more to the experts.
13 When analyzing all issues the result is highly similar (F(1, 2857) = 10.01, p = .002).

14 When conducting this test combining the data only from Studies 4 and 5, the result is non-significant (F(1, 1650) = 0.41, p = .522). However, the results were in the expected direction, as the Time by Source of Consensus interaction for the Control condition was not significant (F(1, 402) = 1.75, p = .187), while it was significant for the Experimental condition (F(1, 1248) = 15.56, p < .001).
Figure 2: Main analyses unpacking the significant Source of Consensus by Time by Writing Task three-way interaction for the compiled dataset containing Studies 1–5. The figure demonstrates that after having an illusion of knowledge exposed (Experimental condition, n = 2,050), individuals change their agreement in accordance with consensus information to a greater extent when that consensus information is said to have come from professional economists rather than from members of the public. When an illusion of knowledge is not exposed (Control condition, n = 812), people do not revise more to experts than to members of the public. Error bars represent ±1 standard error of the mean.
Table 8: Descriptive statistics of the main analysis for the
compiled data set.
Source of
Consensus
Writing
Task
Pre-Consensus
Judgment
Post-Consensus
Judgment
Economists Control 3.17 (1.07) 3.37 (1.22)
Experimental 3.24 (1.03) 3.54 (1.10)
Public Control 3.15 (1.09) 3.36 (1.14)
Experimental 3.21 (1.04) 3.33 (1.08)
Note. Means (with standard deviations in parentheses) are provided in the table. Each agreement judgment was made on a 1 (Strongly Disagree) to 5 (Strongly Agree) scale, with 3 representing uncertainty. Consensus information always reflected agreement (and thus higher scores), so higher scores on the Post-Consensus judgment reflect greater change in agreement toward the consensus information.
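The paired follow-up comparisons reported above can be sketched as follows. The column names are hypothetical, and the effect size shown is the mean change divided by the standard deviation of the change scores (one common paired-samples convention for Cohen's d), which may differ from the exact formula used in our analyses:

```python
import pandas as pd
from scipy import stats

def paired_change_test(judgments: pd.DataFrame):
    """Paired t-test of Pre- vs. Post-Consensus agreement, plus an effect
    size. `judgments` is assumed to have one row per participant, with
    'pre' and 'post' columns holding 1-5 agreement ratings.

    d is the mean change divided by the SD of the change scores, a
    common paired-samples convention (sometimes written d_z)."""
    diff = judgments["post"] - judgments["pre"]
    t, p = stats.ttest_rel(judgments["post"], judgments["pre"])
    d = diff.mean() / diff.std(ddof=1)
    return t, p, d
```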
8 General Discussion
The present research focused on how people revise their nor-
mative beliefs in response to the opinions of experts (profes-
sional economists) compared to the opinions of the general
public. Study 1 replicated the finding that people adjust their
normative beliefs in response to consensus information but
do not adjust more to economists’ opinion than lay opinion.
Studies 2 and 3 showed that when an illusion of explanatory
depth is exposed, people revise their normative beliefs far
more in response to learning the opinion of experts. In addi-
tion, Study 3 found that exposing the illusion of explanatory
depth is not topic-bound and that its exposure may induce
a general feeling of ignorance that leads to the downstream
effect of normative belief revision. Study 4 generalized the
effect of the writing manipulation across five different economic issues, each with its own unique level of consensus,
and provided further evidence that it is a general feeling of
ignorance (rather than awareness of a lack of topic-relevant
knowledge) that creates the revision effect. Finally, Study 5
featured a control condition that also replicated the main find-
ing of Johnston and Ballard (2016) and Study 1. Collapsing across all studies provides strong evidence for the contention that one reason people do not privilege the opinion of experts is that they think that they, and by extension their fellow members of the public, know more than they really do.
Given the vast complexity of the world, it is impossible for any individual to know absolutely everything. Moreover, compared to what they could know, a given individual knows nearly nothing. Individuals must rely on the knowledge of others if they want to obtain and maintain an accurate model of the world. Through a web of epistemic dependence, people store their knowledge of the world in others (Hardwig, 1985; Wagenknecht, 2015). One way individuals achieve
this is through transactive memory (Wegner, 1987; Wegner
et al., 1991), whereby they encode into memory not what
the exact details of a phenomenon are, but rather markers
for who is likely to hold that information. However, indi-
viduals can confuse knowing where that information might be stored with actually understanding the information (Sloman & Rabb, 2016). This perhaps leads to an illusion of
explanatory depth in which people believe they can explain
phenomena to a far greater extent than they truly can (Rabb et
al., 2019). Our work is consistent with this model of human
knowledge. If individuals believe they possess the knowl-
edge of experts, there is little reason to update their beliefs
more in response to experts than to the public. They may
implicitly be asking themselves, “What does an expert know
that I do not?” Thus, while people do revise their beliefs
to consensus information somewhat, their updating behavior
suggests they fail to discriminate between experts and
random members of the public (Coppock, 2018; Johnston &
Ballard, 2016).
Our findings suggest that confronting a failure to generate a coherent explanation of a phenomenon leads people to become aware that they are mistaking their markers of knowledge for actual knowledge. When then provided information from more valid (experts) and less valid (general public) sources of knowledge, people update their beliefs more toward the valid sources. We found this to occur even when the explanation failure concerned a topic unrelated to the topic of the subsequent belief-revision task. One question that arises from this is why people revise more in response to experts (and not just to any given opinion). We have generated two possible explanations for the agreement-updating behavior following the induction of a feeling of ignorance.
One possibility is that in this state, a person may ignore the
information presented from a source (rather than contrast
what they know versus what the source is saying), and in-
stead simply update toward those who more closely match
their markers for who should hold that sort of knowledge.
This is broadly consistent with evidence that suggests peo-
ple are cognitive misers and use simple heuristics to avoid
resource-intensive reflective processes (Dawes, 1976; Evans
& Stanovich, 2013; Gilovich et al., 2002; Stanovich, 2009).
Another possibility is that people flexibly integrate what
knowledge is being presented with who is presenting it. An
individual may not willingly update their beliefs in response
to an expert (the who) whose opinion (the what) is drasti-
cally different from the individual’s superficial knowledge
of the topic. This integration of both types of information
is consistent with research demonstrating that humans are
“good Bayesians” in a variety of domains (e.g., argumenta-
tion, Harris et al., 2016; probability judgment, Krynski &
Tenenbaum, 2007; Turpin et al., 2020). These two contrast-
ing accounts are good candidates for future research.
Our work has implications for the behavioral conse-
quences of overestimating one’s knowledge. Much recent
research has provided timely examples of potentially insidi-
ous effects. For example, extreme opposition to genetically
modified foods has been linked to an increase in perceived
understanding and a decrease in objective knowledge about
science (Fernbach et al., 2019). In addition, people who occupy extreme (as opposed to moderate) positions on both the political left and right experience more certainty about their domain-specific knowledge of an event, independent of their actual knowledge of it (which, in the case of the 2016 European Union refugee crisis, was no greater than that of moderates; van Prooijen et al., 2017). Reports of knowing as much as or more than doctors and scientists about the causes of autism are most common among those with the lowest levels of actual knowledge about those causes (Motta et al., 2018). While exposing an illusion of explanatory depth has been demonstrated to reduce position extremism (Fernbach et al., 2013), our results suggest that, in addition to lowering their perceived understanding, people may also become more willing to change their minds when presented with information from sources they deem valid. However, we are hesitant to claim generalizability for our findings, as we have so far presented evidence only within the domain of economics.
If wisdom comes with recognizing the limits of one’s
knowledge, and the privileging of expert opinion indicates
that one does recognize these limits, then the results of these
studies indicate that experiencing doubt can indeed make
us wiser. The realization that we know much less than we
thought seems to trigger a change in behavior that causes individuals to weight the opinion of experts over that of laypeople. It seems that without this experience of self-doubt, many of us too often resemble the self-certain "fools and fanatics" lamented in the remark attributed to Bertrand Russell.
References
Bromme, R., Kienhues, D., & Porsch, T. (2010). Who knows what and who can we believe? Epistemological beliefs are beliefs about knowledge (mostly) to be attained from others. In L. D. Bendixen & F. C. Feucht (Eds.), Personal epistemology in the classroom: Theory, research, and implications for practice (pp. 163–193). Cambridge University Press. https://doi.org/10.1017/CBO9780511691904.006
Coppock, A. (2018). Generalizing from survey studies conducted on Mechanical Turk: A replication approach. Political Science Research and Methods, 1–16. https://doi.org/10.1017/psrm.2018.10
Dawes, R. M. (1976). Shallow psychology. In J. S. Carroll
& J. W. Payne (Eds.), Cognition and social behavior (pp.
3–11). Hillsdale: Erlbaum.
Dunning, D., Johnson, K., Ehrlinger, J., & Kruger, J. (2003).
Why people fail to recognize their own incompetence.
Current Directions in Psychological Science,12(3), 83–
87. https://doi.org/10.1111/1467-8721.01235
Evans, J. S. B. T., & Stanovich, K. E. (2013). Dual-process theories of higher cognition: Advancing the debate. Perspectives on Psychological Science, 8(3), 223–241. https://doi.org/10.1177/1745691612460685
Fernbach, P. M., Light, N., Scott, S. E., Inbar, Y., & Rozin,
P. (2019). Extreme opponents of genetically modified
foods know the least but think they know the most. Nature
Human Behavior,3(3), 251–256. https://doi.org/10.1038/
s41562-018-0520-3
Fernbach, P. M., Rogers, T., Fox, C. R., & Sloman, S. A.
(2013). Political extremism is supported by an illusion of
understanding. Psychological Science,24(6), 939–946.
https://doi.org/10.1177/0956797612464058.
Fernbach, P. M., Sloman, S. A., Louis, R. St., & Shube, J.
N. (2013). Explanation fiends and foes: How mechanistic
detail determines understanding and preference. Journal
of Consumer Research,39(5), 1115–1131. http://dx.doi.
org/10.1086/667782.
Gilovich, T., Griffin, D., & Kahneman, D. (Eds.). (2002).
Heuristics and biases: The psychology of intuitive judg-
ment. Cambridge University Press. https://doi.org/10.
1017/CBO9780511808098.
Hardwig, J. (1985). Epistemic dependence. The Journal of Philosophy, 82(7), 335–349. https://doi.org/10.2307/2026523
Harris, A. J. L., Hahn, U., Madsen, J. K., & Hsu, A. S. (2016).
The appeal to expert opinion: Quantitative support for a
bayesian network approach. Cognitive Science,40(6),
1496–1533. https://doi.org/10.1111/cogs.12276.
Johnson, D. R., Murphy, M. P., & Messer, R. M. (2016). Reflecting on explanatory ability: A mechanism for detecting gaps in causal knowledge. Journal of Experimental Psychology: General, 145(5), 573–588. https://doi.org/10.1037/xge0000161
Johnston, C. D., & Ballard, A. O. (2016). Economists
and public opinion: Expert consensus and economic pol-
icy judgments. The Journal of Politics,78(2), 443–456.
https://doi.org/10.1086/684629.
Keil, F. C. (2003). Folkscience: Coarse interpretations of a
complex reality. Trends in Cognitive Sciences,7(8), 368–
373. https://doi.org/10.1016/S1364-6613(03)00158-X.
Keil, F. C. (2010). The feasibility of folk science. Cognitive
Science,34(5), 826–862. https://doi.org/10.1111/j.1551-
6709.2010.01108.x
Keil, F. C., Stein, C., Webb, L., Billings, V. D., & Rozenblit,
L. (2008). Discerning the division of cognitive labor: An
emerging understanding of how knowledge is clustered in
other minds. Cognitive Science,32(2), 259–300. https://
doi.org/10.1080/03640210701863339
Kitcher, P. (1990). The division of cognitive labor. The
Journal of Philosophy,87(1), 5–22. https://doi.org/10.
2307/2026796
Kruger, J., & Dunning, D. (1999). Unskilled and unaware of it: How difficulties in recognizing one's own incompetence lead to inflated self-assessments. Journal of Personality and Social Psychology, 77(6), 1121–1134. https://doi.org/10.1037/0022-3514.77.6.1121
Krynski, T. R., & Tenenbaum, J. B. (2007). The role of causality in judgment under uncertainty. Journal of Experimental Psychology: General, 136(3), 430–450. https://doi.org/10.1037/0096-3445.136.3.430
Lakens, D. (2017). Equivalence tests: A practical primer for
t tests, correlation, and meta-analyses. Social Psycholog-
ical and Personality Science,8(4), 355–362. https://doi.
org/10.1177/1948550617697177
Landrum, A. R., & Mills, C. M. (2015). Developing expec-
tations regarding the boundaries of expertise. Cognition,
134, 215–231. https://doi.org/10.1016/j.cognition.2014.
10.013
Lawson, R. (2006). The science of cycology: Failures
to understand how everyday objects work. Memory &
Cognition,34(8), 1667–1675. https://doi.org/10.3758/
BF03195929
Litman, L., Robinson, J., & Abberbock, T. (2017). TurkPrime.com: A versatile crowdsourcing data acquisition platform for the behavioral sciences. Behavior Research Methods, 49(2), 433–442. http://dx.doi.org/10.3758/s13428-016-0727-z
Moore, D. A., & Healy, P. J. (2008). The trouble with overconfidence. Psychological Review, 115(2), 502–517. https://doi.org/10.1037/0033-295X.115.2.502
Motta, M., Callaghan, T., & Sylvester, S. (2018). Knowing
less but presuming more: Dunning-Kruger effects and
the endorsement of anti-vaccine policy attitudes. Social
Science & Medicine,211, 274–281. https://doi.org/10.
1016/j.socscimed.2018.06.032
Ortoleva, P., & Snowberg, E. (2015). Overconfidence in political behavior. American Economic Review, 105(2), 504–535. https://doi.org/10.1257/aer.20130921
Pierson, R. (1994). The epistemic authority of exper-
tise. PSA: Proceedings of the Biennial Meeting of the
Philosophy of Science Association,1994(1), 398–405.
https://doi.org/10.1086/psaprocbienmeetp.1994.1.193044
Rabb, N., Fernbach, P. M., & Sloman, S. A. (2019). Individ-
ual representation in a community of knowledge. Trends
in Cognitive Sciences,23(10), 891–902. https://doi.org/
10.1016/j.tics.2019.07.011
Rozenblit, L., & Keil, F. (2002). The misunder-
stood limits of folk science: an illusion of ex-
planatory depth. Cognitive Science,26(5), 521–562.
https://doi.org/10.1207/s15516709cog2605_1
Sloman, S. A., & Rabb, N. (2016). Your understanding is my understanding. Psychological Science, 27(11), 1451–1460. http://dx.doi.org/10.1177/0956797616662271
Stanovich, K. E. (2009). What intelligence tests miss: The psychology of rational thought. Yale University Press.
Turpin, M. H., Meyers, E. A., Walker, A. C., Białek, M., Stolz, J. A., & Fugelsang, J. A. (2020). The environmental malleability of base-rate neglect. Psychonomic Bulletin & Review, 27, 385–391. https://doi.org/10.3758/s13423-020-01710-1
van Prooijen, J. W., Krouwel, A. P. M., & Emmer, J.
(2017). Ideological responses to the EU refugee crisis:
The left, the right, and the extremes. Social Psycho-
logical and Personality Science. https://doi.org/10.1177/
1948550617731501
Vitriol, J. A., & Marsh, J. K. (2018). The illusion of ex-
planatory depth and endorsement of conspiracy beliefs.
European Journal of Social Psychology,48(7), 955–969.
https://doi.org/10.1002/ejsp.2504
Voelkel, J. G., Brandt, M. J., & Colombo, M. (2018). I know that I know nothing: Can puncturing the illusion of explanatory depth overcome the relationship between attitudinal dissimilarity and prejudice? Comprehensive Results in Social Psychology, 3(1), 56–78.
Vosgerau, J., Nelson, L. D., Simonsohn, U., & Simmons, J. P. (2019). 99% impossible: A valid, or falsifiable, internal meta-analysis. Journal of Experimental Psychology: General, 148(9). https://dx.doi.org/10.2139/ssrn.3271372
Wagenknecht, S. (2015). Facing the incompleteness of epis-
temic trust: Managing dependence in scientific practice.
Social Epistemology,29(2), 160–184. https://doi.org/10.
1080/02691728.2013.794872
Walters, D. J., Fernbach, P. M., Fox, C. R., & Sloman, S.
A. (2016). Known Unknowns: A Critical Determinant
of Confidence and Calibration. Management Science,
63(12), 4298–4307. https://doi.org/10.1287/mnsc.2016.
2580.
Wegner, D. M. (1987). Transactive memory: A con-
temporary analysis of the group mind. In B. Mullen
& G. R. Goethals (Eds.), Theories of Group Behavior
(pp. 185–208). Springer. https://doi.org/10.1007/978-1-
4612-4634-3_9
Wegner, D. M., Erber, R., & Raymond, P. (1991). Transac-
tive memory in close relationships. Journal of Personality
and Social Psychology,61, 923–929.
Wells, G. L., & Windschitl, P. D. (1999). Stimulus sam-
pling and social psychological experimentation. Person-
ality and Social Psychology Bulletin, 25(9), 1115–1125.
https://doi.org/10.1177/01461672992512005
West, T. G., & Plato. (1979). Plato's "Apology of Socrates": An interpretation with a new translation. Ithaca, NY: Cornell University Press.
Zagzebski, L. T. (2012). Epistemic authority: A theory of trust, authority, and autonomy in belief. Oxford University Press.
... Despite not definitively understanding how it works mechanistically, researchers continue to examine the consequences of exposing an illusion of explanatory depth (e.g., Cadario et al., 2021;Crawford & Ruscio, 2021;Littrell et al., 2022;Meyers et al., 2020;Sloman & Vives, 2022). This is likely based on the intuitive plausibility of the assumptions underlying the illusion. ...
... Recent work from our lab challenges the specificity principle. In this work, we examined the downstream psychological consequences of exposing an illusion of explanatory depth (Meyers et al., 2020). We demonstrated that people revise their opinions on economic issues more to professional economists than to random members of the public only after failing to explain how the issue worked (compared to when no explanation was required). ...
... If the illusion of explanatory depth rests on the breadth principle rather than the specificity principle the consequences would extend beyond the literature on the illusion. For instance, if a student who fails to explain something (e.g., in a Chemistry course) generalizes their feelings of ignorance to the rest of what they think they know, they may be more willing to engage with unrelated subject matter (e.g., Philosophy readings), at least for a short time and to the extent that they had otherwise been overconfident (Meyers et al., 2020). This pattern should also exist broadly, such that explanation might be an effective, indirect way of challenging someone's beliefs. ...
Preprint
Full-text available
People often overestimate their understanding of how things work. For instance, people believe they can explain even ordinary phenomena such as the operation of zippers and speedometers in greater depth than they really can. This is called the illusion of explanatory depth. Fortunately, a person can expose the illusion by attempting to generate a causal explanation for how the phenomenon operates (e.g., how a zipper works). Researchers have assumed for two decades that explanation exposes the illusion because explanation makes salient the gaps in a person’s knowledge of that phenomenon. However, recent evidence suggests that people might be able to expose the illusion by instead explaining a different phenomenon. If true, this would challenge our fundamental understanding of how the illusion works. Across three preregistered studies we tested whether the process of explaining one phenomenon (e.g., how a zipper works) would lead someone to report knowing less about a completely different phenomenon (e.g., how snow forms). In each study we found that explaining led people to report knowing less about various phenomena, regardless of what was explained. For example, people reported knowing less about how snow forms after attempting to explain how a zipper works. We discuss alternative accounts of the illusion of explanatory depth that might better fit our results. We also consider the utility of explanation as an indirect, non-confrontational debiasing method in which a person generalizes a feeling of ignorance about one phenomenon to their knowledge base more generally.
... Attempting to generate these explanations requires engagement in a type of detail-oriented reflective thinking from which a person may be made aware of previously unrealized gaps in their knowledge. This realization can be an epistemically humbling experience for some people (Meyers et al., 2020;Rozenblit & Keil, 2002) and often results in a reduced (and arguably more accurate) self-assessment of knowledge immediately after the explanation task. ...
... Although early examinations of the illusion of explanatory depth (e.g., Mills & Keil, 2004;Rozenblit & Keil, 2002) largely focused on participants' knowledge of mechanical operations (e.g., how a helicopter flies), similar results have been found for other knowledge domains. For example, an illusion of understanding has been exposed for knowledge of economic and political issues (Fernbach et al., 2013;Meyers et al., 2020) as well as knowledge of natural phenomenon such as how snow forms and what causes earthquakes (Rozenblit & Keil, 2002). Given these findings, as well as bullshit receptivity's positive associations with different types of overconfidence and negative associations with analytic thinking (e.g., , it is reasonable to hypothesize that an illusion of understanding may also underlie receptivity to various types of misinformation. ...
... Receptivity to misinformation is positively related to several common forms of overconfidence and may result from either faulty, error-prone rationalizing or failing to engage in sufficient reflective thinking at all Lyons et al., 2021;Salovich & Rapp, 2021). Additionally, research has consistently shown that an illusion of understandinga type of epistemic overconfidencecan be exposed and attenuated by engaging in a form of guided, explanatory reflection (e.g., Meyers et al., 2020;Rozenblit & Keil, 2002;Vitriol & Marsh, 2018). However, the bulk of the evidence linking misinformation receptivity to reflective thinking is correlational and experimental examination of these putative associations is largely absent from the literature. ...
Preprint
Full-text available
Across four studies (N = 818), we present evidence that engaging in guided, explanatory reflection reduces receptivity to pseudo-profound bullshit but not scientific bullshit or fake news. We also found robust effects of source credibility, in that ratings for pseudo-profound and scientific bullshit attributed to authoritative sources were significantly inflated compared to bullshit attributed to anonymous sources. However, these effects did not extend to accuracy ratings of fake news headlines. These findings provide initial evidence that an illusion of understanding may underlie receptivity to some types of misinformation but not others and that the appeal of misinformation spread by perceived experts may be largely immune to the putative benefits of interventions that rely solely on reflective thinking. Taken together, our results suggest that while encouraging the public to be more reflective can certainly be helpful as a general rule, the effectiveness of this strategy in reducing the persuasiveness of misinformation is limited by the type of misinformation one is exposed to as well as the perceived credibility of the source spreading it.
... Even when a person desires to be intellectually humble, recognizing the limits of one's knowledge requires overcoming metacognitive limitations that distort self-appraisal. For example, people tend to confidently overestimate how much they know about various phenomena -such as how a zip fastener works, how snow forms or how a helicopter takes flight -and become aware of their lack of knowledge only after failing to explain the phenomenon [61][62][63][64] . Moreover, people often fail to distinguish their knowledge from the knowledge of other people. ...
... In a series of studies, people overestimated their self-reported knowledge of a policy less after writing a detailed explanation of how that policy works, thereby recognizing that their knowledge of the policy was less complete than they originally thought (overcoming the 'illusion of understanding') 63,121,122 . Likewise, people reported less confidence when answering a question if they first identified their 'known unknowns' by listing two things they did not know 123 . ...
Article
Full-text available
In a time of societal acrimony, psychological scientists have turned to a possible antidote — intellectual humility. Interest in intellectual humility comes from diverse research areas, including researchers studying leadership and organizational behaviour, personality science, positive psychology, judgement and decision-making, education, culture, and intergroup and interpersonal relationships. In this Review, we synthesize empirical approaches to the study of intellectual humility. We critically examine diverse approaches to defining and measuring intellectual humility and identify the common element: a meta-cognitive ability to recognize the limitations of one’s beliefs and knowledge. After reviewing the validity of different measurement approaches, we highlight factors that influence intellectual humility, from relationship security to social coordination. Furthermore, we review empirical evidence concerning the benefits and drawbacks of intellectual humility for personal decision-making, interpersonal relationships, scientific enterprise and society writ large. We conclude by outlining initial attempts to boost intellectual humility, foreshadowing possible scalable interventions that can turn intellectual humility into a core interpersonal, institutional and cultural value. Intellectual humility involves acknowledging the limitations of one’s knowledge and that one’s beliefs might be incorrect. In this Review, Porter and colleagues synthesize concepts of intellectual humility across fields and describe the complex interplay between intellectual humility and related individual and societal factors.
... Accordingly, research further emphasised that ambivalent individuals are more aware that their own perspectives are only a limited section of a variety of information (Grossmann et al., 2020;Rothman & Melwani, 2017;Vogus et al., 2014). Being aware that one's knowledge is limited is in turn thought to increase the reliance on experts' opinions (Meyers et al., 2020). To clarify, individuals seem to increase the weight of experts' opinions over that of lay people, when they doubt their knowledge and realise that they know less than assumed (Meyers et al., 2020). ...
... Being aware that one's knowledge is limited is in turn thought to increase the reliance on experts' opinions (Meyers et al., 2020). To clarify, individuals seem to increase the weight of experts' opinions over that of lay people, when they doubt their knowledge and realise that they know less than assumed (Meyers et al., 2020). Applying this to permanent crises, a reliance on expert opinions might be relevant, because it suggests a higher belief in the guidance of respective permanent crises experts. ...
Thesis
Policy often tries to change individuals' counterproductive behaviours regarding permanent crises-such as the COVID-19 pandemic or climate crisis-through interventions. This study aims to establish a new instrument for these interventions by investigating whether emotional ambivalence-the simultaneous experience of positive and negative emotions-can be utilised to enhance adaptive behaviours in permanent crises. Drawing on the COMB system (Michie et al., 2011), it shall thereby be explored whether individuals who are primed to experience emotional ambivalence can respond to it by entering a cognitive flexible mindset. The particular capabilities and motivations of this mindset are hypothesized to subsequently foster adaptive behaviours in permanent crises. Additionally, it is examined whether mindfulness-individuals' tendency to be aware and accepting of the present-moment-moderates the link between emotional ambivalence and the cognitive flexible mindset; this completes a moderated mediation model. An online-experiment with 123 participants was conducted to test the predicted relations in the context of the COVID-19 pandemic. The results reveal that individuals report a significantly higher motivation to adapt and social distance, when they are treated to experience emotional ambivalence rather than singular emotions. That said, emotional ambivalence fails to robustly increase the motivation to wear a mask and individuals' attitudes about the COVID-19 pandemic practices. Likewise, the data does not robustly support the predicted underlying mechanism of the cognitive flexible mindset and mindfulness. The results are discussed, shedding light on implications for theory and practice, as well as pointing on limitations and directions for future research.
... Kruger and Dunning's (1999) four successive research studies showed that people who had a low level of performance in a given task or lacked certain abilities usually tended to exaggerate or inflate their abilities or competencies (self-efficacy) and their evaluation of personal performance. Kruger and Dunning's (1999) studies identified that the people who performed poorly were more likely to overestimate or inflate their abilities or skills, while people who showed a better performance were more likely to underestimate their abilities and performance (Folk, 2016;Jansen et al., 2021;Meyers et al., 2020). This phenomenon was later labelled the Dunning-Kruger effect. ...
Article
Purpose This study compared the results of self-report and ability-based tests of problem-solving abilities of 144 hospitality managers working at hotels and restaurants through an online survey. In the first stage of the study, the managers were asked to fill in the self-report problem-solving ability scale by Tesone et al. (2010). In the second stage of the study, the managers were asked to respond to questions in a case-study-based problem-solving test. Design/methodology/approach Problem-solving is a key aspect of business process management. This study aims to investigate and compare hospitality managers' actual and claimed (self-report) problem-solving abilities. A lack of unawareness of the actual level of skills may be an important problem as managers who tend to have inflated self-efficacy beliefs are less likely to allocate resources, e.g. time, money and effort, to develop a particular skill or ability they lack. They are also more likely to take risks regarding that skill or ability. Findings The results of the study showed that there was a major difference between the results of the self-report test and the actual test. This meant that the managers who participated in the study had inflated self-efficacy beliefs regarding their problem-solving abilities, i.e. they operated under the influence of the Dunning–Kruger effect. The study showed that self-report tests that are commonly used in businesses in recruitment and promotion may not provide a correct level of people's abilities. In general, managers who have inflated self-efficacy beliefs are less likely to be interested in developing a particular skill due to the overconfidence arising from their inflated self-efficacy beliefs. The study showed that managers were less likely to allocate resources, e.g. time, money and effort, to develop a particular skill they lack and are more likely to take risks regarding that particular skill. Practical implications Managers in the hospitality industry appear to lack problem solving-abilities. While the hospitality managers assigned high marks for their problem-solving abilities in a self-report problem-solving scale and appeared to be performing significantly good overall in problem-solving, they performed poorly in an actual problem solving exercise. It is recommended that businesses rather than depending on self-report problem-solving scales, they should resort to ability-based scales or exercises that actually measure managers' problem-solving abilities. Also, as managers who had formal tourism and hospitality education performed poorly, tourism and hospitality programme managers at universities are recommend to review their syllabi and curriculum so as to help support their graduates' problem-solving abilities. Originality/value The study is original as no previous study compared managers' problem-solving abilities by using self-report and ability-based tests. The study has implications for researchers in terms of developing knowledge, ability and skill-based scales in the future. The study has also significant practical implications for the practitioners.
... This approach could draw on inoculation and debunking research, which has shown that misleading (science) communication has less impact on people if they receive messages that refute the content or expose the strategies of such communication -either before people encounter it (Compton et al., 2021 ;Cook et al. ;van der Linden et al., 2017) or after they did so (Porter et al., 2019 ;Schmid & Betsch, 2019 ;Zhang et al., 2021). Another promising technique could be to make people aware that they might undervalue the opinion of experts and overestimate their own (Meyers et al., 2020). Strategies like these may be particularly effective for people with a pronounced affinity to science-related populism: Schmid-Petri and Bürger (2022), for example, found that exposing the misinformation techniques of science skeptics is more likely to reduce climate change denial among people with strong populist attitudes. ...
Thesis
Populist and anti-intellectual sentiments pose a considerable challenge to science and science communication in many countries worldwide. One proliferating variant of such sentiments can be conceived as science-related populism. Science-related populism criticizes that scientists, scholars, and experts supposedly determine how society produces ‘true knowledge’ and communicates about it, because they are seen as members of an academic elite which allegedly applies unreliable methods, is ideologically biased – and ignores that the common sense of ordinary people ought to be superior to scientific knowledge. Accordingly, science-related populism assumes that the ordinary people, and not academic elites, should be in charge for the production and communication of ‘true knowledge’. Scholarly and journalistic accounts suggested that science-related populism can have negative implications for the legitimacy of scientific expertise in society and societal discourse about science. However, there has been neither a conceptual framework nor empirical methods and evidence to evaluate these accounts. This cumulative dissertation addresses this deficit: It includes five articles that present a conceptualization of science-related populism (Article I), a survey scale to measure science-related populist attitudes (Article II), empirical findings on these attitudes and related perceptions (Article II, Article III, and Article IV), and a discussion of populist demands toward science communication (Article V). The synopsis scrutinizes the arguments and results published in these articles in three ways: First, it discusses further theoretical considerations on science-related populism, advantages and challenges of its measurement, and broader contexts of empirical evidence on it. Second, it describes implications of science-related populism for communication and discourse about science, and proposes ways in which these implications can be addressed in science communication practice. Third, it considers how scholarship of science-related populism can advance social-scientific research on populism and anti-scientific resentments and could develop in the future.
... One option may be to encourage people to try to explain the mechanisms underlying the complex scientific phenomena at issue. This has been shown to reduce subjective knowledge (33,44) and increase deference to experts (45). Another way to potentially make feelings of ignorance more salient to people is to give them reference points. ...
Article
Full-text available
Public attitudes that are in opposition to scientific consensus can be disastrous and include rejection of vaccines and opposition to climate change mitigation policies. Five studies examine the interrelationships between opposition to expert consensus on controversial scientific issues, how much people actually know about these issues, and how much they think they know. Across seven critical issues that enjoy substantial scientific consensus, as well as attitudes toward COVID-19 vaccines and mitigation measures like mask wearing and social distancing, results indicate that those with the highest levels of opposition have the lowest levels of objective knowledge but the highest levels of subjective knowledge. Implications for scientists, policymakers, and science communicators are discussed.
... The effect of explanation on humility, on people's sense of understanding, has been replicated many times (Gaviria et al., 2017;Johnson et al., 2016;Vitriol & Marsh, 2018;Voelkel et al., 2018;Zeveney & Marsh, 2016), although support for the effect on extremity is mixed. Crawford and Ruscio (2021) reported a failure to replicate our effect on extremity, although Meyers et al. (2020) found that inducing feelings of ignorance through explanation made people more receptive to the expert opinion of economists. ...
Article
My first 30‐odd years of research in cognitive science has been driven by an attempt to balance two facts about human thought that seem incompatible and two corresponding ways of understanding information processing. The facts are that, on one hand, human memories serve as sophisticated pattern recognition devices with great flexibility and an ability to generalize and predict as long as circumstances remain sufficiently familiar. On the other hand, we are capable of deploying an enormous variety of representational schemes that map closely onto articulable structure in the world and that support explanation even in unfamiliar circumstances. The contrasting ways of modeling such processes involve, first, more and more sophisticated associative models that capture progressively higher‐order statistical structure and, second, more powerful representational languages for other sorts of structure, especially compositional and causal structure. My efforts to rectify these forces have taken me from the study of memory to induction and category knowledge to causal reasoning. In the process, I have consistently appealed to dual systems of thinking. I have come to realize that a key reason for our success as cognizers is that we rely on others for most of our information processing needs; we live in a community of knowledge. We make use of others both intuitively—by outsourcing much of our thinking without knowing we are doing it—and by deliberating with others.
... We suggest that a high admirability speaker may serve as a cue in inspiring motivation to resolve the experienced uncertainty when encountering PPBS, beyond what would be expected if a low admirability individual were to have shared it. Specifically, encountering an ambiguous statement from an admirable speaker might prompt the epistemic process of meaning-seeking (Fry, 1998), along with an open-minded inquiry into one's limits of knowledge (Meyers et al., 2020) and willingness to consider the perspectives of others (Brienza et al., 2018) i.e., characteristics philosophers and empirical scientists attribute to wisdom (Grossmann et al., 2020b). However, if a low admirability individual were to share a PPBS statement, there would likely be little interest in making sense of what was said, and the statement would simply be dismissed. ...
Preprint
Full-text available
How do people reason in response to ambiguous messages shared by admirable individuals? Using behavioral markers and self-report questionnaires, in two experiments (N = 571) we examined the influence of speakers’ admirability on meaning-seeking and wise reasoning in response to pseudo-profound bullshit. In both studies, statements that sounded superficially impressive but lacked intent to communicate meaning generated meaning-seeking, but only when delivered by high admirability speakers (e.g., the Dalai Lama) as compared to low admirability speakers (e.g., Kim Kardashian). The effect of speakers’ admirability on meaning-seeking was unique to pseudo-profound bullshit statements and was absent for mundane (Study 1) and motivational (Study 2) statements. In Study 2, participants also engaged in wiser reasoning for pseudo-profound bullshit (vs. motivational) statements and did more so when speakers were high in admirability. These effects occurred independently of the amount of time spent on statements or the complexity of participants’ reflections. It appears that pseudo-profound bullshit can promote epistemic reflection and certain aspects of wisdom, when associated with an admirable speaker.
Article
Polarization is rising in most countries in the West. How can we reduce it? One potential strategy is to ask people to explain how a political policy works—how it leads to consequences— because that has been shown to induce a kind of intellectual humility: Explanation causes people to reduce their judgments of understanding of the issues (their “illusion of explanatory depth”). It also reduces confidence in attitudes about the policies; people become less extreme. Some attempts to replicate this reduction of polarization have been unsuccessful. Is the original effect real or is it just a fluke? In this paper, we explore the effect using more timely political issues and compare judgments of issues whose attitudes are grounded in consequentialist reasoning versus protected values. We also investigate the role of social proof. We find that understanding and attitude extremity are reduced after explanation but only for consequentialist issues, not those based on protected values. There was no effect of social proof.
Article
Full-text available
Across two experiments (N=799) we demonstrate that people’s use of quantitative information (e.g., base-rates) when making a judgment varies as the causal link of qualitative information (e.g., stereotypes) changes. That is, when a clear causal link for stereotypes is provided, people make judgments that are far more in line with them. When the causal link is heavily diminished, people readily incorporate non-causal base-rates into their judgments instead. We suggest that people use and integrate all of the information that is provided to them to make judgements, but heavily prioritize information that is causal in nature. Further, people are sensitive to the underlying causal structures in their environment and adapt their decision making as such.
Article
Full-text available
There is widespread agreement among scientists that genetically modified foods are safe to consume and have the potential to provide substantial benefits to humankind3. However, many people still harbour concerns about them or oppose their use. In a nationally representative sample of US adults, we find that as extremity of opposition to and concern about genetically modified foods increases, objective knowledge about science and genetics decreases, but perceived understanding of genetically modified foods increases. Extreme opponents know the least, but think they know the most. Moreover, the relationship between self-assessed and objective knowledge shifts from positive to negative at high levels of opposition. Similar results were obtained in a parallel study with representative samples from the United States, France and Germany, and in a study testing attitudes about a medical application of genetic engineering technology (gene therapy). This pattern did not emerge, however, for attitudes and beliefs about climate change.
Article
Full-text available
The average person possesses superficial understanding of complex causal relations and, consequently, tends to overestimate the quality and depth of their explanatory knowledge. In this study, we examined the role of this illusion of explanatory depth (IOED) in politics– inflated confidence in one's causal understanding of political phenomena– for endorsement of conspiracy beliefs. Utilizing a pre‐/post‐election panel design and a large sample of U.S. Citizens (N=394) recruited in the context of the 2016 Presidential Election, we provide evidence that political IOED, but not a non‐political IOED, was associated with increased support for general and election‐specific conspiracy beliefs, particularly among political novices and supporters of the losing candidate. We find this pattern of results net the influence of a broad range of variables known to covary with conspiracy beliefs. Implications for theory and the need for future research are discussed. This article is protected by copyright. All rights reserved.
Article
Full-text available
People are prejudiced toward groups they perceive as having a worldview dissimilar from their own. This link between perceived attitudinal dissimilarity and prejudice is so stable that it has been described as a psychological law. The current research tests whether reducing people’s (over-)confidence in their own understanding of policies by puncturing their illusion of explanatory depth in the political domain will reduce the link between perceived attitudinal dissimilarity and prejudice. In an initial pre-registered experiment (N = 296), we did not find support for our hypothesis, but exploratory analyses indicated that the hypothesized effect occurred for political moderates (but not for people who identified as strong liberals/conservatives). However, despite successfully manipulating people’s understanding of policies, in the main study (N = 492) we did not replicate the result of the initial experiment. We suggest potential explanations for our results and discuss future directions for research on breaking the link between attitudinal dissimilarity and prejudice.
Article
Full-text available
We propose that an important determinant of judged confidence is the evaluation of evidence that is unknown or missing, and overconfidence is often driven by the neglect of unknowns. We contrast this account with prior research suggesting that overconfidence is due to biased processing of known evidence in favor of a focal hypothesis. In Study 1, we asked participants to list their thoughts as they answered two-alternative forced-choice trivia questions and judged the probability that their answers were correct. Participants who thought more about unknowns were less overconfident. In Studies 2 and 3, we asked participants to list unknowns before assessing their confidence. “Considering the unknowns” reduced overconfidence substantially and was more effective than the classic “consider the alternative” debiasing technique. Moreover, considering the unknowns selectively reduced confidence in domains where participants were overconfident but did not affect confidence in domains where participants were well-calibrated or underconfident. Data, as supplemental material, are available at https://doi.org/10.1287/mnsc.2016.2580 . This paper was accepted by Yuval Rottenstreich, judgment and decision making.
Article
Full-text available
The 2016 European Union (EU) refugee crisis exposed a fundamental distinction in political attitudes between the political left and right. Previous findings suggest, however, that besides political orientation, ideological strength (i.e., political extremism) is also relevant to understand such distinctive attitudes. Our study reveals that the political right is more anxious, and the political left experiences more self-efficacy, about the refugee crisis. At the same time, the political extremes—at both sides of the spectrum—are more likely than moderates to believe that the solution to this societal problem is simple. Furthermore, both extremes experience more judgmental certainty about their domain-specific knowledge of the refugee crisis, independent of their actual knowledge. Finally, belief in simple solutions mediated the relationship between ideology and judgmental certainty, but only among political extremists. We conclude that both ideological orientation and strength matter to understand citizens’ reactions to the refugee crisis.
Article
An individual's knowledge is collective in at least two senses: it often comes from other people's testimony, and its deployment in reasoning and action requires accuracy underwritten by other people's knowledge. What must one know to participate in a collective knowledge system? Here, we marshal evidence that individuals retain detailed causal information for a few domains and coarse causal models embedding markers indicating that these details are available elsewhere (others' heads or the physical world) for most domains. This framework yields further questions about metacognition, source credibility, and individual computation that are theoretically and practically important. Belief polarization depends on the web of epistemic dependence and is greatest for those who know the least, plausibly due to extreme conflation of others' knowledge with one's own.
Article
Several researchers have relied on, or advocated for, internal meta-analysis, which involves statistically aggregating multiple studies in a paper to assess their overall evidential value. Advocates of internal meta-analysis argue that it provides an efficient approach to increasing statistical power and solving the file-drawer problem. Here we show that the validity of internal meta-analysis rests on the assumption that no studies or analyses were selectively reported. That is, the technique is only valid if (1) all conducted studies were included (i.e., an empty file drawer), and (2) for each included study, exactly one analysis was attempted (i.e., there was no p-hacking). We show that even very small doses of selective reporting invalidate internal meta-analysis. For example, the kind of minimal p-hacking that increases the false-positive rate of one study to just 8% increases the false-positive rate of a 10-study internal meta-analysis to 83%. If selective reporting is approximately zero, but not exactly zero, then internal meta-analysis is invalid. To be valid, (1) an internal meta-analysis would need to exclusively contain studies that were properly pre-registered, (2) those pre-registrations would have to be followed in all essential aspects, and (3) the decision of whether to include a given study in an internal meta-analysis would have to be made before any of those studies are run.
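The compounding mechanism behind these numbers can be illustrated with a small Monte Carlo sketch. The p-hacking model below (reporting the better of two correlated outcomes, a classic minimal hack) is an assumption chosen for simplicity, so the exact rates will not match the paper's 8%/83% figures, but the qualitative pattern is the same: a per-study rate near 8% balloons far past the nominal 5% once ten such studies are meta-analyzed.

```python
import numpy as np

rng = np.random.default_rng(1)

# A minimal p-hack: each null study measures two correlated outcomes
# (r = .5) and reports whichever test statistic favors the hypothesis.
def hacked_study_z(n_sims, r=0.5):
    z = rng.multivariate_normal([0.0, 0.0], [[1.0, r], [r, 1.0]], size=n_sims)
    return z.max(axis=1)

n_sims, k_studies, crit = 100_000, 10, 1.645  # one-sided alpha = .05

# Per-study false-positive rate under this hack (roughly 8%).
print("per-study FPR:", (hacked_study_z(n_sims) > crit).mean())

# "Internal meta-analysis": Stouffer's combined z across 10 such studies.
# Each study's small upward bias compounds rather than averaging out,
# so the meta-analytic false-positive rate lands far above 5%.
study_z = np.stack([hacked_study_z(n_sims) for _ in range(k_studies)])
meta_z = study_z.sum(axis=0) / np.sqrt(k_studies)
print("10-study meta-analytic FPR:", (meta_z > crit).mean())
```

Reproducing the paper's exact 83% would require its specific p-hacking model; this sketch only shows why selective reporting that is negligible at the single-study level is not negligible at the aggregate level.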
Article
Objective: Although the benefits of vaccines are widely recognized by medical experts, public opinion about vaccination policies is mixed. We analyze public opinion about vaccination policies to assess whether Dunning-Kruger effects can help to explain anti-vaccination policy attitudes. Rationale: People low in autism awareness (that is, the knowledge of basic facts and dismissal of misinformation about autism) should be the most likely to think that they are better informed than medical experts about the causes of autism (a Dunning-Kruger effect). This "overconfidence" should be associated with decreased support for mandatory vaccination policies and skepticism about the role that medical professionals play in the policymaking process. Method: In an original survey of U.S. adults (N = 1310), we modeled self-reported overconfidence as a function of responses to a knowledge test about the causes of autism, and the endorsement of misinformation about a link between vaccines and autism. We then modeled anti-vaccination policy support and attitudes toward the role that experts play in the policymaking process as a function of overconfidence and the autism awareness indicators while controlling for potential confounding factors. Results: More than a third of respondents in our sample thought that they knew as much or more than doctors (36%) and scientists (34%) about the causes of autism. Our analysis indicates that this overconfidence is highest among those with low levels of knowledge about the causes of autism and those with high levels of misinformation endorsement. Further, our results suggest that this overconfidence is associated with opposition to mandatory vaccination policy. Overconfidence is also associated with increased support for the role that non-experts (e.g., celebrities) play in the policymaking process. Conclusion: Dunning-Kruger effects can help to explain public opposition to vaccination policies and should be carefully considered in future research on anti-vaccine policy attitudes.
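As a rough illustration of the modeling approach the Method section describes (overconfidence regressed on knowledge and misinformation endorsement), here is a hedged sketch on simulated stand-in data. The variable names, coefficients, and sample are hypothetical; the actual survey measures, controls, and specification belong to the paper.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)

# Simulated stand-in survey (all names and effects are invented):
# knowledge = autism knowledge-test score, misinfo = endorsement of
# vaccine-autism misinformation, overconf = 1 if the respondent claims
# to know as much or more than doctors about the causes of autism.
n = 1000
knowledge = rng.normal(size=n)
misinfo = rng.normal(size=n)
logit_p = -0.5 - 0.8 * knowledge + 0.9 * misinfo
overconf = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)
df = pd.DataFrame({"overconf": overconf,
                   "knowledge": knowledge, "misinfo": misinfo})

# Overconfidence modeled via logistic regression, as the abstract
# describes (the paper additionally controls for confounders).
fit = smf.logit("overconf ~ knowledge + misinfo", data=df).fit(disp=0)
print(fit.summary())
```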
Article
To what extent do survey experimental treatment effect estimates generalize to other populations and contexts? Survey experiments conducted on convenience samples have often been criticized on the grounds that subjects are sufficiently different from the public at large to render the results of such experiments uninformative more broadly. In the presence of moderate treatment effect heterogeneity, however, such concerns may be allayed. I provide evidence from a series of 15 replication experiments that results derived from convenience samples like Amazon’s Mechanical Turk are similar to those obtained from national samples. Either the treatments deployed in these experiments cause similar responses for many subject types or convenience and national samples do not differ much with respect to treatment effect moderators. Using evidence of limited within-experiment heterogeneity, I show that the former is likely to be the case. Despite a wide diversity of background characteristics across samples, the effects uncovered in these experiments appear to be relatively homogeneous.
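The comparison at the heart of these replications is the same treatment effect estimated by difference in means in each sample. A toy sketch with simulated data follows; the sample labels, sizes, and effect values are hypothetical, with the true effects set nearly homogeneous to mirror the abstract's conclusion.

```python
import numpy as np

rng = np.random.default_rng(42)

def ate(outcome, treated):
    """Difference-in-means estimate of the average treatment effect."""
    return outcome[treated].mean() - outcome[~treated].mean()

# Two hypothetical samples receiving the same survey-experimental
# treatment; when effects are (nearly) homogeneous, the estimates agree.
for label, true_effect in [("convenience sample", 0.30),
                           ("national sample", 0.28)]:
    treated = rng.random(2000) < 0.5          # random assignment
    outcome = rng.normal(size=2000) + true_effect * treated
    print(f"{label}: ATE = {ate(outcome, treated):.2f}")
```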