Unskilled and Unaware—But Why?
A Reply to Krueger and Mueller (2002)
Justin Kruger
University of Illinois

David Dunning
Cornell University
J. Kruger and D. Dunning (1999) argued that the unskilled suffer a dual burden: Not only do they perform
poorly, but their incompetence robs them of the metacognitive ability to realize it. J. Krueger and R. A.
Mueller (2002) replicated these basic findings but interpreted them differently. They concluded that a
combination of the better-than-average (BTA) effect and a regression artifact better explains why the
unskilled are unaware. The authors of the present article respectfully disagree with this proposal and
suggest that any interpretation of J. Krueger and R. A. Mueller’s results is hampered because those
authors used unreliable tests and inappropriate measures of relevant mediating variables. Additionally, a
regression–BTA account cannot explain the experimental data reported in J. Kruger and D. Dunning or
a reanalysis following the procedure suggested by J. Krueger and R. A. Mueller.
In 1999, we published an article (Kruger & Dunning, 1999)
suggesting that the skills that enable one to perform well in a
domain are often the same skills necessary to be able to recognize
good performance in that domain. As a result, when people are
unskilled in a domain (as everyone is in one domain or another),
they lack the metacognitive skills necessary to realize it. To test
this hypothesis, we conducted a series of studies in which we
compared perceived and actual skill in a variety of everyday
domains. Our predictions were borne out: Across the various
studies, poor performers (i.e., those in the bottom quartile of those
tested) overestimated their percentile rank by an average of 50
percentile points.
Along the way, we also discovered that top performers, although
they estimated their raw test scores relatively accurately, slightly
but reliably underestimated their comparative performance, that is,
their percentile rank among their peers. Although not central to our
hypothesis, we reasoned that top performers might underestimate
themselves relative to others because they have an inflated view of
the competence of their peers, as predicted by the well-
documented false consensus effect (Ross, Greene, & House, 1977)
or, as Krueger and Mueller (2002) termed it, a social-projection
error.
Krueger and Mueller (2002) replicated some of our original
findings, but not others. As in Kruger and Dunning (1999), they
found that poor performers vastly overestimate themselves and
show deficient metacognitive skills in comparison with their more
skilled counterparts. Krueger and Mueller also replicated our find-
ing that top performers underestimate their comparative ranking.
They did not find, however, that metacognitive skills or social
projection mediate the link between performance and miscalibra-
tion. Additionally, they found that correcting for test unreliability
reduces or eliminates the apparent asymmetry in calibration be-
tween top and bottom performers. They thus concluded that a
regression artifact, coupled with a general better-than-average
(BTA) effect, is a more parsimonious account of our original
findings than our metacognitive one is.
In the present article we outline some of our disagreements with
Krueger and Mueller’s (2002) interpretation of our original find-
ings. We suggest that the reason the authors failed to find medi-
ational evidence was because of their use of unreliable tests and
inappropriate measures of our proposed mediators. Additionally,
we point out that the regression–BTA account is inconsistent with
the experimental data we reported in our original article, as well as
with the results of a reanalysis of those data using their own
analytical procedure.
Does Regression Explain the Results?
The central point of Krueger and Mueller’s (2002) critique is
that a regression artifact, coupled with a general BTA effect, can
explain the results of Kruger and Dunning (1999). As they noted,
all psychometric tests involve error variance, thus “with repeated
testing, high and low test scores regress toward the group average,
and the magnitude of these regression effects is proportional to the
size of the error variance and the extremity of the initial score”
(Krueger & Mueller, 2002, p. 184). They go on to point out that “in
the Kruger and Dunning (1999) paradigm, unreliable actual per-
centiles mean that the poorest performers are not as deficient as
they seem and that the highest performers are not as able as they
seem” (p. 184).
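To see what the regression argument predicts in its purest form, the following simulation is a minimal sketch of our own construction (Python with NumPy; the distributions and parameter values are illustrative assumptions, not Krueger and Mueller's procedure). It shows that when test scores contain random error, people selected for low observed scores appear to overestimate themselves and people selected for high observed scores appear to underestimate themselves, even if every simulated person knows his or her true standing exactly.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Illustrative assumptions (ours, not Krueger and Mueller's): true skill and
# measurement error are both standard normal, giving a test reliability of
# roughly .5.
true_skill = rng.normal(0.0, 1.0, n)
observed_score = true_skill + rng.normal(0.0, 1.0, n)

def percentile_rank(x):
    # Rank each value and rescale to a 0-100 percentile scale.
    return 100.0 * np.argsort(np.argsort(x)) / (len(x) - 1)

observed_pct = percentile_rank(observed_score)
true_pct = percentile_rank(true_skill)

# Suppose every simulated person reports his or her TRUE percentile exactly,
# i.e., perfect self-insight and no better-than-average bias at all.
estimated_pct = true_pct

bottom = observed_pct <= 25
top = observed_pct >= 75
print("bottom quartile, mean (estimate - observed):",
      round(float((estimated_pct[bottom] - observed_pct[bottom]).mean()), 1))
print("top quartile, mean (estimate - observed):",
      round(float((estimated_pct[top] - observed_pct[top]).mean()), 1))
# Purely because of measurement error, the bottom quartile appears to
# overestimate and the top quartile appears to underestimate.
```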
Although we agree that test unreliability can contribute to the
apparent miscalibration of top and bottom performers, it cannot
fully explain this miscalibration. If it did, then controlling for test
reliability, as Krueger and Mueller (2002) do in Figure 2, should
cause the asymmetry to disappear. Although this was the case for
the difficult test that Krueger and Mueller used, this was inevitable
given that the test was extremely unreliable (Spearman–Brown = .17). On their easy test, which had moderate reliability of .56, low-scoring participants still overestimated themselves (by approximately 30 percentile points) even after controlling for test
unreliability, just as the metacognitive account predicts. When
even more reliable tests are used, the regression account is even
less plausible. For instance, in Study 4 of Kruger and Dunning
(1999), in which test reliability was quite high (Spearman–Brown = .93), controlling for test unreliability following the
procedure outlined by Krueger and Mueller failed to change the
overall picture. As Figure 1 of this article shows, even after
controlling for test unreliability, low-scoring participants contin-
ued to overestimate their percentile score by nearly 40 points (and
high scorers still underestimated themselves). In sum, although we
agree with Krueger and Mueller that measurement error can con-
tribute to some of the apparent miscalibration among top and
bottom scorers, it does not, as Figure 1 of this article and Figure 2
of theirs clearly show, account for all of it.
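We do not reproduce Krueger and Mueller's exact computational procedure here; the following is only a sketch of the regression-based logic, under the assumption that "controlling for test unreliability" amounts to shrinking each actual percentile toward the group mean in proportion to the reliability coefficient (Python with NumPy; the data values are hypothetical, not the published ones).

```python
import numpy as np

def corrected_gap(estimated_pct, actual_pct, reliability, group_mean=50.0):
    """Mean (estimated - actual) percentile gap after shrinking each actual
    percentile toward the overall group mean by the reliability coefficient,
    i.e., the regression-based logic of a correction for unreliability."""
    estimated_pct = np.asarray(estimated_pct, dtype=float)
    actual_pct = np.asarray(actual_pct, dtype=float)
    corrected_actual = group_mean + reliability * (actual_pct - group_mean)
    return (estimated_pct - corrected_actual).mean()

# Hypothetical bottom-quartile data (NOT the published values).
est = np.array([60.0, 55.0, 62.0, 58.0])   # perceived percentile rank
act = np.array([12.0, 10.0, 15.0, 13.0])   # actual percentile rank

# With a highly reliable test (.93) the correction barely moves the actual
# scores, so a large overestimate survives; with an extremely unreliable
# test (.17) most of the apparent miscalibration is absorbed by the correction.
print(round(corrected_gap(est, act, reliability=0.93), 1))  # about 43.6
print(round(corrected_gap(est, act, reliability=0.17), 1))  # about 15.1
```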
Do Metacognition and Social Projection
Mediate Miscalibration?
Krueger and Mueller (2002) did more than merely suggest an
alternative interpretation of our data; they also called into question
our interpretation. Specifically, although they found evidence that
poor performers show lesser metacognitive skills than top per-
formers, they failed to find that these deficiencies mediate the link
between performance and miscalibration.
The fact that these authors failed to find mediational evidence is
hardly surprising, however, in light of the fact that the tests they
used to measure performance, as the authors themselves recog-
nized, were either moderately unreliable or extremely so. It is
difficult for a mediator to be significantly correlated with a crucial
variable, such as performance, when that variable is not measured
reliably.
In addition, even if the tests were reliable, we would be sur-
prised if the authors had found evidence of mediation because their
measures of metacognitive skills did not adequately capture what
that skill is. Metacognitive skill, traditionally defined, is the ability
to anticipate or recognize accuracy and error (Metcalfe & Shi-
mamura, 1994). Krueger and Mueller (2002) operationalized this
variable by correlating, across items, participants' confidence in their answers and the accuracy of those answers. The higher the correlation, the better they deemed the individual's metacognitive skills.
There are several problems with this measure, however. Principal
among them is the fact that a high correlation between judgment
and reality does not necessarily imply high accuracy, nor does a
low correlation imply the opposite. To see why, consider an
example inspired by Campbell and Kenny (1999) of two weather
forecasters, Rob and Laura. As Table 1 shows, although Rob's predictions are perfectly correlated with the actual temperatures, Laura's are more accurate: Whereas Rob's predictions are off by an average of 48 degrees, Laura's are off by a mere 7.
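The contrast is easy to verify directly from the values in Table 1; here is a minimal sketch (Python with NumPy assumed) that computes the correlational measure and the deviational measure for each forecaster.

```python
import numpy as np

# Table 1 values (degrees Fahrenheit), Monday through Friday.
actual = np.array([70.0, 80.0, 60.0, 70.0, 90.0])
rob    = np.array([20.0, 35.0,  5.0, 20.0, 50.0])
laura  = np.array([65.0, 75.0, 70.0, 75.0, 80.0])

def report(name, forecast):
    r = np.corrcoef(actual, forecast)[0, 1]      # correlational measure
    dev = np.mean(np.abs(forecast - actual))     # deviational measure
    print(f"{name}: r = {r:.2f}, average deviation = {dev:.0f} degrees")

report("Rob", rob)      # perfectly correlated (r = 1.00), but off by 48 degrees
report("Laura", laura)  # a lower correlation, but off by only 7 degrees
```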
How can this be? Correlational measures leave out two impor-
tant components of accuracy. The first is getting the overall level
of the outcome right, and this is something on which Rob is
impaired. The second is ensuring that the variance of the predic-
tions is in harmony with the variance of the outcome, depending on
how strongly they are correlated (Campbell & Kenny, 1999).
Correlational measures miss both these components. However,
deviational measures, that is, ones that simply assess on average
how much predictions differ from reality, do take these two com-
ponents into account. We suspect that this fact, coupled with the
problem of test unreliability, is the reason the deviational measures
of metacognition we used in our studies mediated the link between
performance and miscalibration, whereas the correlational measure
used by Krueger and Mueller (2002) did not (see Footnote 1).

Footnote 1. Krueger and Mueller's (2002) operationalization of metacognitive accuracy is problematic on other grounds. As researchers in metacognition have discovered, different correlational measures of skill (e.g., Pearson's r, gamma) often produce very different results when applied to the exact same data (for an excellent discussion, see Schwartz & Metcalfe, 1994).

Figure 1. Regression of estimated performance on actual performance before and after correction for unreliability (based on data from Kruger & Dunning, 1999, Study 4).

Table 1
Comparison of the Prediction Skills of Two Hypothetical Weather Forecasters

Day          Actual temperature (°F)    Rob's forecast (°F)    Laura's forecast (°F)
Monday       70                         20                     65
Tuesday      80                         35                     75
Wednesday    60                         5                      70
Thursday     70                         20                     75
Friday       90                         50                     80
r                                       1.00                   .53
Average deviation from actual score     48                     7

Note that this point applies equally well to Krueger and Mueller's (2002) social-projection measure (how well others are doing) as it does to their metacognition measure (how well oneself is
doing). In our original studies, we suggested that highly skilled
individuals underestimate their comparative performance because
they have an inflated view of the overall performances of their
peers (as predicted by the false consensus effect). Counter to our
hypothesis, Krueger and Mueller did not find any evidence for a
social-projection problem among high performers. However, their
measure of social projection is irrelevant to our original assertion.
Of key importance is whether high performers overestimated how
well they thought their peers had performed overall. Krueger and
Mueller's measure, instead, focuses on the correlation between participants' confidence in their own responses across items and their confidence in their peers' responses. Note that an individual might have a very inflated view of the performances of his or her peers (or a very deflated one), but that this within-subject correlational measure would capture none of this misestimation, instead measuring only how individual-item self-confidence covaries with individual-item other-confidence.
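To make the point concrete, consider the toy calculation below (our own illustration in Python with NumPy; all values are invented): a respondent who rates peers a constant 30 points higher than warranted still shows a perfect within-subject correlation, so the correlational measure registers no social-projection error at all.

```python
import numpy as np

# One hypothetical respondent. Confidence, item by item, in one's own
# answers and in peers' answers (values invented for illustration).
self_confidence  = np.array([60.0, 40.0, 55.0, 35.0, 50.0])
other_confidence = self_confidence + 30.0   # peers judged uniformly 30 points better

# Within-subject correlation between self- and other-confidence across items.
# A constant inflation of the "other" estimates leaves this correlation at 1.0.
within_subject_r = np.corrcoef(self_confidence, other_confidence)[0, 1]

# A level-based measure instead compares the mean estimate of peers with
# peers' actual mean performance (value invented here).
actual_peer_mean = 50.0
overestimate_of_peers = other_confidence.mean() - actual_peer_mean

print(f"within-subject r = {within_subject_r:.2f}")                    # 1.00
print(f"overestimate of peers = {overestimate_of_peers:.0f} points")   # 28
```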
What Does Experimental Evidence Suggest?
To be fair, we believe that there is more to Krueger and
Mueller's (2002) regression effect argument than simple measurement error. After all, as we noted in our original article (Kruger & Dunning, 1999, p. 1124) and elsewhere (see Kruger, Savitsky,
& Gilovich, 1999), whenever two variables are imperfectly corre-
lated, extreme values on one variable are likely to be matched by
less extreme values of the other. In the present context, this means
that extremely poor performers are likely to overestimate their
performance to some degree.
For this reason, we collected additional data to directly test our
interpretation. The crucial test of the metacognitive account does
not come from demonstrating that regression effects cannot ex-
plain our data. Rather, the crucial test comes from experimentally
manipulating metacognitive skills and social projection to see
whether this results in improved calibration. This was the approach
we took in our original studies, and we believe the data we
obtained provide the most conclusive support for our own inter-
pretation and against Krueger and Mueller's (2002) regression–BTA interpretation. If our results were due merely to a regression
artifact, then we should have observed the same regressive corre-
lations regardless of whatever experimental manipulation we used.
However, we found in Studies 3b and 4 of our original article that
we could make the regression effect evaporate under experimental
conditions, exactly as predicted by our theoretical analysis.
In Study 4, for instance, we gave 140 participants a test of
logical reasoning and compared actual performance on the test
with perceived performance. Next, we asked participants to grade
their own test (i.e., to indicate which problems they thought they
had answered correctly and which they had answered incorrectly)
and to estimate their overall performance once more. Half of the
participants, however, did something else. Just prior to grading
their test, they completed a crash course on logical reasoning
adapted from Cheng, Holyoak, Nisbett, and Oliver (1986). What
we found was that participants who had received training (but only participants who had received training) became substantially
more calibrated with respect to their test performance. Incompetent
participants who had, just prior to training, overestimated their test
score by 5 points (out of 10) and their percentile score by 36
percentile points were then within 1 point of their actual test score
and within 17 points of their percentile score. Mediational analyses
revealed that this new-found accuracy was a direct result of the
increased metacognitive skill. In passing, Krueger and Mueller
(2002) took issue with the overall strategy we pursued in this
study, but provided no alternative account of the results of our
experimental manipulation.
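The mediational logic at work here can be illustrated with simulated data. The sketch below is a generic Baron-and-Kenny-style illustration in Python with NumPy, not the analysis reported in the original article; the variable names, effect sizes, and data are assumptions chosen only to show the pattern.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 140

# Simulated data (not the original data) built to follow the mediational
# logic: training improves metacognitive skill, which improves calibration.
training = rng.integers(0, 2, n).astype(float)              # 0 = control, 1 = trained
metacog  = 0.8 * training + rng.normal(0.0, 1.0, n)         # proposed mediator
miscalibration = -0.9 * metacog + rng.normal(0.0, 1.0, n)   # overestimation error

def coef(y, predictors):
    # OLS coefficient of the FIRST predictor, with an intercept included.
    X = np.column_stack([np.ones(len(y))] + list(np.atleast_2d(predictors)))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]

total_effect  = coef(miscalibration, training)
direct_effect = coef(miscalibration, np.vstack([training, metacog]))

print(f"total effect of training on miscalibration:        {total_effect:.2f}")
print(f"direct effect after controlling for metacognition: {direct_effect:.2f}")
# In a mediation pattern, the direct effect shrinks toward zero once the
# mediator (metacognitive skill) is controlled.
```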
We took a similar approach in demonstrating the role of false
consensus in the slight underestimation we observed with ex-
tremely skilled participants. If top performers underestimate them-
selves because of an inflated view of the comparison group, then
they (and only they) should become more calibrated if they are
given an accurate picture of the true skills of their peers. In
Study 3, that is exactly what we observed: After they were pro-
vided with a representative sample of tests that had been com-
pleted by their peers, top-scoring (but not bottom-scoring) partic-
ipants arrived at more accurate self-assessments. This, too, cannot
be explained by statistical regression artifact.
Krueger and Mueller (2002) took issue with this finding as well,
pointing out that most people increased their percentile estimates
after seeing the performances of their peers, and that "the increase [among high performers] . . . was not significantly larger than the increase among the poor performers" (p. 185). Actually, it was, but
this is not the issue: The hypothesis derived from the false con-
sensus account of the underestimation of top performers is not that
top-scoring participants will increase their self-estimates more
than will poor performers, but that top-scoring participants will
improve their self-estimates more than will poor performers, who
will show no overall improvement (that is, they will not lower their
self-estimates; see Footnote 2). This, too, is precisely what we observed.
Final Thoughts
We end on two final thoughts. First, we cannot help but notice
the obvious irony presented by this exchange. Krueger and Mueller
(2002) dismissed our original account of our data, and we have
spent a hefty portion of this reply dismissing theirs. The discerning
reader may have noticed that both camps seem to be rather con-
fident in their conclusions, although, given the contradictions,
someone must be wrong. Whoever is wrong, they do not seem to
know it.
Second, although we strongly believe, for the reasons outlined
in this reply, that regression alone cannot explain why the un-
skilled are unaware, we do not believe Krueger and Mueller's
(2002) alternative interpretation should be dismissed lightly. Re-
gression effects are notoriously difficult to spot but easy to misunderstand, by laypeople and researchers alike (Kahneman &
Tversky, 1973; Lawson, 2001; Nisbett & Ross, 1980). Although
regression effects cannot explain our original data, the simple fact
remains that more work needs to be done. No single study, or even
set of studies, can be taken as the final word on the issue, and it
remains to be seen which account (ours, theirs, or one yet to come) best explains why the unskilled are unaware.
Footnote 2. The interaction term from the 2 (quartile: top vs. bottom) × 2 (estimate: Time 1 vs. Time 2) analysis on participants' perceptions of their percentile ability was F(1, 34) = 4.54, p = .04, although this was not reported in our original article because it did not pertain to our hypothesis.
References
Campbell, D. T., & Kenny, D. A. (1999). A primer on regression artifacts. New York: Guilford.
Cheng, P. W., Holyoak, K. J., Nisbett, R. E., & Oliver, L. M. (1986). Pragmatic versus syntactic approaches to training deductive reasoning. Cognitive Psychology, 18, 293–328.
Kahneman, D., & Tversky, A. (1973). On the psychology of prediction. Psychological Review, 80, 237–251.
Krueger, J., & Mueller, R. A. (2002). Unskilled, unaware, or both? The better-than-average heuristic and statistical regression predict errors in estimates of own performance. Journal of Personality and Social Psychology, 82, 180–188.
Kruger, J., & Dunning, D. (1999). Unskilled and unaware of it: How difficulties in recognizing one's own incompetence lead to inflated self-assessments. Journal of Personality and Social Psychology, 77, 1121–1134.
Kruger, J., Savitsky, K., & Gilovich, T. (1999). Superstition and the regression effect. Skeptical Inquirer, 23, 24–29.
Lawson, T. J. (2001). Everyday statistical reasoning. Pacific Grove, CA: Wadsworth.
Metcalfe, J., & Shimamura, A. P. (1994). Metacognition: Knowing about knowing. Cambridge, MA: MIT Press.
Nisbett, R., & Ross, L. (1980). Human inference: Strategies and shortcomings of social judgment. Englewood Cliffs, NJ: Prentice-Hall.
Ross, L., Greene, D., & House, P. (1977). The false consensus effect: An egocentric bias in social perception and attribution processes. Journal of Experimental Social Psychology, 13, 279–301.
Schwartz, B. L., & Metcalfe, J. (1994). Methodological problems and pitfalls in the study of human metacognition. In J. Metcalfe & A. P. Shimamura (Eds.), Metacognition: Knowing about knowing (pp. 93–113). Cambridge, MA: MIT Press.
Received August 13, 2001
Accepted August 15, 2001
Two views have dominated theories of deductive reasoning. One is the view that people reason using syntactic, domain-independent rules of logic, and the other is the view that people use domain-specific knowledge. In contrast with both of these views, we present evidence that people often reason using a type of knowledge structure termed pragmatic reasoning schemas. In two experiments, syntactically equivalent forms of conditional rules produced different patterns of performance in Wason's selection task, depending on the type of pragmatic schema evoked. The differences could not be explained by either dominant view. We further tested the syntactic view by manipulating the type of logic training subjects received. If people typically do not use abstract rules analogous to those of standard logic, then training on abstract principles of standard logic alone would have little effect on selection performance, because the subjects would not know how to map such rules onto concrete instances. Training results obtained in both a laboratory and a classroom setting confirmed our hypothesis: Training was effective only when abstract principles were coupled with examples of selection problems, which served to elucidate the mapping between abstract principles and concrete instances. In contrast, a third experiment demonstrated that brief abstract training on a pragmatic reasoning schema had a substantial impact on subjects' reasoning about problems that were interpretable in terms of the schema. The dominance of pragmatic schemas over purely syntactic rules was discussed with respect to the relative utility of both types of rules for solving real-world problems.