Overconfidence Among Beginners: Is a Little Learning a Dangerous Thing?

Authors: Carmen Sanchez (Cornell University) and David Dunning (University of Michigan)

Abstract

Across 6 studies we investigated the development of overconfidence among beginners. In 4 of the studies, participants completed multicue probabilistic learning tasks (e.g., learning to diagnose “zombie diseases” from physical symptoms). Although beginners did not start out overconfident in their judgments, they rapidly surged to a “beginner’s bubble” of overconfidence. This bubble was traced to exuberant and error-filled theorizing about how to approach the task formed after just a few learning experiences. Later trials challenged and refined those theories, leading to a temporary leveling off of confidence while performance incrementally improved, although confidence began to rise again after this pause. In 2 additional studies we found a real-world echo of this pattern of overconfidence across the life course. Self-ratings of financial literacy surged among young adults, then leveled off among older respondents until late adulthood, where they began to rise again, with actual financial knowledge all the while rising more slowly, consistently, and incrementally throughout adulthood. Hence, when it comes to overconfident judgment, a little learning does appear to be a dangerous thing. Although beginners start with humble self-perceptions, with just a little experience their confidence races ahead of their actual performance.
Keywords: confidence, learning, metacognition, novices, overconfidence
Supplemental materials: http://dx.doi.org/10.1037/pspa0000102.supp
A little learning is a dangerous thing;
Drink deep, or taste not the Pierian spring;
There shallow draughts intoxicate the brain,
And drinking largely sobers us again.
—Alexander Pope (1711)
Of all the errors and biases people make in self and social judgment, overconfidence arguably shows the widest range in its implications and the most trouble in its potential costs. Overconfidence occurs when one overestimates the chance that one’s judgments are accurate or that one’s decisions are correct (Dunning, Griffin, Milojkovic, & Ross, 1990; Dunning, Heath, & Suls, 2004; Fischhoff, Slovic, & Lichtenstein, 1977; Moore & Healy, 2008; Russo & Schoemaker, 1992; Vallone, Griffin, Lin, & Ross, 1990).
Research shows that the costs associated with overconfident judgments are broad and substantive. Overconfidence leads to an overabundance of risk-taking (Hayward, Shepherd, & Griffin, 2006). It prompts stock market traders to trade too often, typically to their detriment (Barber & Odean, 2000), and people to invest in decisions leading to too little profit (Camerer & Lovallo, 1999; Hayward & Hambrick, 1997). In medicine, it contributes to diagnostic error (Berner & Graber, 2008). In negotiation, it leads people to unwise intransigence and conflict (Thompson & Loewenstein, 1992). In extreme cases, it can smooth the tragic road to war (Johnson, 2004).
To be sure, overconfidence does have its advantages. Confident people, even overconfident ones, are esteemed by their peers (Anderson, Brion, Moore, & Kennedy, 2012). It may also allow people to escape the stress associated with pessimistic thought (Armor & Taylor, 1998), although it does suppress the delight associated with success (McGraw, Mellers, & Ritov, 2004). However, as Nobel laureate Daniel Kahneman has put it, if he had a magic wand to eliminate just one judgmental bias from the world, overconfidence would be the one he would banish (Kahneman, 2011).
In this article, we study a circumstance most likely to produce overconfidence, namely, being a beginner at some task or skill. We trace how well confidence tracks actual performance from the point where people begin their involvement with a task to better describe when confidence adheres to performance and when it veers into unrealistic and overly positive appraisal—that is, how closely the subjective learning curve fits the objective one.
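To make that comparison concrete, the gap can be expressed as mean confidence minus mean accuracy within successive blocks of trials. The following is a minimal illustrative sketch, not the analysis code from these studies; the trial data, block size, and function names are hypothetical.

def learning_curves(trials, block_size=10):
    """trials: list of (confidence in [0, 1], correct as 0 or 1) tuples,
    in the order the trials were completed."""
    curves = []
    for start in range(0, len(trials), block_size):
        block = trials[start:start + block_size]
        mean_conf = sum(conf for conf, _ in block) / len(block)
        mean_acc = sum(correct for _, correct in block) / len(block)
        # Positive gap = overconfidence; negative gap = underconfidence.
        curves.append((mean_conf, mean_acc, mean_conf - mean_acc))
    return curves

# Toy data: confidence surges after a few trials while accuracy improves
# only gradually, the hypothesized "beginner's bubble."
example = [(0.50, 0), (0.55, 1), (0.80, 0), (0.85, 1), (0.90, 0),
           (0.90, 1), (0.85, 1), (0.80, 0), (0.80, 1), (0.80, 1)]
for conf, acc, gap in learning_curves(example, block_size=5):
    print(f"confidence={conf:.2f}  accuracy={acc:.2f}  gap={gap:+.2f}")

Plotting the first two columns against trial block yields the subjective and objective learning curves; the third column is the overconfidence gap between them.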
Popular culture suggests that beginners are pervasively plagued by overconfidence, and even predicts the specific time-course and psychology underlying that overconfidence. According to the popular “four stages of competence” model, widely discussed on the Internet (e.g., Adams, 2017; Pateros, 2017; Wikipedia, 2017), beginners show a great deal of error and overconfidence that dissipates as they acquire a complex skill. At first, people are naïve about their deficits and are best described as “unconscious incompetents.”
This article was published Online First November 2, 2017.
Carmen Sanchez, Department of Psychology, Cornell University; David Dunning, Department of Psychology, University of Michigan.
Correspondence concerning this article should be addressed to Carmen Sanchez, Department of Psychology, Cornell University, Uris Hall, Ithaca, NY 14853-7601. E-mail: cjs386@cornell.edu
Journal of Personality and Social Psychology, 2018, Vol. 114, No. 1, 10–28. © 2017 American Psychological Association. 0022-3514/18/$12.00 http://dx.doi.org/10.1037/pspa0000102
... This finding may stem from the control group lacking the opportunity to test and refine their knowledge, falsely inflating their abilities, leading them to confuse their familiarity with presented information for the true understanding of content [13]. Previous work has demonstrated the idea of a "beginner's bubble" [49]. Learners are initially cautious and lack comfort in what they know before education. ...
... Learners are initially cautious and lack comfort in what they know before education. Small increases in learning may result in large leaps of confidence, which may not accurately align with actual knowledge increase [49], likely explaining the control group's negative correlation. Continued education and challenging and refining of knowledge lead to a leveling of confidence with incremental performance improvement. ...
... This obligates teaching staff to reestablish the smooth functioning of the class and mitigate any disruptions that have occurred in the learning process of other students (He et al., 2018;Sheldon et al., 2014). In industry, the expectation is that individuals should behave in a professional manner (Sanchez et al., 2018), otherwise careers will be abruptly curtailed (Elliot, 2021;Lo Presti and Elia, 2020). In workplaces, disruptive behaviour can reduce the quality of decisions and negatively impacts organisational climate and performance (Meissner et al., 2018). ...
... In the literature, the links between critical feedback and student as an individual have been intensively investigated (e.g., Sanchez et al., 2018;Gibbs et al., 2017;Sheldon et al., 2014). However, the extant literature has overlooked how overconfidence in new learning skills manifested at individual level can be mitigated by feedback provided at group level (Breslin, 2021). ...
Preprint
Full-text available
One of the challenges that tertiary sector educators need to consider is how to best prepare students for the challenges they will confront in the workplace. An example of this is the high level of overconfidence in newly learnt skills shown by some of them when they over-evaluate their own work. Highlighting flaws in their work may cause negative emotions to surface and can contribute to a detrimental classroom environment. Overconfidence evident in low-performing students, called the Dunning-Kruger effect (DKE), has a negative impact on the education process, as it forces lecturers to manage unnecessary conflicts instead of staying focused on the teaching process. This study aims to better understand the DKE in educational contexts and identify ways to mitigate it to maintain a convivial classroom environment while helping these students understand that their overconfidence may compromise their future careers. We tested seven hypotheses in a longitudinal study of 137 student projects. The methodology deployed for this paper is empirical and quantitative. Results suggest that while it is not realistic to eliminate the DKE entirely, student groups with heterogeneous characteristics can ameliorate this problem, and it can be further diminished when emergent leadership is present within the project groups.
... Future research can formally integrate and unify these various explanations for generating differences in overprecision. In addition, future research can explore whether my results hold across other types of probabilistic distributions and how results change with learning (Sanchez & Dunning, 2018). For example, my results suggest that for beginners making decisions related to problems with aleatory uncertainty, visualizing outcomes could create a “beginner’s bubble,” raising distribution confidence (which can be beneficial for scenarios characterized by overgeneralization). ...
Article
Full-text available
Despite the volume of research examining overprecision, the underlying drivers of individual differences in probability estimations remain elusive. I propose that visual processing through mental simulation of small samples is a cognitive mechanism influencing the relative degrees of over and underprecision. I conducted three preregistered experiments contrasting probability estimates between a control group and a treatment group, where participants were prompted to engage in greater visual processing by mentally simulating outcomes. In Study 1, participants estimate a binomial distribution of a ball drop machine. I find that engaging in visual simulation led to higher estimates of values near the distributions’ center, while the control group provided higher estimates for the distributions’ tails. Although the control group is more underprecise, visual simulation could arguably increase estimation accuracy and not overconfidence. To separate these effects, I modify the ball drop mechanism in Study 2 to produce a flatter distribution. The results show that the control group is well-calibrated, but the visual simulation group is overprecise. Study 3 investigates a boundary condition where participants mentally simulated multiple outcomes. The results demonstrate that an increase in the variance of imagined outcomes lowers subjective estimates near the center of the distribution, diminishing the treatment effect.
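For context on the estimation task described above, a ball drop machine of the Galton-board type can be modeled as a binomial distribution. The sketch below is only illustrative and assumes n = 10 left/right steps with p = 0.5; the actual machine parameters used in these experiments are not given here.

from math import comb

def binomial_pmf(n, p):
    # Probability that a ball lands in bin k after n independent steps,
    # each going "right" with probability p.
    return [comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)]

for k, prob in enumerate(binomial_pmf(10, 0.5)):
    print(f"bin {k:2d}: {prob:.3f}")

# Overprecision corresponds to judged distributions that are too narrow:
# too much probability assigned to the center bins and too little to the
# tails, relative to these true values.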
... In educational contexts, for example, they can provide individual feedback and more personalized instruction [Ka23], and they allow for deeper processing of learning material [Ro23b]. When students, therefore, see chatbots as easy to use, they might underestimate chatbots' affordances and overestimate their abilities to fully use them [SD18]. Additionally, they might perceive chatbots as an "easy" medium overall, overlooking functionalities that require a certain level of mental effort [Sa84]. ...
Conference Paper
Full-text available
Understanding how to use generative AI can greatly benefit the learning process. Despite available concepts for teaching "how to prompt", little empirical evidence exists on students' current micro-level chatbot use that would justify a need for instruction on how to prompt. This pilot study investigates students' chatbot use in an authentic setting. Findings reveal general interaction patterns, including a notable lack of conversational patterns, indicating an underutilization of this central chatbot capability. However, despite having no formal instruction, some students discovered specific chatbot affordances. While basic prompting skills are displayed or acquired during exploration, explicit training on effective chatbot interaction could enhance skillful chatbot use. This training should integrate cognitive and metacognitive strategies as well as technological knowledge, helping students leverage the technology's full potential.
... Kruger and Dunning (1999) mentioned in their initial discussion of the effect that there could be boundary conditions, in the form of a minimum threshold of learning or experience, that must be crossed before excessive confidence in poor performance is displayed [15]. It has been shown that even a little learning experience could lead individuals into a situation where they become some of the most susceptible to the Dunning-Kruger effect, leading them to overestimate their performance inappropriately [65]. The first-year cohorts observed in this study undoubtedly belong to exactly this group. ...
[Fig. 1: Scatterplot of realistic self-assessment (difference between self-assessed score and actual score) against actual score]
Article
Full-text available
Introduction: The ability to self-assess is a crucial skill in identifying one’s own strengths and weaknesses and in coordinating self-directed learning. The Dunning-Kruger effect occurs when limited knowledge causes individuals to overestimate their competence and underestimate others’, leading to poor self-assessment and unrecognized incompetence. To serve as a foundation for developing strategies to improve self-assessment, the self-assessment abilities of first-semester students were assessed.
Methods: In the final weeks of the summer 2021, winter 2021/22, and summer 2022 semesters, the academic performance (oral anatomy exam) of first-semester students was assessed (0–15 points). Before the exam results were announced, students were asked to self-assess their performance.
Results: Exam scores (M = 10.64, SD = 2.95) and self-assessed scores (M = 10.38, SD = 2.54) were comparable. The difference between them, as a measure of self-assessment ability, ranged from −9 to +9 points (M = −0.26, SD = 2.59). Among participants (N = 426), 18.5% assessed themselves accurately, 35.5% overestimated, and 46.0% underestimated their performance. The correlation between actual score and self-assessment was ρ = −0.590 (p < 0.001), reflecting the Dunning-Kruger effect. When separated by gender, the correlation for females was ρ = −0.591 (p < 0.001), and for males ρ = −0.580 (p < 0.001).
Conclusions: Realistic self-assessment is a challenge for first-semester students. The data indicate that females tend to overestimate their performance while males underestimate theirs. A pronounced Dunning-Kruger effect is evident in both genders, with significant negative correlations between self-assessment and actual performance. There are several reasons for the occurrence of the Dunning-Kruger effect. Considering that the COVID-19 pandemic influenced learning environments, collaborative learning was significantly restricted. The lack of opportunities for comparison could potentially lead to unrealistic self-assessment.
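The self-assessment measure described in this abstract (self-assessed score minus actual score, and its rank correlation with the actual score) can be computed along the following lines. This is a minimal sketch with invented scores, not the study's data or analysis code.

from scipy.stats import spearmanr

# Hypothetical oral-exam scores (0-15 points) and self-assessed scores.
actual = [7, 9, 10, 11, 12, 13, 14, 15, 8, 10]
self_assessed = [10, 11, 11, 11, 12, 12, 13, 13, 10, 10]

# Positive difference = overestimation, negative = underestimation.
diff = [s - a for s, a in zip(self_assessed, actual)]

rho, p_value = spearmanr(actual, diff)
print(f"Spearman rho = {rho:.3f}, p = {p_value:.3f}")
# A negative rho (low scorers overestimate, high scorers underestimate)
# is the pattern the study interprets as the Dunning-Kruger effect.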
... This leads into the positive feedback bias (i.e. where teachers overestimate their abilities if students provide positive feedback that is more a reflection of the teacher's likability than their teaching effectiveness) (Sanchez & Dunning, 2018). On top of that, there may be overreliance on exam results where teachers might assume that high student exam scores indicate effective teaching, when in reality, the scores could be more reflective of students' test-taking skills rather than deep language understanding (Blanch, 2017). ...
Article
The Dunning-Kruger Effect (DKE) is a cognitive bias where individuals with limited ability or knowledge tend to overestimate their own competencies, while those who are more skilled often underestimate their capabilities. Identified in a seminal 1999 study by psychologists David Dunning and Justin Kruger, this phenomenon underscores essential concepts in metacognition – the awareness and understanding of one’s thought processes. The DKE manifests across various domains, particularly in educational contexts, leading to significant implications for students, teachers, and administrators within English Language Teaching (ELT). This paper explores the origins and key findings surrounding the DKE, illustrating its detrimental impact on self-assessment and feedback mechanisms. It addresses students’ overconfidence or self-doubt in language proficiency, the challenges teachers face in evaluating their instructional effectiveness, and the potential pitfalls administrators encounter in decision-making and policy implementation. Additionally, the paper discusses the interplay of related biases, such as optimism bias and cognitive dissonance, which further complicate accurate self-evaluation. To combat these challenges, it advocates for enhanced metacognitive training, constructive feedback strategies, and a growth mindset for all stakeholders involved. Ultimately, fostering self-awareness and a reflective practice in ELT settings can lead to improved learning outcomes and a more productive educational environment.
... They can also provide important insights into exclusions and biases in the research process that may require explicit justification (Rad et al., 2018). Another way to think about these kinds of recommendations is that they advance intellectual humility, enabling scientists to better understand the limits of our knowledge and what we do not yet know (Griffin & Tversky, 1992; Sanchez & Dunning, 2018). ...
Article
Scholars have been working through multiple avenues to address longstanding and entrenched patterns of global and racial exclusion in psychology and academia more generally. As part of the Society for Personality and Social Psychology’s efforts to enhance inclusive excellence in its journals, the Anti Colorism/Eurocentrism in Methods and Practices (ACEMAP) task force worked to develop recommendations and resources to counteract racism and global exclusion in standard publication practices. In this paper, the task force describes a structure and process we developed for conducting committee work that centers marginalized perspectives while mitigating cultural taxation. We then describe our recommendations and openly accessible resources (e.g., resources for inclusive reviewing practices, writing about constraints on generalizability, drafting a globally inclusive demographic information survey, inclusive citation practices, and improving representation among editorial gatekeeping positions; recommendations and resource links are provided in Table 3). These recommendations and resources are both (a) tailored for a particular set of journals at a particular time and (b) useful as a foundation that can be continually adapted and improved for other journals and going forward. This paper provides concrete plans for readers looking to enhance inclusive excellence in their committee work, authorship, reviewing, and/or editing.
Article
Full-text available
The Dunning-Kruger Effect (DKE) is a metacognitive phenomenon of illusory superiority in which individuals who perform a task poorly believe that they perform it better than others, whereas individuals who perform it very well believe that they perform worse than others. The Dunning-Kruger effect refers to the observation that incompetent people are often ill-equipped to recognize their own incompetence. Here we investigate the potential for the Dunning-Kruger effect in higher-order reasoning and, in particular, focus on the relative effectiveness of metacognitive monitoring among biased reasoners.
Article
Assigning responsibility for a project’s success or failure is key to organizational performance, yet attribution fallacies often interfere. Our experimental study (N = 339) shows that team members mistakenly attribute too much influence over task outcomes to their leaders. Despite task outcomes being randomly determined by easy or hard task difficulty rather than by leadership, leaders received undue credit or blame. Leaders assessed their teams more negatively in difficult tasks, except for female leaders, who were more lenient than men in assessing both conditions. Leaders’ self-assessments did not differ between experimental conditions, confirming their self-motivated evaluation; moreover, completing an easy task boosted their confidence for harder challenges. Our study shows that attributional errors manifest differently in the evaluation of leaders and followers and demonstrates that success in simpler tasks can increase leaders’ confidence, potentially leading to riskier behaviors.
Chapter
The chapter on robo-advisors in investment management explores the multifaceted landscape of automated investment platforms, shedding light on the hurdles and apprehensions associated with their integration into the financial industry. Delving into issues such as the absence of human touch, regulatory compliance complexities, technological risks, algorithmic challenges, market volatility considerations, and the imperative need for client education, the chapter offers a comprehensive examination of the challenges and concerns that both investors and industry participants encounter in the realm of robo-advisors. By delineating these challenges, the chapter aims to contribute to a nuanced understanding of the dynamics shaping the adoption and evolution of robo-advisory services, robust risk management strategies to navigate market uncertainties, and the essential role of client education in fostering understanding and trust.
Article
Full-text available
People tend to hold overly favorable views of their abilities in many social and intellectual domains. The authors suggest that this overestimation occurs, in part, because people who are unskilled in these domains suffer a dual burden: Not only do these people reach erroneous conclusions and make unfortunate choices, but their incompetence robs them of the metacognitive ability to realize it. Across 4 studies, the authors found that participants scoring in the bottom quartile on tests of humor, grammar, and logic grossly overestimated their test performance and ability. Although their test scores put them in the 12th percentile, they estimated themselves to be in the 62nd. Several analyses linked this miscalibration to deficits in metacognitive skill, or the capacity to distinguish accuracy from error. Paradoxically, improving the skills of participants, and thus increasing their metacognitive competence, helped them recognize the limitations of their abilities.
Article
Full-text available
Even when Ss fail to recall a solicited target, they can provide feeling-of-knowing (FOK) judgments about its availability in memory. Most previous studies addressed the question of FOK accuracy, only a few examined how FOK itself is determined, and none asked how the processes assumed to underlie FOK also account for its accuracy. The present work examined all 3 questions within a unified model, with the aim of demystifying the FOK phenomenon. The model postulates that the computation of FOK is parasitic on the processes involved in attempting to retrieve the target, relying on the accessibility of pertinent information. It specifies the links between memory strength, accessibility of correct and incorrect information about the target, FOK judgments, and recognition memory. Evidence from 3 experiments is presented. The results challenge the view that FOK is based on a direct, privileged access to an internal monitor.
Article
Full-text available
Exemplar and connectionist models were compared on their ability to predict overconfidence effects in category learning data. In the standard task, participants learned to classify hypothetical patients with particular symptom patterns into disease categories and reported confidence judgments in the form of probabilities. The connectionist model asserts that classifications and confidence are based on the strength of learned associations between symptoms and diseases. The exemplar retrieval model (ERM) proposes that people learn by storing examples and that their judgments are often based on the first example they happen to retrieve. Experiments 1 and 2 established that overconfidence increases when the classification step of the process is bypassed. Experiments 2 and 3 showed that a direct instruction to retrieve many exemplars reduces overconfidence. Only the ERM predicted the major qualitative phenomena exhibited in these experiments.
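The qualitative contrast drawn in this abstract, that judgments anchored on a single retrieved exemplar come out more confident than judgments based on several, can be illustrated with a toy sketch. This is not the exemplar retrieval model as formally specified in the cited work; the symptom patterns, similarity function, and retrieval rule below are all hypothetical.

import random
from collections import Counter

# Hypothetical stored exemplars: (symptom pattern, disease) pairs.
memory = [((1, 0, 1), "A"), ((1, 0, 0), "A"), ((1, 1, 1), "B"),
          ((0, 1, 1), "B"), ((1, 0, 1), "B"), ((1, 0, 1), "A")]

def similarity(x, y):
    return sum(a == b for a, b in zip(x, y))

def judge(probe, k):
    # Retrieve the k most similar exemplars (ties broken at random),
    # classify by the majority disease, and report the share of retrieved
    # exemplars that agree as the confidence judgment.
    ranked = sorted(memory, key=lambda ex: (-similarity(probe, ex[0]), random.random()))
    retrieved = [disease for _, disease in ranked[:k]]
    label, votes = Counter(retrieved).most_common(1)[0]
    return label, votes / k

random.seed(0)
print(judge((1, 0, 1), k=1))  # one exemplar: confidence is always 1.0
print(judge((1, 0, 1), k=5))  # several exemplars: confidence is tempered

With k = 1 the confidence judgment cannot fall below 1.0, which mirrors the overconfidence the abstract attributes to judgments based on the first retrieved example; sampling more exemplars moderates it.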
Article
Full-text available
We propose that an important determinant of judged confidence is the evaluation of evidence that is unknown or missing, and overconfidence is often driven by the neglect of unknowns. We contrast this account with prior research suggesting that overconfidence is due to biased processing of known evidence in favor of a focal hypothesis. In Study 1, we asked participants to list their thoughts as they answered two-alternative forced-choice trivia questions and judged the probability that their answers were correct. Participants who thought more about unknowns were less overconfident. In Studies 2 and 3, we asked participants to list unknowns before assessing their confidence. “Considering the unknowns” reduced overconfidence substantially and was more effective than the classic “consider the alternative” debiasing technique. Moreover, considering the unknowns selectively reduced confidence in domains where participants were overconfident but did not affect confidence in domains where participants were well-calibrated or underconfident. Data, as supplemental material, are available at https://doi.org/10.1287/mnsc.2016.2580 . This paper was accepted by Yuval Rottenstreich, judgment and decision making.