Skilled and unskilled, but still aware of it: How perceptions of difficulty drive miscalibration in relative comparisons

Ross School of Business, University of Michigan, Ann Arbor, MI 48109, USA.
Journal of Personality and Social Psychology (Impact Factor: 5.08). 02/2006; 90(1):60-77. DOI: 10.1037/0022-3514.90.1.60
Source: PubMed


People are inaccurate judges of how their abilities compare to others'. J. Kruger and D. Dunning (1999, 2002) argued that unskilled performers in particular lack metacognitive insight about their relative performance and disproportionately account for better-than-average effects. The unskilled overestimate their actual percentile of performance, whereas skilled performers more accurately predict theirs. However, not all tasks show this bias. In a series of 12 tasks across 3 studies, the authors show that on moderately difficult tasks, best and worst performers differ very little in accuracy, and on more difficult tasks, best performers are less accurate than worst performers in their judgments. This pattern suggests that judges at all skill levels are subject to similar degrees of error. The authors propose that a noise-plus-bias model of judgment is sufficient to explain the relation between skill level and accuracy of judgments of relative standing.
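The noise-plus-bias account can be illustrated with a short simulation. The sketch below is illustrative only: the bias and noise parameters are assumed values, not estimates from the paper. It shows how a single shared bias plus symmetric random error, applied identically to judges at every skill level, can still produce larger overestimation among the worst performers than among the best, because percentile estimates are bounded at 0 and 100.

```python
import random
import statistics

random.seed(42)

def simulate_percentile_estimates(n=1000, bias=15.0, noise_sd=20.0):
    """Noise-plus-bias sketch: every judge perceives their true percentile
    with the same shared shift (bias) and the same random error (noise),
    regardless of skill level. Estimates are clamped to the 0-100 scale."""
    true_pcts = [100.0 * i / (n - 1) for i in range(n)]
    estimates = []
    for true_pct in true_pcts:
        est = true_pct + bias + random.gauss(0.0, noise_sd)
        estimates.append(min(100.0, max(0.0, est)))  # bounded percentile scale
    return true_pcts, estimates

true_pcts, estimates = simulate_percentile_estimates()

# Compare judgment error (estimate minus truth) for the bottom and top
# quartiles of actual performance.
n = len(true_pcts)
bottom_errors = [estimates[i] - true_pcts[i] for i in range(n // 4)]
top_errors = [estimates[i] - true_pcts[i] for i in range(3 * n // 4, n)]

print(f"bottom quartile mean error: {statistics.mean(bottom_errors):+.1f}")
print(f"top quartile mean error:    {statistics.mean(top_errors):+.1f}")
```

The clamp to the 0–100 range is what makes identical error distributions look asymmetric: low performers have little room to underestimate and high performers have little room to overestimate, so the bottom quartile's mean error comes out more positive than the top quartile's even though every judge is equally (in)accurate.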

Available from: Katherine A. Burson
    • "Decades of research document the tendency to self-enhance (Greenwald, 1980; Sedikides and Strube, 1997), with most people inflating their standing on positive attributes ranging from intelligence to ability to morality (Alicke, 1985; Taylor and Brown, 1988). Much of the empirical work on biased self-evaluations has explored the motivation for overestimating our own abilities or viewing ourselves as better than we truly are (e.g., Burson et al., 2006). This motivation is so strong that most people ignore or rationalize negative information about themselves to maintain a positive self-image (Pyszczynski and Greenberg, 1987; Kunda, 1990; Chance and Norton, 2010)."
    ABSTRACT: People demonstrate an impressive ability to self-deceive, distorting misbehavior to reflect positively on themselves-for example, by cheating on a test and believing that their inflated performance reflects their true ability. But what happens to self-deception when self-deceivers must face reality, such as when taking another test on which they cannot cheat? We find that self-deception diminishes over time only when self-deceivers are repeatedly confronted with evidence of their true ability (Study 1); this learning, however, fails to make them less susceptible to future self-deception (Study 2).
    Frontiers in Psychology 09/2015; 6:1075. DOI:10.3389/fpsyg.2015.01075 · 2.80 Impact Factor
    • "In addition, Burson, Larrick, and Klayman (2006) demonstrated that task difficulty significantly restrains the accuracy of metacognitive judgments for both skilled (good performers) and unskilled students (poor performers). However, the results of Burson et al.'s (2006) study as well as the results of a study conducted by Hacker, Bol, and Bahbahani (2008) indicated that unskilled students are more likely to overestimate their performance than skilled students (for more details regarding the "unskilled-unaware hypothesis", see Kruger & Dunning, 2002). It should, however, be mentioned that students base their judgments on subjective perceptions of task difficulty rather than on objective difficulty of the tasks."
    ABSTRACT: Inaccurate judgments of task difficulty and invested mental effort may negatively affect how accurately students monitor their own performance. When students are not able to accurately monitor their own performance, they cannot control their learning effectively (e.g., allocate adequate mental effort and study time). Although students' judgments of task difficulty and invested mental effort are closely related to their study behaviors, it is still an open question how the accuracy of these judgments can be improved in learning from problem solving. The present study focused on the impact of three types of instructional support on the accuracy of students' judgments of difficulty and invested mental effort in relation to their performance while learning genetics in a computer-based environment. Sixty-seven university students with different prior knowledge received either incomplete worked-out examples, completion problems, or conventional problems. Results indicated that lower prior knowledge students performed better with completion problems, while higher prior knowledge students performed better with conventional problems. Incomplete worked-out examples resulted in an overestimation of performance, that is, an illusion of understanding, whereas completion and conventional problems showed neither over- nor underestimation. The findings suggest that completion problems can be used to avoid students' misjudgments of their competencies.
    Contemporary Educational Psychology 01/2015; 41. DOI:10.1016/j.cedpsych.2015.01.001 · 2.20 Impact Factor
    • "The final analysis sought to ascertain that the obtained pattern of findings was not specific to the particular measure of overconfidence used, namely, a numerical difference between self- and listener-based ratings expressed as a proportion on a 9-point ordinal scale. In line with previous psychological research on self-assessment (e.g., Burson et al., 2006; Kruger & Dunning, 1999), rated accent and comprehensibility values were first rank-ordered and then expressed as percentile scores by subtracting listener-rated performance from speakers' own estimates to derive a percentile-based measure of overconfidence. The resulting overconfidence scores matched closely the original measure of overconfidence for both accent, r(132) = .98,"
    ABSTRACT: This study targeted the relationship between self- and other-assessment of accentedness and comprehensibility in second language (L2) speech, extending prior social and cognitive research documenting weak or non-existing links between people's self-assessment and objective measures of performance. Results of two experiments (N = 134) revealed mostly inaccurate self-assessment: speakers at the low end of the accentedness and comprehensibility scales overestimated their performance; speakers at the high end of each scale underestimated it. For both accent and comprehensibility, discrepancies in self- versus other-assessment were associated with listener-rated measures of phonological accuracy and temporal fluency but not with listener-rated measures of lexical appropriateness and richness, grammatical accuracy and complexity, or discourse structure. Findings suggest that inaccurate self-assessment is linked to the inherent complexity of L2 perception and production as cognitive skills and point to several ways of helping L2 speakers align or calibrate their self-assessment with their actual performance.
    Bilingualism 01/2015; DOI:10.1017/S1366728914000832 · 1.71 Impact Factor
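The percentile-based overconfidence measure described in the excerpt above can be sketched in a few lines: convert self-ratings and listener ratings to percentile ranks within the sample, then subtract the listener-based percentile from the self-based one. The ratings below are hypothetical 9-point values invented for illustration, not data from the study.

```python
def percentile_scores(ratings):
    """Percentile rank of each rating: the percentage of the other
    ratings in the sample that fall strictly below it (0-100 scale)."""
    n = len(ratings)
    return [100.0 * sum(1 for other in ratings if other < r) / (n - 1)
            for r in ratings]

# Hypothetical ratings for five speakers on a 9-point scale.
self_ratings = [8, 6, 7, 4, 9]
listener_ratings = [5, 6, 4, 7, 8]

self_pct = percentile_scores(self_ratings)
listener_pct = percentile_scores(listener_ratings)

# Positive values indicate overconfidence (self-percentile exceeds
# listener-assigned percentile); negative values, underconfidence.
overconfidence = [s - l for s, l in zip(self_pct, listener_pct)]
print(overconfidence)  # → [50.0, -25.0, 50.0, -75.0, 0.0]
```

Because both sets of ratings are converted to the same percentile scale before subtraction, the measure compares relative standing rather than raw scale values, which is what makes it comparable to the proportion-based measure the excerpt mentions.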