Social and Personality Psychology Compass 2/1 (2008): 346–360, 10.1111/j.1751-9004.2007.00031.x
Faulty Self-Assessment: Why Evaluating
One’s Own Competence Is an Intrinsically
Difficult Task
Travis J. Carter* and David Dunning
Cornell University
Abstract
People’s perception of their competence often diverges from their true level of
competence. We argue that people have such an erroneous view of their competence
because self-evaluation is an intrinsically difficult task. People live in an information
environment that does not contain all the data they need for accurate self-evaluation.
The information environment is insufficient in two ways. First, when making self-
judgments, people lack crucial categories of information necessary to reach accurate
evaluations. Second, although people receive feedback over time that could correct
faulty self-assessments, this feedback is often biased, difficult to recognize, or
otherwise flawed. Because of the difficulty in making inferences based on such
limited and misleading data, it is unreasonable to expect that people will prove
accurate in judgments of their skills.
Know yourself. Don’t accept your dog’s admiration as conclusive evidence that
you are wonderful.
– Ann Landers, American advice columnist, 1918–2002
Ann Landers comes from a long line of philosophers, psychologists, social
commentators, and advice columnists who have exhorted people to gain an
accurate vision of themselves. The rewards for doing so are obvious. To the
extent that people know their strengths, they can make profitable decisions
about how to spend their time and apply their efforts, such as choosing the
best career in which to spend their lives. Furthermore, to the extent that
people know their weaknesses, they can avoid situations that might lead
to costly mistakes. Better yet, they can work on those shortcomings to rid
themselves of them.
However, although the exhortation to ‘know oneself’ has a long and
venerable history, recent investigations in behavioral science paint a vexing
and troubling portrait about people’s success at self-insight. Such research
increasingly shows that people are not very good at assessing their competence
and character accurately. They often hold self-perceptions that wander a
good deal away from the reality of themselves (for recent reviews, see
Dunning, 2005; Dunning, Heath, & Suls, 2004).
For example, correlational studies show that the perceptions people hold
of their competence are typically related to their actual performance, at best,
to only a modest degree (Falchikov & Boud, 1989; Harris & Schaubroeck,
1988; Mabe & West, 1982). Often, the relationship between perception and
reality of self is quite weak or even evaporates completely. For example,
what consumers think they know about their purchases correlates only
moderately with what they really know (Alba & Hutchinson, 2000). How
public health workers rate their understanding of plans to respond to a
community-wide disaster (such as a bioterrorist attack) correlates only
0.34 with their actual level of understanding (Kerby, Brand, Johnson, &
Ghouri, 2005). Medical students’ evaluations of their communication skills,
as they complete their training, bear little relationship to how their
supervisors and their patients rate them, although supervisors and patients
tend to agree with each other’s evaluations substantially (Millis et al., 2002).
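To convey how weak such correlations are in practical terms, consider the
following sketch (a hypothetical simulation added purely for illustration, not
a reanalysis of any cited data set). It generates self-ratings that correlate
0.34 with actual scores; a link of that size leaves roughly 88% of the variance
in actual performance unexplained, and a person who rates himself or herself
above the median is truly above it only about 60% of the time.

    import random
    import math

    def simulate_pairs(r, n=100_000, seed=1):
        # Simulate n (actual, self-rated) standard-normal score pairs whose
        # correlation is r, and summarize how weakly one predicts the other.
        rng = random.Random(seed)
        pairs = []
        for _ in range(n):
            actual = rng.gauss(0, 1)
            noise = rng.gauss(0, 1)
            self_rating = r * actual + math.sqrt(1 - r ** 2) * noise
            pairs.append((actual, self_rating))
        variance_explained = r ** 2
        above_both = sum(1 for a, s in pairs if a > 0 and s > 0)
        above_self = sum(1 for _, s in pairs if s > 0)
        return variance_explained, above_both / above_self

    r2, hit_rate = simulate_pairs(0.34)
    print(f"variance explained: {r2:.2f}")                                # about 0.12
    print(f"P(truly above median | rated self above median): {hit_rate:.2f}")  # about 0.61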
Beyond this, people also tend to be overconfident in their skill and
expertise, providing rosy judgments of self that are not or cannot be true.
For example, people on average tend to say they are more invulnerable to
disease than the average person – although it is impossible for the average
person to be ‘above average’ in invulnerability (Larwood, 1978). People also
overpredict the occurrence of positive, and underpredict the occurrence
of negative, events in their lives (Weinstein, 1980). Lawyers, for example,
overestimate the likelihood that they will win the case they are currently
working on (Loftus & Wagenaar, 1988). Software developers chronically
underestimate the amount of time that it will take to write a new piece
of software (Cusumano & Selby, 1995), an example of a general tendency
to underestimate how much time projects will take to complete (Buehler,
Griffin, & Ross, 1994).
In summary, the extant psychological literature suggests that people
have some, albeit only a meager, amount of self-insight. This is not to say
that self-knowledge is nonexistent, or that people are necessarily less accurate
in self-judgments than other judgments, although that is sometimes the case
(Dunning, 2005). Rather, because of the importance of accurately assessing
one’s strengths and weaknesses, and the lifetime of opportunities people
have to learn about themselves, it is nonetheless striking that self-judgments
often lie closer to worthless than they do to perfection. Reviews elsewhere
have dealt with the degree to which people make erroneous self-evaluations,
the costs (and benefits) of those flawed evaluations, and the exceptions under
which people pretty much get themselves right (see, for example, Dunning,
2005; Dunning et al., 2004).
Our goal in this essay is to focus on one critical dimension of the task
of judging oneself accurately, one that we believe has not received sufficient
attention in the psychological literature. We argue that the task of self-
assessment is an intrinsically difficult if not impossible one – and that it is
thus unreasonable to expect more than a meager amount of accuracy in
self-judgments. In particular, we wish to argue that the information environment
in which people provide self-evaluations is too impoverished to allow
them to make accurate self-evaluations. By information environment, we
mean the data people have available to them as they strive toward some
sort of honest evaluation. We argue that people frequently do not have all
the data they need to determine their true level of competence.
In the sections that follow, we will discuss what types of information
people are missing as they strive to reach accurate judgments of self. In
two different sections that follow, we argue that the information environment
is insufficient in many ways. In the first section, we focus on people at the
moment they are asked to provide some assessment of their competence.
At that moment, we argue that people, left to their own devices, are often
missing crucial types of data necessary to arrive at an accurate judgment.
In the second section, we describe how the outside world fails to inform
people of their strengths and weaknesses. People may not come to accurate
self-views if they were just left to themselves, but if external agents – such
as, for example, friends, bosses, and teachers – provided them with feedback
about their competence, they might come to better know their good and
bad points. We argue, however, that feedback from the outside world
tends to be misleading, murky, and often missing. As a consequence, the
faulty views that people hold about themselves tend not to be corrected.
Let us consider each of these issues in turn.
Deficits in the Information Environment
Suppose that the reader was looking over a short article on, say, the
accuracy of self-judgment, but that someone burst into the room to hand
him or her a pop quiz on scientific reasoning. Being a good sport, the
reader completes the quiz, and then calculates how good a job he or she
did. The reader can come up with an estimate, but in a sense the reader,
left to his or her own devices, does not have all the information necessary
to really know whether he or she has posted a top score or a lousy one.
Consider, as people confront tasks, all the types of information they are
lacking as they judge their performances.
Errors of omission
When performing some task, people know the solutions they have come
up with to address that task. Doctors, for example, know which diagnoses
they test for. Lawyers know which arguments they have crafted to win a
case. However, knowing this is often not sufficient to provide an accurate
assessment of performance. Consider, for example, the plight of Larry
Donner (played by Billy Crystal), from the classic 1980s movie Throw Momma
from the Train, as he struggles to describe a night in the American South.
The night was hot, wait no, the night, the night was humid. The night was
humid, no wait, hot, hot. The night was hot. The night was hot and wet, wet
and hot. The night was wet and hot, hot and wet, wet and hot; that’s humid.
The night was humid. (Brezner & DeVito, 1987)
These are all fine solutions to his task, until his acquaintance’s mother
leans over and suggests, ‘The night ... was sultry’ (Brezner & DeVito,
1987).
In a sense, people can often be Larry Donners, left with whatever solutions
they have generated – but unaware of the solutions that could have been
generated but were not. For the doctor, there might be symptoms or
diagnoses that were not considered. For the lawyer, there might be relevant
legal precedent of which she is unaware.
We would argue that these missed solutions, or rather errors of omission,
are important pieces of data for self-evaluation. The doctor should know
about all of the relevant diagnoses. The lawyer should be aware of all
arguments supporting both sides of the case. These pieces of information,
however, are ones that people are not aware of by definition. As a
consequence, their self-judgments suffer in terms of accuracy.
Recent research demonstrates that people are not aware of their errors
of omission. Caputo and Dunning (2005) asked participants to find as
many words as possible in a Boggle puzzle, and then to assess their ability.
Participants based their self-assessments almost entirely on the number of
words they found, but not on the number they missed, although they considered
their misses quite relevant. Furthermore, participants’ guesses of
the number of omission errors they had made were uncorrelated with
their actual number of words missed. Other studies by Caputo and Dunning
found a similar lack of awareness concerning omission errors. For example,
graduate students asked to critique psychological studies showed little
awareness of the range and number of methodological errors they had
failed to spot.
The reader may object, asking how we could ever expect people to
know about their errors of omission – but that would be our point. This
is an aspect of the information environment that is hidden from view. As
a consequence, people cannot be expected to provide completely accurate
self-evaluations when such an important type of information is, by
definition, not available to them.
Further data show that the fault lies with the information environment
and not with people. Specifically, when participants in the research of
Caputo and Dunning (2005) learned of their errors of omission, they took
them into account. In fact, in subsequent self-judgments, they gave just
as much weight to their omission errors as they did to the number of
solutions they had found – and their subsequent self-assessments became
much more accurate as a result. This finding suggests that although hidden
or missing information is detrimental to accurate self-insight, people can
appropriately use that information when it is provided to them.
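A toy example in the spirit of these studies (the numbers below are our own
hypothetical values, not data from Caputo and Dunning) makes the informational
problem concrete: when self-ratings track only the words one found, two solvers
with identical hits look identical to themselves even when the words they
missed, and hence their true performance, differ sharply.

    # Hypothetical errors-of-omission illustration: self-ratings based only on
    # visible "hits" cannot distinguish solvers whose invisible misses differ.
    found_words = {"Avery": 15, "Blake": 15}      # both found 15 words (assumed)
    findable_words = {"Avery": 60, "Blake": 20}   # their puzzles held 60 vs. 20 words

    for name in found_words:
        hits = found_words[name]
        misses = findable_words[name] - hits
        true_proportion = hits / findable_words[name]
        print(f"{name}: {hits} found (visible), {misses} missed (invisible), "
              f"true proportion found = {true_proportion:.0%}")
    # Both solvers see the same 15 hits, yet one found 25% of the available
    # words and the other 75% -- a difference only the misses could reveal.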
Incompetence and knowing the rules of judgment
There is another way in which people fail to have available all the information
they need to provide accurate self-judgments – and this deficit in information
may hit hardest those most in need of revising their self-views. Often, to
judge one’s own or another person’s choices, one needs to know the proper
way in which a choice should be made. For example, suppose one were
asked to judge whether another person’s conclusion is logically sound. To
provide an accurate judgment, one would have to have a pretty good
grasp of the rules of logic. But what about those who fail to have such a
grasp? Can they adequately judge?
Kruger and Dunning (1999; see also Dunning, Johnson, Ehrlinger, &
Kruger, 2003; Ehrlinger, Johnson, Dunning, Kruger, & Banner, forthcoming;
Haun, Zeringue, Leach, & Foley, 2000) suggested that people who do not
have such expertise cannot judge accurately – either themselves or another
person. Specifically, Kruger and Dunning argued, with data, that people
who suffer from a deficit of expertise or knowledge in many intellectual
or social domains fall prey to a dual curse. First, their deficits lead them
to make many mistakes, perform worse than other people, and, in a word,
suffer from incompetence. But, second, those exact same deficits mean
that they cannot judge competence either. Because they choose what they
think are the best responses to situations, they think they are doing just
fine when, in fact, their responses are fraught with error. Indeed, if they
had the expertise necessary to recognize their mistakes, they would not
have made them in the first place.
Consider, once again, the domain of logic. If people do not know the
rules of logic, they are likely to make mistaken inferences and not know
it. For example, knowing that A is ‘necessary’ for B implies that if B is
present, one can safely infer that A is also present. However, one cannot
further conclude from necessity the converse, that A’s presence also implies
B – although many unskilled in the ways of logic make this mistake. The
problems for people making this mistake go beyond just committing it.
As part of the second half of the double curse of incompetence, they will
be confident in their incorrect conclusion and think anyone actually
reaching the right conclusion is wrong. People who know logic would
be unlikely to make such a mistake, but beyond that, will know they
are right, and will correctly spot when another student is making a mistake.
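The asymmetry in this example can be verified mechanically. The short sketch
below (ours, added purely for illustration) enumerates every combination of
truth values: under the constraint that A is necessary for B (that is, B
implies A), every case in which B is present also has A present, yet there
remains a case in which A is present and B is not, so inferring B from A is
invalid.

    from itertools import product

    def necessary(a, b):
        # "A is necessary for B" means B cannot hold without A, i.e., B implies A.
        return (not b) or a

    # Keep only the truth-value combinations consistent with the constraint.
    cases = [(a, b) for a, b in product([False, True], repeat=2) if necessary(a, b)]

    # Valid inference: whenever B holds, A holds as well.
    print(all(a for a, b in cases if b))        # True

    # Invalid inference: A's presence does not guarantee B; a counterexample exists.
    print(any(a and not b for a, b in cases))   # True -- A present, B absent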
In short, one aspect of the information environment necessary to
adequately judge oneself is competence in the skill being judged. To the
extent that people lack that competence, their deficits leave them less able
to judge the quality of their performances. Their incompetence acts as a
sword that slices away an important category of knowledge needed to
judge self and others accurately. In fact, suffering under such deficits, it is
hardly reasonable to assume that they would be able to spot their own
incompetence whatsoever. By contrast, those who are competent live in
a richer and more accurate information environment. Thus, competence
both creates and is created by the information environment, and a lack of
competence is a blow to self-insight from both directions.
The ill-defined nature of a right answer
Another common problem in the information environment is the fact that
the criteria people should use to judge a performance are ambiguous,
open to disagreement, or just flat out unknowable. Many tasks are ill defined,
in that there is no clear and unambiguous rule one should use to compute
a correct answer, nor a clear yardstick to judge whether an answer is correct.
Composing the next big hit in popular music is such an ill-defined task,
in that there is no obvious algorithm to use to write such a song. Leading
a group is another ill-defined task. Different people possess very different
leadership styles, and some work better in some situations than in others
(Fiedler, Chemers, & Mahar, 1976). There exists no one clear, rigidly
defined way to lead a group. Intelligence, too, is an ill-defined quality.
Does intelligence mean finishing math problems quickly or does it mean
negotiating a compromise between two warring factions? People differ in
their responses to this question (Dunning, Perie, & Story, 1991). These
types of tasks stand in contrast to well-defined tasks, where the procedure
to produce – and thus to judge – whether an answer is correct is easily
determined. Such well-defined tasks would include, for example, computing
the circumference of a circle, or converting miles to kilometers. Small
calculators can be fed the clear-cut decision rules used to determine
correct answers on these well-defined tasks, but no calculator, to our
knowledge, has been successfully built to write the Great American novel
or to provide adequate therapy to a person suffering from mental illness.
The ill-defined nature of many tasks appears to lie behind biases in
people’s judgments of self. When excellence along a trait is ambiguous or
can be defined many different ways, people tend to think of themselves
as rather good to an unrealistic degree. When success at a trait is more
clearly defined, people provide more realistic judgments (Dunning, Meyerowitz,
& Holzberg, 1989; Felson, 1981). For instance, Dunning et al. (1989)
asked participants to rate themselves on ambiguous traits (such as sensitive
and neurotic), as well as unambiguous ones (such as mathematical and gossipy).
The ambiguous traits could be defined in many different ways (sensitive
can mean loving animals, or being very attuned to a spouse’s moods),
whereas the unambiguous traits were fairly constrained in their interpretation
(being mathematical typically involves getting very high grades
in math classes). Participants showed a strong tendency to self-enhance
when the trait was ambiguous, rating themselves as ‘above average’ on
positive traits and ‘below average’ on negative ones, but revealed very little
self-enhancement on unambiguous traits where the criteria of judgment
were rather clear-cut.
This tendency to self-enhance is almost never corrected by the information
environment. Instead, the information environment often gives people
wide latitude to diverge in the criteria they use to judge themselves and
other people. It does not constrain people to use a consensus set of criteria,
and as a consequence people are free to select the criteria that allow them
to judge themselves in a flattering way. If people used the same criteria
instead, their judgments of self – and others – would be more realistic
and more in agreement (Dunning et al., 1989; Hayes & Dunning, 1997),
but often the information environment is not that directive.
Deficits in Feedback
Above, we have argued that people are not in an information environment
that compels correct conclusions about the self. However, the description
we gave of self-judgment did carry one important but unspoken assumption.
We assumed that the individual was not in a position to receive feedback
from others, but was only able to gain self-insight based on a self-appraisal
of his or her performance. Perhaps if people have only their own resources
they will be stranded in an information environment hostile to accurate
impressions of self. But what about the world people actually live in? In
many circumstances, people do receive feedback from others, and they do
get to stick around to see the outcomes of their choices and judgments.
One could argue that over time people gain the information they need
to achieve accurate impressions of self. Incompetence in some domains
can be remedied only by direct feedback, since poor performers typically
cannot even recognize when they are failing (Kruger & Dunning, 1999).
That is, as people choose and as they act, they receive feedback about the
wisdom of their choices. They pass or fail exams. They win praise or
suffer insults. They get that promotion or get passed over. They win
money at the poker table or they crash out.
To be sure, people do receive feedback as they live their lives, but if
one looks at the types of feedback people get – or fail to get – one often
sees that the feedback people receive tends to be, once again, insufficient
to guide them toward accurate impressions of self. Consider the following
problems associated with feedback.
Probabilistic feedback
Whenever there is a probabilistic element to an outcome, there is always
the possibility that even if one makes the objectively best choice, the
outcome will nonetheless be undesirable. For example, imagine that
one was given the choice between two options. One could take a 50%
chance of winning $20 (Bet A), or an 80% chance of winning $10
(Bet B). In this case, the expected value of Bet A ($10) is higher than the
expected value of Bet B ($8); therefore, the objectively best bet, according
to an economist, is Bet A. However, half of the time, this objectively
correct choice will yield $0. Similarly, a professional poker player can play
a hand perfectly by the numbers, and still lose to a lucky amateur on the
last card. A good baseball manager can take out the pitcher with a 0.164
average for a pinch hitter with a 0.380 average, but that pinch hitter will
still sulk back to the dugout without a hit 62% of the time. Does the
negative outcome mean that one made a poor choice, or that the decision
was right, but merely unlucky?
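The arithmetic behind the bet example is easy to check. The sketch below (a
minimal simulation using only the probabilities stated above) computes each
bet’s expected value and then plays it many times: Bet A is worth more on
average, yet roughly half of its plays still end with nothing, which is
precisely the ambiguity a decision maker faces when judging a single outcome.

    import random

    def expected_value(p_win, prize):
        return p_win * prize

    def play(p_win, prize, trials, rng):
        # Play a simple gamble repeatedly; return (average payoff, share of $0 outcomes).
        outcomes = [prize if rng.random() < p_win else 0 for _ in range(trials)]
        return sum(outcomes) / trials, outcomes.count(0) / trials

    rng = random.Random(0)
    bets = {"Bet A": (0.50, 20), "Bet B": (0.80, 10)}

    for name, (p, prize) in bets.items():
        ev = expected_value(p, prize)
        avg, zero_rate = play(p, prize, 100_000, rng)
        print(f"{name}: expected value ${ev:.2f}, simulated average ${avg:.2f}, "
              f"pays $0 on {zero_rate:.0%} of plays")
    # Bet A has the higher expected value ($10 vs. $8) but still pays nothing
    # about half the time; a single bad outcome says little about the choice.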
When feedback is probabilistic – and it often is in life – the outcome
can be inconsistent with the quality of the choice people made (Baron &
Hershey, 1988; Hershey & Baron, 1992). Correct choices can lead to disastrous
outcomes (ask any professional card player), just as lousy choices can
inadvertently lead to success (ask any golfer whose poorly aimed shot
ricochets off a tree and onto the green). In these situations, it is very difficult
to accurately evaluate one’s choice based only on the outcome, and that
can lead to inaccurate inferences about the quality of one’s performance
or judgment. In the real world, the information environment does not
typically provide explicit probabilities with which to calculate expected
utility, making it even more difficult to draw any conclusions about one’s
choices or skills.
Ambiguous feedback
Sometimes the information provided by the environment can be difficult
to interpret, in that it is not clearly a success or failure. For example, if
Sam asks Hazel out for dinner on Friday and she says that she has plans
that night, what lesson should Sam learn? Is Hazel refusing because she
cannot stand Sam or because she is honestly busy with family obligations
that night?
At other times it may not be the outcome or feedback that is ambiguous,
but rather the reasons behind it. If Sam is unambiguously rejected when
asking Hazel out on a date, the reason for that rejection could still be
ambiguous, obscuring the lesson to be learned from the rejection, if
any. It could be that he had food in his teeth when he asked, that the
particular ensemble he chose for the occasion was in poor taste, or even
that Hazel is currently recovering from a previous relationship and simply
is not interested in dating anyone, or believes that Sam is just too good
for her.
Without knowing the specific reason why he received the rejection,
Sam will be left to his best guess as to how to keep it from happening
again. This sort of guesswork puts everyone at a disadvantage. First, any
inference of cause drawn from a single instance is likely to be spurious.
These inferences are likely to be based on cultural conventions and prior
beliefs, which can be inaccurate (Wilson, 2002) or biased (Dunning,
2005; Ehrlinger & Dunning, 2003).
Also, the task of consciously detecting covariation – that is, the
relationship between action and outcome – is a difficult one, given people’s
limitations (Alloy & Tabachnik, 1984; Crocker, 1981). Thus, even if poor
Sam has had enough rejections for a clear pattern to emerge, he is still
likely to make errors in noticing the pattern. Integrating multifaceted
information from a number of sources into an accurate portrayal of
cause and effect is no trivial matter, and although this is indeed a human
failing, it is nonetheless another example of the environment presenting
information in a fashion difficult for humans to process. As such, people
can hardly be blamed for making errors when trying to put cause and
effect together.
Biased feedback
One would be hard-pressed to find someone who actually enjoys delivering
bad news, with the possible exception of American Idol’s Simon Cowell,
who appears to be a cultural phenomenon simply because he is unconcerned
about puncturing the egos of the hopeful singers in front of him. In fact,
to prevent the bearer of bad news from being the object of the recipient’s
wrath, the Greeks and Romans had a law protecting messengers from
harm while delivering their news. To avoid more modern versions of
this occupational hazard, people tend to go out of their way to avoid
delivering negative feedback, often disguising bad feedback as good (Tesser
& Rosen, 1975).
For example, imagine that Anne is attending the first performance of
her nephew’s new rock band, only to find her ears forever scarred by the
experience. The band plays their instruments adequately enough, but the
songs manage to blend clichés in new, but unwanted and excruciating
ways. However, Anne is faced with an unfortunate predicament. What
does she tell her nephew after the show? As the cool aunt, her duty is
clear: she must be supportive of her nephew’s new enterprise, but she also
does not want to tell an outright lie. Thus, Anne is likely to resort to
half-truths. To spare her nephew’s ego, she might focus on the positive
aspects of the show (‘Wow, your band is really loud!’), or use a cleverly
ambiguous phrase (‘That was really something!’). As a result, her poor
nephew will take away the mistaken impression that his band’s performance
garnered praise, rather than feedback that might shape the band into a
unit that really deserves praise. That is, the feedback people give is often
biased by the desire to spare a person’s feelings, a parent’s love, or even
just to avoid an awkward moment (DePaulo & Bell, 1996).
Missing feedback
Unfortunately, feedback is often present but hidden discreetly from view
(Dunning, 2005), coming in the form of nonoccurrences, what fails to
happen rather than what does. For example, positive feedback is often
withheld when people are performing well. If someone is performing
well, it may seem unnecessary to give feedback – they appear to know
what they are doing, and do not need improvement. If someone does not
know that they are succeeding, however, this lack of reinforcement could
cause them to make a change for the worse.
Hidden feedback can be even more of a problem for negative behaviors.
As is often the case, Dave Barry said it best:
I argue very well. Ask any of my remaining friends. I can win an argument
on any topic, against any opponent. People know this, and steer clear of me at
parties. Often, as a sign of their great respect, they don’t even invite me.
Everyone knows at least one person whose dreadful behavior has ostracized
him from an otherwise welcoming social circle – but who never seems to
get the hint. Perhaps he talks too loud, or maybe he makes awkward and
inappropriate comments, but the general consensus is that people are
more relaxed when he is not around. In these cases, people handle this
individual’s bad habits not by correcting that behavior but rather by finding
ways to avoid the person displaying it.
However, consider the perspective of this poor outcast. He clearly
lacks the social skills to realize what constitutes appropriate behavior,
and without receiving explicit feedback about his negative behaviors, he
may never know that anything is amiss. He is unlikely to be invited to
dinner parties, but because these noninvitations are deliberately kept secret
to spare his feelings, he will never know how many dinner party invitations
his behavior has cost him (Dunning, 2005). Nonoccurrences of this type
may not be noticed, and thus he is left with a mistaken belief that his
social skills are just fine and that his social life is all that it can be.
In short, one issue with feedback is that it often comes in the form of
a nonoccurrence. As diagnostic as these nonoccurrences can be, this missing
information may be especially unlikely to be used in self-judgments
because it is difficult to identify. This difficulty arises, in part, because of
biases people have in their information search strategies. First, when trying
to uncover a relationship between two items (say, the number of off-color
jokes one tells and the number of dinner party invitations one receives),
people tend to look primarily at evidence that confirms, rather than
disconfirms, their hypotheses (for a review, see Klayman & Ha, 1987), and
people’s hypotheses are likely to be biased by their desires (Kunda, 1990).
Because people are motivated to see themselves in a positive light, they
are likely to entertain the hypothesis (and seek evidence accordingly) that
their actions led to favorable outcomes. That person with the penchant for
off-color jokes is likely to seek evidence that such jokes are enchanting
rather than obnoxious.
Second, when looking for relationships between two items, people tend
to expect positive, rather than negative, relationships. That is, people tend
to expect that an increase in one item is likely to produce an increase in
the other (for example, eating more ice cream is expected to expand waistlines).
It is more difficult to recognize when a decrease in one item leads to an
increase in the second. Thus, people might expect a positive relationship
between off-color jokes and dinner party invitations (more jokes = more
invitations) rather than a negative relationship (fewer jokes = more
invitations). In a number of studies, Newman, Wolff, and Hearst (1980)
showed that for college students, much like pigeons (Jenkins & Sainsbury,
1969) and children (Sainsbury, 1971), learning a logical rule is much more
difficult if that rule depends on an attribute being absent rather than
present. In one study, it took participants significantly longer to figure out
a rule that determined whether a card containing four symbols was ‘good’
(rather than ‘not good’) if it did not contain a triangle than if it did contain
a triangle (Newman et al., 1980).
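A small worked example (with hypothetical counts chosen only to illustrate the
search problem) shows why both tendencies matter. Detecting whether off-color
jokes cost dinner invitations requires all four cells of the contingency table;
a search confined to the hypothesis-confirming cell (jokes told, invitation
received), or to evenings on which the behavior was present, misses the
negative relationship entirely, because the diagnostic cells are built from
absences and nonoccurrences.

    # Hypothetical 2x2 contingency table: evenings on which off-color jokes were
    # told, crossed with whether a dinner invitation followed.
    counts = {
        ("jokes", "invited"): 2,    ("jokes", "not invited"): 8,
        ("no jokes", "invited"): 7, ("no jokes", "not invited"): 3,
    }

    def phi(table):
        # Phi coefficient for a 2x2 table: the correlation between the two variables.
        a = table[("jokes", "invited")]
        b = table[("jokes", "not invited")]
        c = table[("no jokes", "invited")]
        d = table[("no jokes", "not invited")]
        return (a * d - b * c) / ((a + b) * (c + d) * (a + c) * (b + d)) ** 0.5

    # Using all four cells reveals a negative relationship: more jokes, fewer invitations.
    print(f"phi over the full table: {phi(counts):+.2f}")      # about -0.50

    # A confirmatory or feature-positive search that tallies only the evenings
    # with jokes (or only jokes-and-invitation evenings) never examines the cells
    # involving absences, and so never sees the relationship.
    print("evenings a confirmatory search notices:", counts[("jokes", "invited")])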
Even if people make a great effort to do a better job dealing with
negative or disconfirming evidence, the feedback they receive about their
choices can still be incomplete. People receive feedback for the choices
they make (e.g., a polite thank you for buying an electric can opener as
a birthday gift for a friend), but because they make that choice, by
definition they forego receiving feedback about alternative choices they
could have made (e.g., an ecstatic shriek for buying two tickets to the
opera for their friend). Thus, they are not in a position to know whether
they have made the best, or even a good, choice, given all the unknown
alternatives.
Often, it is the case that the very decisions people make preclude
them from finding out whether their choice was the
right one. When making some decisions, such as which college to attend
or which person to marry, choosing one path precludes taking the other
(Cohen & March, 1974; May, 1973). In these cases, the only way to
evaluate such a choice would be to directly compare the two experiences.
Because time travel is currently impossible, if Jerry chooses to attend
Dartmouth, he’ll never know if he would have liked Columbia better.
One can only assess the outcome of one’s decision against itself, against
the experiences of others, or against one’s imaginings of the alternatives.
People may come to love or hate their decisions, but that doesn’t necessarily
mean they would have liked any of the alternatives any better (or any
worse). As such, any insight gained from the experience may be inadequate
to guide future decisions. Yet again, we do not have the data
necessary to learn the real lessons of our failures and successes, and future
decisions will suffer as a result.
Concluding Remarks
The final mystery is oneself.
– Oscar Wilde (Irish poet, novelist, dramatist, and critic, 1854–1900)
In this essay, we noted that people are often mysteries to themselves, as
Oscar Wilde had it, and then went beyond to explain that it could not
be any other way. We have proposed that self-evaluation is an intrinsically
difficult if not impossible task, and thus it should not be surprising that
people show only meager to modest self-insight in the psychological
literature and in the social world people encounter every day. The information
people have at the moment they make an evaluation is often insufficient
to guide them to an accurate assessment. Over time, the feedback people
receive about themselves contains layers of imperfection, bias, or ambiguity
that undermine any accurate view of self.
To be sure, in outlining the reasons why self-evaluation is difficult, we
have also delineated when it might be easy. There
are circumstances that lend themselves to accurate self-judgment. If the
individual is competent, can receive information about errors of omission,
can get clear feedback, and is working on a well-defined task, self-judgment
can be very accurate. One should not forget this other side of the coin – and
also not forget that to the extent that one can create a world with these
circumstances, one’s sense of self will lie close to the truth.
In addition, we should note that there is much future work to be done
to complete the portrait of individual as self-evaluator. For example, some
of the problems we have described for self-assessment (such as ill-defined
correct answers, missing information, and incompetence in the domain of
judgment) are problems that also arise when people try to judge others.
Is self-assessment more difficult – and more fraught with error – than
social judgment? Although we believe there is much work to be done on
this issue, initial signs suggest that self-assessment is more difficult, in that
people at times seem to be better at predicting their peers than they are
themselves. People, for example, more accurately predict when their college
roommates will experience a romantic break-up than when they themselves
will experience a break-up (MacDonald & Ross, 1999; for a review, see
Dunning, 2005). In addition, some of the issues we have described
(particularly biased or deliberately ambiguous feedback) appear to afflict
self-assessment more than they do assessments of others. Future work,
however, is necessary to determine whether, and by how much, self-prediction
is inferior to peer prediction.
Furthermore, upon reflection, we believe this essay provides an important
perspective on the troubling state of self-assessment described in the
psychological literature. One should not summarily blame people for their
errors in self-judgment. The task they face is often an impossible one. Instead,
if one is casting about for someone or something to blame, one should look
toward the circumstances surrounding the person making the judgment.
People are called upon to make self-judgments in information environments
that are simply not up to the task – and it is the insufficiency of these
environments that should often be blamed, not the individual. Indeed,
there is a sense of irony in judging the individual when the environment
is at fault. By withholding or skewing feedback, we create the impoverished
information environment responsible for others’ inaccurate judgments.
In a sense, we are suggesting that to blame other people for their
judgmental errors would be to commit, to conjure a classic psychological
concept, the fundamental attribution error – attributing a person’s outcomes
to their personality and character when in fact those outcomes were
dictated by outside situational forces (Nisbett & Ross, 1980; Ross, 1977).
Such is often the case in self-assessment. Flawed self-assessments are a
function of an inadequate information environment, and not necessarily
a sign of bias, wishful thinking, or foolishness on the part of the individual.
Instead, such flawed assessments are brought about by informational
circumstances that surround our own judgments as well as the judgments of
those we catch making mistakes.
Attributing flawed self-assessments to inadequate information environments
also provides the potential for hope. If people reach erroneous conclusions
about their competence because they do not have all the information they
need, then one can see room for intervention. One can, for example,
point out to people what information they fail to have – perhaps prompting
people to be more cautious in their self-evaluations. Better yet, one can
potentially design interventions that bring people the information they
lack, so that they can make more accurate self-judgments. Providing these
interventions may be a complex process, but doing so may prove to be a
task that is well worth the effort, making each of us a little bit less of a
mystery to ourselves.
Short Biography
Travis J. Carter is currently a PhD candidate in Social Psychology at Cornell
University. He received his AB in Psychology from the University of
Chicago. His research interests span a large range, including social cognition,
consumer and political decision-making, and the self and social judgment.
David Dunning is Professor of Psychology at Cornell University. He
received his BA from Michigan State University and his PhD from Stanford
University, both in Psychology. A past associate editor of the Journal of
Personality and Social Psychology, he currently serves as executive officer of
the Society for Personality and Social Psychology. He is an experimental
social psychologist specializing in self-judgment, self-deception, behavioral
economics, and the psychology of eyewitness testimony. His book Self-
Insight: Roadblocks and Detours on the Path to Knowing Thyself (Psychology
Press, 2005) describes the difficulties and failures of accurate self-judgment.
Endnotes
* Correspondence address: Department of Psychology, Uris Hall, Cornell University, Ithaca,
NY 14853, USA. Email: tjc38@cornell.edu.
References
Alba, J. W., & Hutchinson, J. W. (2000). Knowledge calibration: What consumers know and
what they think they know. Journal of Consumer Research, 27, 123–156.
Alloy, L. B., & Tabachnik, N. (1984). Assessment of covariation by humans and animals: The
joint influence of prior expectations and current situational information. Psychological Review,
91, 112–149.
Baron, J., & Hershey, J. C. (1988). Heuristics and biases in diagnostic reasoning: I. Priors, error
costs, and test accuracy. Organizational Behavior and Human Decision Processes, 41, 259–279.
Brezner, L. (Producer), & DeVito, D. (Director). (1987). Throw Momma from the Train [Motion
picture]. Los Angeles, CA: Orion Pictures.
Buehler, R., Griffin, D., & Ross, M. (1994). Exploring the ‘planning fallacy’: Why people
underestimate their task completion times. Journal of Personality and Social Psychology, 67, 366–381.
Caputo, D., & Dunning, D. (2005). What you don’t know: The role played by errors of
omission in imperfect self-assessments. Journal of Experimental Social Psychology, 41, 488–505.
Cohen, M. D., & March, J. G. (1974). Leadership and Ambiguity: The American College President.
New York: McGraw-Hill.
Crocker, J. (1981). Judgment of covariation by social perceivers. Psychological Bulletin, 90, 272–292.
Cusumano, M. A., & Selby, R. W. (1995). Microsoft Secrets. New York: Free Press.
DePaulo, B. M., & Bell, K. L. (1996). Truth and investment: Lies are told to those who care.
Journal of Personality and Social Psychology, 71, 703–716.
Dunning, D. (2005). Self-Insight: Roadblocks and Detours on the Path to Knowing Thyself. New York:
Psychology Press.
Dunning, D., Heath, C., & Suls, J. M. (2004). Flawed self-assessment: Implications for health,
education, and the workplace. Psychological Science in the Public Interest, 5, 69–106.
Dunning, D., Johnson, K., Ehrlinger, J., & Kruger, J. (2003). Why people fail to recognize their
own incompetence. Current Directions in Psychological Science, 12, 83–86.
Dunning, D., Meyerowitz, J. A., & Holzberg, A. D. (1989). Ambiguity and self-evaluation: The
role of idiosyncratic trait definitions in self-serving assessments of ability. Journal of Personality
and Social Psychology, 57, 1082–1090.
Dunning, D., Perie, M., & Story, A. L. (1991). Self-serving prototypes of social categories.
Journal of Personality and Social Psychology, 61, 957–968.
Ehrlinger, J., & Dunning, D. (2003). How chronic self-views influence (and potentially mislead)
estimates of performance. Journal of Personality and Social Psychology, 84, 5–17.
Ehrlinger, J., Johnson, K., Dunning, D., Kruger, J., & Banner, M. (forthcoming). Why the
unskilled are unaware? Further explorations of (lack of) self-insight among the incompetent.
Organizational Behavior and Human Decision Processes.
Falchikov, N., & Boud, D. (1989). Student self-assessment in higher education: A meta-analysis.
Review of Educational Research, 59, 395–430.
Felson, R. (1981). Ambiguity and bias in the self-concept. Social Psychology Quarterly, 44, 64–69.
Fiedler, F. E., Chemers, M. M., & Mahar, L. (1976). Improving Leadership Effectiveness: The Leader
Match Concept. New York: John Wiley & Sons.
Harris, M. M., & Schaubroeck, J. (1988). A meta-analysis of self-supervisor, self-peer, and peer-
supervisor ratings. Personnel Psychology, 41, 43–62.
Haun, D. E., Zeringue, A., Leach, A., & Foley, A. (2000). Assessing the competence of
specimen-processing personnel. Laboratory Medicine, 31, 633–637.
Hayes, A. F., & Dunning, D. (1997). Construal processes and trait ambiguity: Implications for self-
peer agreement in personality judgment. Journal of Personality and Social Psychology, 72, 664–677.
Hershey, J. C., & Baron, J. (1992). Judgment by outcomes: When is it justified? Organizational
Behavior and Human Decision Processes, 53, 89–93.
Jenkins, H. M., & Sainsbury, R. S. (1969). The development of stimulus control through
differential reinforcement. In N. J. Mackintosh & W. K. Honig (Eds.), Fundamental Issues in
Associative Learning (pp. 123–167). Halifax, NS: Dalhousie University Press.
Kerby, D. S., Brand, M. W., Johnson, D. L., & Ghouri, F. S. (2005). Self-assessment in the
measurement of public health workforce preparedness for bioterrorism or other public health
disasters. Public Health Reports, 120, 186–191.
Klayman, J., & Ha, Y. W. (1987). Confirmation, disconfirmation, and information in hypothesis
testing. Psychological Review, 94, 211–228.
Kruger, J., & Dunning, D. (1999). Unskilled and unaware of it: How difficulties in recognizing
one’s own incompetence lead to inflated self-assessments. Journal of Personality and Social
Psychology, 77, 1121–1134.
Kunda, Z. (1990). The case for motivated reasoning. Psychological Bulletin, 108, 480– 498.
Larwood, L. (1978). Swine flu: A field study of self-serving biases. Journal of Applied Social
Psychology, 8, 283–289.
Loftus, E. F., & Wagenaar, W. A. (1988). Lawyers’ predictions of success. Jurimetrics Journal, 29,
437–453.
Mabe, P. A., & West, S. G. (1982). Validity of self-evaluation of ability: A review and meta-analysis.
Journal of Applied Psychology, 67, 280–296.
MacDonald, T. K., & Ross, M. (1999). Assessing the accuracy of predictions about dating
relationships: How and why do lovers’ predictions differ from those made by observers?
Personality and Social Psychology Bulletin, 25, 1417–1429.
May, E. R. (1973). ‘Lessons’ of the Past: The Use and Misuse of History in American Foreign Policy.
New York: Oxford University Press.
Millis, S. R., Jain, S. S., Eyles, M., Tulsky, D., Nadler, S. F., Foye, P. M., et al. (2002). Assessing
physicians’ interpersonal skills: Do patients and physicians see eye-to-eye? American Journal of
Physical Medicine and Rehabilitation, 81, 946–951.
Newman, J., Wolff, W. T., & Hearst, E. (1980). The feature-positive effect in adult human
subjects. Journal of Experimental Psychology: Human Learning and Memory, 6, 630–650.
Nisbett, R. E., & Ross, L. (1980). Human Inference: Strategies and Shortcomings of Social Judgment.
Englewood Cliffs, NJ: Prentice Hall.
Ross, L. (1977). The intuitive psychologist and his shortcomings: Distortions in the attribution
process. In L. Berkowitz (Ed.), Advances in Experimental Social Psychology (Vol. 10, pp. 173–240).
Orlando, FL: Academic Press.
Sainsbury, R. (1971). The ‘feature positive effect’ and simultaneous discrimination learning.
Journal of Experimental Child Psychology, 11, 347–356.
Tesser, A., & Rosen, S. (1975). The reluctance to transmit bad news. In L. Berkowitz (Ed.),
Advances in Experimental Social Psychology (Vol. 8, pp. 193–232). New York: Academic Press.
Weinstein, N. D. (1980). Unrealistic optimism about future life events. Journal of Personality and
Social Psychology, 39, 806–820.
Wilson, T. D. (2002). Strangers to Ourselves: Discovering the Adaptive Unconscious. Cambridge, MA:
Harvard University Press.
... One rarely gets feedback about success or failure on tasks from the world or other people. One might not have been able to determine success or failure and therefore assume success because one choose what one believed to be the right response (Carter & Dunning, 2008), and less skill in a task goes along with increased overconfidence because skill is required to accurately assess competence (Fischhoff & Broomell, 2020). ...
... Many factors go into a lack of feedback: probabilistic events in which the best choice can fail and the worst choice can succeed, ambiguous or ill-defined tasks, a lack of feedback from others, occurrences that are the result of multiple interacting reasons, and the impossibility of feedback on actions one did not take (Carter & Dunning, 2008). Feedback is difficult to interpret when it exists because we are bad at noticing relationships between actions and outcomes, and worse for relationships between stopping an action and an outcome (Carter & Dunning, 2008). ...
... Many factors go into a lack of feedback: probabilistic events in which the best choice can fail and the worst choice can succeed, ambiguous or ill-defined tasks, a lack of feedback from others, occurrences that are the result of multiple interacting reasons, and the impossibility of feedback on actions one did not take (Carter & Dunning, 2008). Feedback is difficult to interpret when it exists because we are bad at noticing relationships between actions and outcomes, and worse for relationships between stopping an action and an outcome (Carter & Dunning, 2008). ...
Research
Full-text available
Review of a humidifier from the perspective of cognitive skills and decision making for a Foundations in Human Factors class for Bentley University's Human Factors in Information Design masters program.
... 248). Research suggests that hidden or missing information is essential for accurate self-insight, and if that information is provided, people can make appropriate use of that information to get better self-insight (Carter & Dunning, 2008). Kruger and Dunning (1999) find that students tend to rate their performances lower if they are more intellectually equipped, however, the skills that "engender competence in a particular domain are often the very same skills necessary to evaluate competence in that domain" (p. ...
... Representative student justifications such as "I felt like I knew the stuff going into it and even after I took it I felt confident about it" confirm the claims of Kruger and Dunning (1999) that lack of knowledge is why students perform poorly, and the same lack of knowledge limits their ability to correctly assess the quality of their performance. Carter and Dunning (2008) argue that it is unreasonable to expect more than a "meager amount" of selfinsight from such individuals because they live in an "insufficient information environment" or do not have all the "data" they need to make a true assessment of their competence or performance. However, results also showed that poor performers do not always overestimate their performance when they have some relevant information about their knowledge and performance. ...
... Therefore, our findings indicate that individuals who are aware of their knowledge or lack thereof (K-K, K-NK, KNK-BW) tend to be more accurate predictors of their performance, regardless of their actual performance level. Our findings offered supporting evidence to Carter and Dunning's (2008) assertion that inaccurate selfassessment is likely not the result of an individual's "wishful thinking" or "foolishness", but rather a function of an insufficient information environment. When students do not know if they know or do not know (NK-K, NK-NK), they are less accurate in predicting their performances because they were living in an "insufficient information environment" at that moment. ...
Article
Full-text available
We collected data on students’ self-assessment behavior from four sections of a Calculus II course. Students were asked to write their expected scores on each of the weekly in-class quizzes and problems in the exams, immediately after they completed them. They were then asked to justify their expectation in writing. One-on-one interviews were conducted with a purposefully selected sample of students. During the interviews, they were asked to explain their perceived reasons for their self-assessment behaviors. While the results from quantitative analysis seemed to partially reinforce the findings of existing research that low performers generally overestimate, high performers underestimate their performance, and those in-between performers were (almost) accurate predictors, results from qualitative analysis provided additional insights into their self-assessment behaviors. After analyzing qualitative data, we identified five categories of student behavior: knowing about knowing (KK), not knowing about knowing (NKK), knowing about not knowing (KNK), knowing something is not known but not sure what (KBNKW), and not knowing about not knowing (NKNK). The quantitative analysis showed that students exhibited greater accuracy in assessing their performance when they belonged to the categories KK, KNK, and KBNKW, while their accuracy was lower when they fell into the categories NKNK and NKK. In other words, students who had greater awareness of their level of knowledge were more accurate in predicting their scores compared to their peers, irrespective of their actual performance levels. The logistic regression model revealed a substantial increase in the likelihood of underperforming students overestimating their performance compared to their high-performing counterparts.
... Research shows that individuals often misjudge their abilities, leading to both overestimation and underestimation. This inaccuracy is influenced by lack of skill, preexisting self-views, and motivational factors [93,31,58]. Self-assessment errors occur across various domains, including health, education, and the workplace [55]. ...
Article
Full-text available
The informal economy is a vital component of global labor markets but is frequently hindered by the challenge of skills mismatch. This paper explores the nature and causes of skills mismatch in the informal economy, discussed in existing literature. There are mainly four types of mismatch: horizontal mismatch, where workers' roles do not align with their educational qualifications; vertical mismatch, involving discrepancies between job demands and workers' educational levels; skill gaps, which indicate deficiencies in specific abilities needed for effective job performance; and skill obsolescence, where previously relevant skills lose their utility over time. The several factors driving these mismatches include : Educational system inefficiencies, Labor market dynamics, Regional and Sectoral variations ,Workers' characteristics and decisions, shaped by personal preferences and socioeconomic circumstances, global trends like automation and demographic changes, sociological factors, including cultural norms and policy and institutional shortcomings .
... First, the content and format of our knowledge tests allowed for the definition of a correct answer. Carter and Dunning (2008) point out that accurate self-assessments are prone to bias when the correct answer is ill-defined or ambiguous. It is therefore unclear if the feedback could have affected self-assessment accuracy differently, had we chosen a different, less defined task (e.g., assess the quality of one's instruction). ...
Article
Full-text available
Although feedback is of high importance for the professional development of student teachers, the impact of (inadequate) feedback on their self-regulated learning is still unclear. In two studies with mathematics student teachers, we investigated how discrepancies between performance and feedback affected two important aspects of self-regulated learning—self-efficacy and self-assessment accuracy regarding mathematical content knowledge. In the first study, N = 154 student teachers studying mathematics completed a knowledge test on the Pythagorean theorem and received performance feedback that was either correct or manipulated to be more positive or more negative than actual performance. The results showed that feedback that exceeded performance resulted in higher self-efficacy than feedback that fell below performance. In contrast, self-assessment accuracy in a second test on the same content was not affected by the discrepancy between student teachers’ test performance and the feedback they received. In the second study, we used the think-aloud method with N = 26 participants to investigate the processes underlying the effects obtained in Study 1. We found that student teachers who had received overly positive feedback were more likely to report positive affect-related statements than participants who had received overly negative or correct feedback. At the same time, they based their self-assessments in the knowledge test more strongly on their monitoring of heuristic factors than on knowledge. The results indicate that overly positive feedback elicits positive motivational states in mathematics student teachers, but bears the risk that they neglect their knowledge as a basis for their self-assessments.
... Teachers could foster metacognitive awareness by explicitly teaching strategies such as self-assessment techniques, goal setting, and reflection exercises; encouraging students to reflect regularly on their progress can also help them develop a more accurate understanding of their abilities (Gogoi & Mukherjee, 2020). Teachers could likewise provide more detailed, structured, and constructive feedback to help students align their self-perceptions with reality; for overconfident students, this might involve showing specific examples of errors, while underconfident students may benefit from positive reinforcement that highlights their strengths (Miller & Geraci, 2011). Teachers might also consider using data-driven assessments: when students see data that contradicts their self-assessment, they may be more open to adjusting their perceptions (Carter & Dunning, 2008). Finally, though it is difficult, teachers can try to encourage a growth mindset in which students view their abilities as improvable through effort. ...
Article
The Dunning-Kruger Effect (DKE) is a cognitive bias whereby individuals with limited ability or knowledge tend to overestimate their competencies, while those who are more skilled often underestimate their capabilities. Identified in a seminal 1999 study by psychologists David Dunning and Justin Kruger, this phenomenon underscores essential concepts in metacognition – the awareness and understanding of one's thought processes. The DKE manifests across various domains, particularly in educational contexts, with significant implications for students, teachers, and administrators within English Language Teaching (ELT). This paper explores the origins and key findings surrounding the DKE, illustrating its detrimental impact on self-assessment and feedback mechanisms. It addresses students' overconfidence or self-doubt in language proficiency, the challenges teachers face in evaluating their instructional effectiveness, and the potential pitfalls administrators encounter in decision-making and policy implementation. Additionally, the paper discusses the interplay of related biases, such as optimism bias and cognitive dissonance, which further complicate accurate self-evaluation. To combat these challenges, it advocates for enhanced metacognitive training, constructive feedback strategies, and a growth mindset for all stakeholders involved. Ultimately, fostering self-awareness and reflective practice in ELT settings can lead to improved learning outcomes and a more productive educational environment.
... This study seeks to further leverage knowledge surveys for instructor course improvement, with a particular focus on whether topical information is presented appropriately in the broader context of the field. This is akin to Carter and Dunning's [9] concept of an "informational environment", that is, an understanding of what information exists related to a particular topic. Student awareness of a given subject can be limited to the topics covered in a course; by restricting the informational environment to core subject matter without appropriate context, a course does not allow students to understand it within the broader subject landscape. ...
... Additionally, in doing so, we avoid the often-described problems of self-assessment (Carter & Dunning, 2008). ...
Chapter
Full-text available
Mentoring is a key scaffold through which pre-service biology teachers are supported by an experienced biology teacher (i.e., a mentor). A SWOT analysis revealed that our federal state lacks biology mentor teacher (BMT) training. We conducted a design-based research study to create a training programme for our biology mentor teachers. The first aim of the study was to develop the theoretical background for the complex process of biology mentor teacher training. The tetrahedron model developed by Prediger and colleagues was adapted to describe the challenging situation of BMT training. Our second aim was to evaluate the mentoring quality and subject-specific content of mentoring dialogues in the design cycle in order to connect this study to previous research. Our results show that mentoring quality increases after BMT training. Surprisingly, mentor teachers and pre-service teachers differ in their assessments. The evaluation of the content of mentoring dialogues shows a higher share of discussion of content knowledge than in previous studies. All in all, this is one of the first studies on BMT training. This paper is an invitation to continue work in this important field of biology teacher training.
Article
Full-text available
The Lake Wobegon effect, characterized by individuals’ tendency to overestimate their abilities, is a widely recognized phenomenon across various domains of human performance. This study examined the extent to which Chinese psychotherapists (N = 223), a group culturally influenced by the value of modesty, would demonstrate that effect. It also examined in a subsample the association between that effect and clinical experience (N = 63) as well as the justifications these therapists gave for the self-assessments of effectiveness that they had provided (N = 66). Participants rated themselves as above average, with the mean percentile rank of 71.4 (SD = 14.14; Mdn = 73), which is generally similar to results obtained in studies of Western therapists. Therapist experience level was not related to self-ratings of effectiveness. The evidence therapists reported using to arrive at their self-assessments primarily consisted of feedback they had received from others (44.7% of respondents) and their observations of clients’ responsiveness to treatment (40.4%). Possible implications of these findings to the field are discussed.
Article
Full-text available
People tend to hold overly favorable views of their abilities in many social and intellectual domains. The authors suggest that this overestimation occurs, in part, because people who are unskilled in these domains suffer a dual burden: Not only do these people reach erroneous conclusions and make unfortunate choices, but their incompetence robs them of the metacognitive ability to realize it. Across 4 studies, the authors found that participants scoring in the bottom quartile on tests of humor, grammar, and logic grossly overestimated their test performance and ability. Although their test scores put them in the 12th percentile, they estimated themselves to be in the 62nd. Several analyses linked this miscalibration to deficits in metacognitive skill, or the capacity to distinguish accuracy from error. Paradoxically, improving the skills of participants, and thus increasing their metacognitive competence, helped them recognize the limitations of their abilities.
Article
Full-text available
The relationship between trait ambiguity and self–peer agreement in personality judgment was examined. In Study 1, self–peer agreement was lower on ambiguous traits (those with many behavioral referents) than on unambiguous ones (those with few behavioral referents). This finding was partially moderated by the level of friendship between peers. These results suggest that people disagree in their judgments because they use idiosyncratic trait definitions when making judgments on ambiguous traits. Study 2 tested this explanation by exploring self–peer agreement when participant pairs were forced to use the same trait definition versus different ones when judging themselves and each other. Forcing participants to use the same trait definition increased the degree to which their judgments covaried with one another. Discussion centers on the cognitive and motivational forces that can influence the degree to which personality judgments differ.
Article
Full-text available
Why and when do people disagree on their conceptions or prototypes of social categories? In 6 studies, it was revealed that such differences tend to be self-serving. Participants tended to endorse self-descriptive attributes as central to their prototypes of desirable social concepts and to emphasize features that were not self-descriptive in their conceptions of undesirable categories. Such disagreements were constrained to attributes potentially central to the domain in question and did not occur for clearly peripheral features. Self-serving differences in prototype structure were exhibited in social information processing tasks and led to disagreements in judgments of others. Potential mechanisms underlying the development of these egocentric cognitive structures and their implications for self-serving judgments of ability are discussed.
Article
Full-text available
Proposes a theoretical framework for understanding and integrating people's and animals' covariation assessment. It is argued that covariation perception is determined by the interaction between two sources of information: (a) the organism's prior expectations about the covariation between two events and (b) current situational information provided by the environment about the objective contingency between the events. Both accuracies and errors in people's and animals' covariation assessments are analyzed within this interactional theoretical framework. Four lines of research are reviewed in support of this analysis. The issue of accuracy vs. rationality in covariation assessment is considered.
Article
Full-text available
People's chronic views of their abilities are an important source of their perceptions of their performance, and of potential errors in those perceptions. In support of this observation, manipulating people's general views of their ability, or altering which view seemed most relevant to a task, changed performance estimates independently of any impact on actual performance. A final study extended this analysis to why women disproportionately avoid careers in science. Women performed as well as men on a science quiz, yet underestimated their performance because they thought less of their general scientific reasoning ability than did men. Consequently, they were more likely to refuse to enter a science competition.
Article
Full-text available
Participants discussed paintings they liked and disliked with artists who were or were not personally invested in them. Participants were urged to be honest or polite or were given no special instructions. There were no conditions under which the artists received totally honest feedback about the paintings they cared about. As predicted by the defensibility postulate, participants stonewalled, amassed misleading evidence, and conveyed positive evaluations by implication. They also told some outright lies. But the participants also communicated clearly their relative degrees of liking for the different special paintings. The results provide new answers to the question of why beliefs about other people's appraisals do not always correspond well with their actual appraisals.
Article
The widely reported "epidemic of medical error" has resulted in calls to find systems that prevent, detect, and correct errors. Our incident investigations indicated that specimen-processing personnel often facilitated negative patient outcomes by failing to prioritize work or modify rules in critical situations. Our team developed a knowledge and problem-solving assessment for specimen-processing personnel in order to identify opportunities for training. We included a self-assessment and found that poor performers grossly overestimated their knowledge and problem-solving ability. This study illustrates the utility of using competency challenges to identify opportunities for improvement.