
The Consequences of the Hindsight Bias in Medical Decision Making


Abstract

The hindsight bias manifests in the tendency to exaggerate the extent to which a past event could have been predicted beforehand. This bias has particularly detrimental effects in the domain of medical decision making. I present a demonstration of the bias, its contribution to overconfidence, and its involvement in judgments of medical malpractice. Finally, I point out that physicians and psychologists can collaborate to the mutual benefit of both professions.
Current Directions in Psychological Science, 22(5), 356–360. © The Author(s) 2013. DOI: 10.1177/0963721413489988
The hindsight bias manifests in the tendency to exaggerate the extent to which a past event could have been predicted beforehand. First systematically investigated by Fischhoff (1975), the bias is sometimes called "Monday morning quarterbacking" or the "knew-it-all-along effect" (Wood, 1978). The hindsight bias has particularly detrimental effects in the domain of medical decision making. I begin with the classic study demonstrating how the bias diminishes the salutary impact of a medical education exercise.
The Hindsight Bias as an Impediment
to Learning
A clinicopathologic conference (CPC) is a dramatic event at a hospital. A young physician, such as a resident, is given all of the documentation that pertains to a deceased patient except the autopsy report. After studying the material for a week or so, the physician presents the case to the assembled medical staff, going over the case and listing the differential diagnosis, which consists of the several possible diagnoses for this patient. Finally, the presenting physician announces the diagnosis that he or she thinks is the correct one. The presenter then sits down, sweating profusely, as the pathologist who did the autopsy takes the podium and announces the correct diagnosis. The cases are chosen because they are difficult, so the presenting physician's hypothesis often is incorrect.
The CPC is supposed to be an educational experience. Its goal is to enlighten the audience members about diagnosing a particularly challenging case. However, after hearing the pathologist's report, which contradicts the diagnosis made by the presenting physician, many audience members think, "Why aren't we hiring residents as astute as my cohort of residents? This diagnosis was easy." The audience members do not learn from the instructive case presented at the CPC. Instead, they criticize the presenter, because in hindsight, they think the case was relatively obvious.
A CPC is fertile ground for the manifestation of the hindsight bias; after the correct diagnosis is disclosed, it seems as if it could easily have been discerned beforehand. Dawson et al. (1988) interrupted eight CPCs at two critical junctures. At each CPC, after the presenter listed the five possible diagnoses, the researchers asked half of the audience to assign a probability to each diagnosis (i.e., the likelihood that it was correct). These participants were in the foresight group, because they were asked to provide data before the correct diagnosis was revealed. These data were collected, the pathologist then revealed the true diagnosis, and the CPC was paused a second time for the other half of the audience to provide data.
Corresponding Author: Hal R. Arkes, Department of Psychology, Ohio State University, 1827 Neil Ave., 240N Lazenby Hall, Columbus, OH 43210
Hal R. Arkes, Ohio State University

Keywords: hindsight bias, overconfidence, malpractice, decision support systems
The researchers asked them to assign a probability to each of the five possibilities as if they had not just been informed of the right answer. These participants were in the hindsight group, because they were asked to provide data after the correct diagnosis was revealed. For each case, Dawson et al. asked two experts who had attended their domain-appropriate CPC to indicate, on a 10-cm line, what proportion of good clinicians would choose the correct diagnosis. The average of the two experts' ratings was used to divide the eight CPCs into two quartets: one being the four more difficult cases and the other being the four less difficult ones. Dawson et al. divided the attendees into groups of less experienced and more experienced physicians on the basis of their training and seniority. As Figure 1 shows, in three of the four experience–diagnostic-difficulty groups, the hindsight participants estimated the correct answer to be more likely than the foresight group did. The difference averaged approximately 12 percentage points. Hindsight physicians mistakenly think that the case was easier than it really was (as evidenced by the predictions of the foresight subjects). The educational benefit of the CPC is consequently diminished, because the audience members think there is little or nothing to be learned, given that they retrospectively judge their diagnoses to have been relatively accurate.
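The foresight–hindsight comparison just described reduces to a difference in mean probability estimates. As a minimal sketch, the following uses invented numbers (not Dawson et al.'s data), chosen only so that the gap matches the roughly 12-percentage-point difference reported above:

```python
# Minimal sketch of the foresight/hindsight comparison in Dawson et al. (1988).
# All probability values are hypothetical; only the analysis logic follows the text.

def mean(values):
    return sum(values) / len(values)

# Probability (in %) assigned to the correct diagnosis for four CPC cases.
foresight = [30, 22, 35, 28]  # estimated before the true diagnosis was revealed
hindsight = [41, 36, 44, 42]  # estimated after, "as if" it had not been revealed

bias = mean(hindsight) - mean(foresight)  # hindsight bias, in percentage points
print(f"Hindsight bias: {bias:.1f} percentage points")  # -> 12.0 for these values
```

A positive difference means that, once the answer is known, it retrospectively seems more predictable than it actually was.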
Note that the most senior physicians diagnosing the most difficult cases escaped the hindsight bias. They realized that they were not likely to have made the correct diagnosis because a particular disease was so very rare or the presentation of the disease was so very abnormal.
Given that the hindsight bias compromises the educational value of a CPC, it would be helpful if there were some way to diminish its negative impact. Following the guidance of Slovic and Fischhoff (1977), Lord, Lepper, and Preston (1984), and Koriat, Lichtenstein, and Fischhoff (1980), Arkes, Faust, Guilmette, and Hart (1988) modified the typical hindsight design. Before some neuropsychologists stated in hindsight the probability that they would have assigned to the correct diagnosis, they first had to explain how each alternative diagnosis might have been correct. What symptoms might have been consistent with the diagnoses that were not the correct one? This simple exercise reduced the magnitude of the hindsight bias by 71%.
The Hindsight Bias as a Contributor to Overconfidence

The hindsight bias also plays an insidious role in generating unwarranted overconfidence. Consider the case of right-sided heart catheterization, a procedure in which a long slender tube is inserted into an artery, usually in the groin area, and then is very carefully snaked through the circulatory system, ultimately reaching the heart. There the catheter can be used to monitor various aspects of blood flow ("hemodynamic functioning"). Unfortunately, the procedure of catheterization poses some risk to the patient; adverse events do infrequently occur during this procedure. Some physicians assert that various indices of hemodynamic functioning can be accurately and confidently estimated without the use of the catheter; that is, some physicians think that by using only such noninvasive procedures as assessing blood pressure, they can gather enough information about hemodynamic functioning that a catheterization is unnecessary. Thus, adverse events due to catheterization would be avoided.
However, when only noninvasive procedures are used, are confident physicians more likely to be accurate in their estimates than physicians who are not confident in their estimates based solely on noninvasive procedures? Are confident physicians justified in eschewing catheterization because it is not necessary given the presumed high accuracy of their estimates based on noninvasive measures? Dawson et al. (1993) checked whether physicians' confidence in three indices of hemodynamic functioning was appropriate. Before the catheter was inserted into the patient, each physician estimated these three indices and stated his or her confidence in each estimate. Then the catheter was inserted, and the three levels were directly measured. The results were startling: There was no relation whatsoever between the accuracy of an estimate and the confidence a physician assigned to that estimate. One relation involving confidence was statistically significant, however: the relation between years of experience and expressed confidence. Veteran physicians were more confident in their estimates than were junior physicians, even though confidence and accuracy were unrelated. Yang and Thompson (2010) reported a similar finding among nurses.
Fig. 1. Mean estimated probabilities of the correct diagnosis as a function of the timing of the estimates (foresight vs. hindsight), experience of the estimators (less vs. more), and case difficulty (less vs. more difficult). [Figure not reproduced; the plotted probability estimates ranged from roughly 24.5 to 42.]
This research provided important guidance: Physicians' confidence in their estimate of hemodynamic functioning should not be used as a basis for deciding whether a catheterization is needed.

Why was physician confidence unrelated to accuracy? This experiment constituted the first time these physicians ever had to provide a confidence rating for their estimates of hemodynamic status. In their prior experience, they had inserted the catheter, noted the levels of each index, and concluded that each was "pretty much what I thought it would be." This is a manifestation of the hindsight bias, the belief that they "knew it all along." In fact, they did not know it all along. The hindsight bias merely provides unwarranted post hoc confirmation of their ghost estimate of hemodynamic functioning. More senior physicians were more confident because they had experienced more of these bogus "confirmations." By being forced to give an a priori estimate in our experiment, the physicians had experienced their first learning trial. In order to improve one's estimates, one has to make an actual estimate and then receive feedback. Apparently, no prior overt estimates had occurred in the experience of these physicians.
Note that most of the studies cited in the hindsight and overconfidence sections of this article, including Dawson et al. (1993), are 20 years old or more. Because it is difficult to do naturalistic "on-the-job" psychological research with a statistically sufficient number of physicians working in high-stakes situations, more recent research is scarce but greatly needed.
The Hindsight Bias and Judgments of Malpractice

Malpractice verdicts are always rendered from the perspective of hindsight. An adverse event has already taken place, and jurors are asked to consider whether a physician has met the standard of care in his or her treatment of the patient. I once asked a physician if he practiced "defensive medicine," which is generally defined as treatment designed not to promote the health of the patient but instead to reduce the possibility of successful malpractice claims against the practitioner. The physician, who was well versed in the psychology of medical decision making, promptly answered that he did practice defensive medicine. He thought that jurors would succumb to the hindsight bias. They would think that he should have easily been able to make the correct diagnosis. Because of the hindsight bias, the physician would test for every possible diagnosis, no matter how unlikely. Defensive medicine was his way of counteracting the hindsight bias.
This physician made a reasonable argument that was a testament to the power he attributed to the hindsight bias. One possible way to lessen one's vulnerability to the hindsight bias is to use a computer-based decision support system (DSS). Such systems are designed to assist physicians in diagnosis and treatment. They provide advice to practitioners, and they have been shown to be superior to physicians in a wide variety of diagnostic contexts (Dawes, Faust, & Meehl, 1989). Because the use of a DSS might represent modern medicine in the eyes of a juror, perhaps any physician who used such an advanced tool might be insulated from jurors' ire compared with a physician who did not use computer assistance. On the other hand, if the use of a DSS were to be perceived as an abrogation of the physician's responsibility, then the use of a DSS might foster a greater probability of being found liable for malpractice.
Arkes, Shaffer, and Medow (2008) tested these two opposing hypotheses using a realistic video recording of the key portions of a staged malpractice trial. The same adverse medical outcome occurred regardless of whether the physician used a DSS. The eight scenarios also varied the severity of the symptoms and whether the physician heeded the advice of the DSS (or, in the case of the physicians who used no DSS, whether the physician happened to choose the course that would have been recommended by a DSS). The good news is that the use of the aid did not increase mock jurors' willingness to find the physician liable for malpractice. However, if a physician used the aid but defied its recommendation and was found liable, the jurors were more punitive toward the physician than if the aid was not used or was used and heeded. This has led some physicians I have met to decide that they would not want to use a DSS, because our results suggest that if they were to be found liable for malpractice, they would have had to heed the DSS, even if they disagreed with its advice, in order to escape the wrath of punitive jurors. Many times, physicians want to defy the aid even if it would have been better for them to heed it (Dawes et al., 1989).
Why Should a Psychologist Be Interested in Medical Decision Making?
There are at least two reasons why psychologists should become involved in the domain of medical decision making. First, as illustrated in many of these examples, psychologists can contribute to the medical education of physicians, both in their formal training and in their on-the-job experience. Some of the studies reviewed suggest how physicians can reduce their susceptibility to the hindsight bias and improve the calibration of their confidence estimates. Gigerenzer (2002) has demonstrated ways to substantially improve physicians' consideration of risk. Given physicians' surprising difficulty in comprehending health statistics (Wegwarth & Gigerenzer, 2011), techniques that improve physicians' performance in this domain would be extremely valuable. Second, to improve health care and health care decisions, patients and journalists must also improve their understanding of health statistics and medical data. Psychologists have already contributed very substantially in this endeavor (e.g., Garcia-Retamero & Galesic, 2010; Tait, Voepel-Lewis, Zikmund-Fisher, & Fagerlin, 2010), but much more needs to be done.
Not only can psychologists contribute to medicine, but research in the domain of medical decision making can also contribute to psychological theory in at least two ways. First, medical decisions generally involve important, highly consequential situations. However, most psychological theories are tested in much less significant contexts. For example, the earliest demonstration of the hindsight bias (Fischhoff, 1975) used laypersons considering the potential outcomes of obscure historical events. It is important to ascertain whether psychological theories and findings also apply when lives and malpractice verdicts are at stake. Second, most psychology experiments in the domain of decision making are done with descriptions of lotteries, gambles, and risky situations. However, Hertwig, Barron, Weber, and Erev (2004) have shown that decisions based on experience can differ substantially from those based on mere descriptions. Note that physicians use their experience in rendering their decisions, but patients, even the best-informed ones, generally have only a description of the probabilities and possible outcomes. Thus, the medical situation is a good venue in which to study the difference between decision making based on personal experience and decision making based solely on descriptions. I am suggesting that medicine can provide an important realm for the testing and furtherance of psychological theory, and psychologists can contribute to the performance and training of physicians. Members of both professions can benefit from mutual collaboration.
Recommended Reading

Arkes, H. R., & Gaissmaier, W. (2012). Psychological research and the PSA test controversy. Psychological Science, 23, 547–553. Illustrates how a psychology-based explanation can cast light upon a current medical controversy.

Chapman, G. B., & Sonnenberg, F. A. (2000). Decision making in health care: Theory, psychology, and applications. Cambridge, England: Cambridge University Press. Contains a number of excellent chapters on the psychology of medical decision making.

Gigerenzer, G., & Gray, J. A. M. (2011). Better doctors, better patients, better decisions. Cambridge, MA: MIT Press. Contains important information on why both physicians and patients are not making good decisions and how their decision making could be improved.

Marks, M. A. Z., & Arkes, H. R. (2008). Patient and surrogate disagreement in end-of-life decisions: Can surrogates accurately predict patients' preferences? Medical Decision Making, 28, 524–531. Another example of the analysis of an important decision: one made at the very end of life.
Declaration of Conflicting Interests

The author declared no conflicts of interest with respect to the authorship or the publication of this article.

Funding

Some of the research upon which this article is based was funded by the Program in Decision, Risk, and Management Science at the National Science Foundation (Grant SES
References

Arkes, H. R., Faust, D., Guilmette, T. J., & Hart, K. (1988). Eliminating the hindsight bias. Journal of Applied Psychology, 73, 305–307. doi:10.1037/0021-9010.73.2.305

Arkes, H. R., Shaffer, V. A., & Medow, M. A. (2008). The influence of a physician's use of a diagnostic decision aid on the malpractice verdicts of mock jurors. Medical Decision Making, 28, 201–208. doi:10.1177/0272989X07313280

Dawes, R. M., Faust, D., & Meehl, P. E. (1989). Clinical versus actuarial judgment. Science, 243, 1668–1674. doi:10.1126/

Dawson, N. V., Arkes, H. R., Siciliano, C., Blinkhorn, R., Lakshmanan, M., & Petrelli, M. (1988). Hindsight bias: An impediment to accurate probability estimation in clinicopathologic conferences. Medical Decision Making, 8, 259–264. doi:10.1177/0272989X8800800406

Dawson, N. V., Connors, A. F., Jr., Speroff, T., Kemka, A., Shaw, P., & Arkes, H. R. (1993). Hemodynamic assessment in the critically ill: Is physician confidence warranted? Medical Decision Making, 13, 258–266. doi:10.1177/02729

Fischhoff, B. (1975). Hindsight is not equal to foresight: The effect of outcome knowledge on judgment under uncertainty. Journal of Experimental Psychology: Human Perception and Performance, 1, 288–299. doi:10.1037/0096-

Garcia-Retamero, R., & Galesic, M. (2010). How to reduce the effect of framing on messages about health. Journal of General Internal Medicine, 25, 1323–1329. doi:10.1007/

Gigerenzer, G. (2002). Insight. In Calculated risks (pp. 39–54). New York, NY: Simon & Schuster.

Hertwig, R., Barron, G., Weber, E. U., & Erev, I. (2004). Decisions from experience and the effect of rare events in risky choice. Psychological Science, 15, 534–539.

Koriat, A., Lichtenstein, S., & Fischhoff, B. (1980). Reasons for confidence. Journal of Experimental Psychology: Human Learning and Memory, 6, 107–118. doi:10.1037/0278-

Lord, C. G., Lepper, M. R., & Preston, E. (1984). Considering the opposite: A corrective strategy for social judgment. Journal of Personality and Social Psychology, 47, 1231–1243.

Slovic, P., & Fischhoff, B. (1977). On the psychology of experimental surprises. Journal of Experimental Psychology: Human Perception and Performance, 3, 544–551.

Tait, A. R., Voepel-Lewis, T., Zikmund-Fisher, B. J., & Fagerlin, A. (2010). The effect of format on parental understanding of the risks and benefits of clinical research: A comparison between text, tables, and graphs. Journal of Health Communication, 15, 487–501. doi:10.1080/10810730.2010

Wegwarth, O., & Gigerenzer, G. (2011). Statistical illiteracy in doctors. In G. Gigerenzer & J. A. M. Gray (Eds.), Better doctors, better patients, better decisions: Envisioning health care 2020 (pp. 137–151). Cambridge, MA: MIT Press.

Wood, G. (1978). The knew-it-all-along effect. Journal of Experimental Psychology: Human Perception and Performance, 4, 345–353. doi:10.1037/0096-1523.4.2.345

Yang, H., & Thompson, C. (2010). Nurses' risk assessment judgements: A confidence calibration study. Journal of Advanced Nursing, 66, 2751–2760. doi:10.1111/j.1365-
... Wu et al. (2013) reported that those with larger household sizes were more likely to stay in commercial facilities or public shelters. Wu et al. (2012;2013) reported that Whites (Caucasians) were less likely to stay in public shelters. Wu et al. (2012) found that evacuees with higher education, and those with higher income were less likely to stay in public shelters as shown in Table 2, which is consistent with Whitehead et al.'s (2000a) findings. ...
... Married evacuees are more likely to have larger household sizes (e.g., Wu et al., 2012;2013;Lindell et al., 2011) Wizemann et al. (2014) reported that married evacuees were more likely to have children and other vulnerable household members who need to be protected from the impacts of the hurricane. For our study, we anticipate that married evacuees are less likely to stay with peers, possibly because peers may not have enough rooms to accommodate them (the married couple) as well as their children or other vulnerable household members who may evacuate with them. ...
... This study had some findings that were inconsistent with prior research. While previous research (e.g., Wu et al. 2012;2013) reported that total evacuation cost was negatively associated with hotels/motels and positively associated with peers' homes and public shelters, this study found that evacuees with greater extent of concern about evacuation expenses were more likely to stay in hotels/motels than with peers, suggesting that evacuation costs or concern about such costs could be more nuanced than previously thought. Potentially, households who are extremely concerned about evacuation costs cannot afford the travel costs to distant peers. ...
This paper investigates how perceived certainty factors influenced households’ selection of destination and accommodation type during evacuation. Using survey responses from Jacksonville, FL, multinomial logit models were developed for both choices. For the first, greater understanding of hurricane-related graphics decreased households' probability of staying within their community. Households with a member who has special medical needs and those evacuating with a greater number of vehicles were more likely to stay in the eastern portion of their county. Greater perceived certainty about the hurricane impact location decreased households’ probability of evacuating to the south. For the accommodation model, married evacuees and those who received official evacuation notices had increased likelihood of staying in hotels/motels, while those who evacuated a day before landfall were less likely to do so. Greater perceived certainty about hurricane impact time and frequency of communication with social network members increased the probability of staying in a peer’s home.
... Indeed, overconfidence has been linked to diagnostic error (Berner & Graber, 2008;Friedman et al., 2005;Meyer et al., 2013). Berner and Graber (2008) suggest that physicians may develop an "illusion of validity", which makes them overestimate the accuracy of their judgements (Einhorn & Hogarth, 1978). 1 As a result, physicians often anchor on their initial diagnostic hypotheses and become less likely to seek advice or consider other possibilities (Arkes, 2013;Dreiseitl & Binder, 2005). When they do, they may selectively seek information that supports their hypotheses, (Dani, Bowen-Carpenter, & McGown, 2019;Mendel et al., 2011), and/or distort this information in favour their hypotheses (Kostopoulou et al., 2009(Kostopoulou et al., , 2012Leblanc et al., 2001Leblanc et al., , 2002Nurek et al., 2014). ...
Full-text available
Previous research has highlighted the importance of physicians' early hypotheses for their subsequent diagnostic decisions. It has also been shown that diagnostic accuracy improves when physicians are presented with a list of diagnostic suggestions to consider at the start of the clinical encounter. The psychological mechanisms underlying this improvement in accuracy are hypothesised. It is possible that the provision of diagnostic suggestions disrupts physicians' intuitive thinking and reduces their certainty in their initial diagnostic hypotheses. This may encourage them to seek more information before reaching a diagnostic conclusion, evaluate this information more objectively, and be more open to changing their initial hypotheses. Three online experiments explored the effects of early diagnostic suggestions, provided by a hypothetical decision aid, on different aspects of the diagnostic reasoning process. Family physicians assessed up to two patient scenarios with and without suggestions. We measured effects on certainty about the initial diagnosis, information search and evaluation, and frequency of diagnostic changes. We did not find a clear and consistent effect of suggestions and detected mainly non-significant trends, some in the expected direction. We also detected a potential biasing effect: when the most likely diagnosis was included in the list of suggestions (vs. not included), physicians who gave that diagnosis initially, tended to request less information, evaluate it as more supportive of their diagnosis, become more certain about it, and change it less frequently when encountering new but ambiguous information; in other words, they seemed to validate rather than question their initial hypothesis. We conclude that further research using different methodologies and more realistic experimental situations is required to uncover both the beneficial and biasing effects of early diagnostic suggestions.
... Judgment and decision-making can be systematically biased in several ways (Gilovich et al., 2002;Kahneman et al., 1982). Bias can lead to poor judgments and decisions in domains such as intelligence analysis (Dhami et al., 2019;Morewedge et al., 2015), medicine (Arkes, 2013;Bornstein & Emler, 2001) and forensics (Kassim et al., 2013) to name a few. This is compounded by the fact that individuals, including children (Elashi & Mills, 2015;Hagá et al., 2018), often report that they are less susceptible to biases than others-a phenomenon known as the bias blind spot (BBS; Pronin et al., 2002;Pronin & Kugler, 2007;Scopelliti et al., 2015;West et al., 2012). ...
Full-text available
Individuals often assess themselves as being less susceptible to common biases compared to others. This bias blind spot (BBS) is thought to represent a metacognitive error. In this research, we tested three explanations for the effect: The cognitive sophistication hypothesis posits that individuals who display the BBS more strongly are actually less biased than others. The introspection bias hypothesis posits that the BBS occurs because people rely on introspection more when assessing themselves compared to others. The conversational processes hypothesis posits that the effect is largely a consequence of the pragmatic aspects of the experimental situation rather than true metacognitive error. In two experiments (N = 1057) examining 18 social/motivational and cognitive biases, there was strong evidence of the BBS. Among the three hypotheses examined, the conversational processes hypothesis attracted the greatest support, thus raising questions about the extent to which the BBB is a metacognitive effect.
... Arkes gives the example of the clinicopathological conference, in which a physician is given a case to diagnose, omitting the diagnostic pathology. 11 When the true diagnosis is revealed and the physician is shown to have been wrong, members of the audience tend to think in hindsight that the diagnosis was obvious, underestimate the difficulty that their colleague had, and fail to learn from the experience. This induces overconfidence in those whose hindsight misled them. ...
Just as overdetection and overdefinition contribute to a risk of overdiagnosis or misdiagnosis, so does overconfidence. Physicians in general tend to underappreciate the likelihood that their diagnoses are wrong, because of overconfidence in their diagnostic abilities. Overconfidence in intuitive thinking, through long experience, can lead to misdiagnosis when analytical thinking is suppressed; fewer errors are made when both intuitive and analytical thinking are engaged. Furthermore, those who are overconfident tend to adduce evidence that supports their diagnosis and reject evidence that does not. Overconfidence can be bred during medical training. Poorly performing students are more confident in their abilities than their performances suggest, although they also report less confidence in their predictions. Surprisingly, high fidelity (highly realistic) simulation teaching, say with mannekins, may lead to overconfidence and equal or even worse performance and growth in knowledge than low fidelity simulation. Hindsight bias, the tendency to exaggerate the extent to which a past event could have been predicted beforehand, also tends to induce overconfidence.
... Since Fischhoff's (1975) early demonstration that people in retrospect overestimate the likelihood of historical events, hindsight bias in individuals has been documented in many contexts such as legal (Giroux et al., 2016), medical (Arkes, 2013), and economic decisionmaking (Biais & Weber, 2009), election outcomes (Blank et al., 2003), sporting events (Bondsraacke et al., 2001), and scientific findings (Slovic & Fischhoff, 1977). Hindsight bias is a robust and pervasive phenomenon (Christensen-Szalanski & Wilham, 1991;Guilbault et al., 2004) and people are often unaware that they succumbed to hindsight bias (Pohl & Hell, 1996). ...
Full-text available
Hindsight bias not only occurs in individual perception but in written work (e.g., Wikipedia articles) as well. To avoid the possibility that biased written representations of events distort the views of broad audiences, one needs to understand the factors that determine hindsight bias in written work. Therefore, we tested the effect of three potential determinants: the extent to which an event evokes sense-making motivation, the availability of verifiable causal information regarding the event, and the provision of content policies. We conducted one field study examining real Wikipedia articles (N = 40) and three preregistered experimental studies in which participants wrote or edited articles based on different materials (total N = 720). In each experiment, we systematically varied one determinant. Findings provide further-and even more general-support that Wikipedia articles about various events contain hindsight bias. The magnitude of hindsight bias in written work was contingent on the sense-making motivation and the availability of causal information. We did not find support for the effect of content policies. Findings are in line with causal model theory and suggest that some types and topics of written work might be particularly biased by hindsight (e.g., coverage of disasters, research reports, written expert opinions). (PsycInfo Database Record (c) 2022 APA, all rights reserved).
... Given the strong evidence in the literature confirming the presence of hindsight bias in the everyday judgments of laypeople and experts alike (Motavalli & Nestel, 2016; Muntazir et al., 2013), and given the negative implications this phenomenon carries, the question arises of how this cognitive failure can be overcome so as to avoid errors and improve the quality of decision making. One such strategy is to lead the decision maker to consider the opposite, that is, to generate alternative explanations for the outcome (Arkes, 2013; Kahneman, 2012). For example, to avoid relying on a positive result that may have been caused by chance factors, an executive can ask the firm's directors to imagine that the current situation were different, that is, that the expected success had not occurred. ...
Hindsight bias is the phenomenon of perceiving and evaluating events differently once they have occurred. Owing to this cognitive flaw, people tend to have memory distortions and to experience false feelings of inevitability and predictability. The aims of this work were to review the literature on hindsight bias and to discuss both the theoretical and practical implications of the topic. A search of Brazil's main databases did not find any articles related to the topic. From a selection of the main international studies on the subject, an analysis was made showing the cognitive levels of hindsight bias, its implications, and the strategies used to attenuate it. In the final considerations, the impact and importance of knowledge of this phenomenon in the day-to-day lives of decision makers and citizens, as well as the need to explore the topic in Brazilian research, are discussed.
The field of psychology–law is extremely broad, encompassing a strikingly large range of topic areas in both applied psychology and experimental psychology. Despite the continued and rapid growth of the field, there is no current and comprehensive resource that provides coverage of the major topic areas in the psychology–law field. The Oxford Handbook of Psychology and Law is an up-to-date, scholarly, and comprehensive volume that provides broad coverage of psychology–law topics. The field of psychology–law can be broadly divided into applied and experimental domains. Whereas applied specialties in psychology, such as clinical, counseling, neuropsychology, and school, are typically grounded in the scientist-practitioner model that emphasizes both research and the provision of clinical services (e.g., assessment, therapy), experimental psychology focuses almost exclusively on conducting empirical research grounded in theories from areas such as cognitive, developmental, and social psychology. Importantly, both applied and experimental psychologists have made meaningful contributions to the psychology–law field, and each of these domains of the psychology–law field includes a range of well-developed topic areas with robust empirical support. This book provides comprehensive coverage of applied and experimental topic areas, with chapters written by a diverse group of well-established psychology–law scholars and emerging future leaders.
Individuals often assess themselves as being less susceptible to common biases compared to others. This bias blind spot (BBS) is thought to represent a metacognitive error. In this research, we tested three explanations for the effect: The cognitive sophistication hypothesis posits that individuals who display the BBS more strongly are actually less biased than others. The introspection bias hypothesis posits that the BBS occurs because people rely on introspection more when assessing themselves compared to others. The conversational processes hypothesis posits that the effect is largely a consequence of the pragmatic aspects of the experimental situation rather than true metacognitive error. In two experiments (N = 1057) examining 18 social/motivational and cognitive biases, there was strong evidence of the BBS. Among the three hypotheses examined, the conversational processes hypothesis attracted the greatest support, thus raising questions about the extent to which the BBS is a metacognitive effect.
A healthy, immunocompetent South Asian man in his mid-20s, with a medical history of gastric ulcer, presented to Accident & Emergency with pleuritic chest pain, shortness of breath, fever, night sweats, weight loss, dry cough and asymptomatic iron deficiency anaemia. Following his initial assessment and investigations (chest X-ray, CT and blood tests), a diagnosis of miliary tuberculosis (TB) was made and empirical antimicrobial treatment started. However, subsequent microbiological testing, including urine, blood, induced sputum and lymph node sampling, was negative. Being interpreted as non-diagnostic, the antimicrobial therapy was continued. Following a clinical deterioration while on treatment, the patient's case was re-evaluated and further investigations, including a repeat CT and a liver biopsy, confirmed a diagnosis of stage IV (T1aN3bM1) gastric carcinoma. Our case highlights the diagnostic challenges in differentiating metastatic cancer from miliary TB. We also focus on possible cognitive biases that may have influenced the initial management decisions.
Studies of the psychology of hindsight have shown that reporting the outcome of a historical event increases the perceived likelihood of that outcome. Three experiments with a total of 463 paid volunteers show that similar hindsight effects occur when people evaluate the predictability of scientific results—they tend to believe they "knew all along" what the experiments would find. The hindsight effect was reduced, however, by forcing Ss to consider how the research could otherwise have turned out. Implications for the evaluation of scientific research by lay observers are discussed. (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Two experiments with 268 paid volunteers investigated the possibility that assessment of confidence is biased by attempts to justify one's chosen answer. These attempts include selectively focusing on evidence supporting the chosen answer and disregarding evidence contradicting it. Exp I presented Ss with 2-alternative questions and required them to list reasons for and against each of the alternatives prior to choosing an answer and assessing the probability of its being correct. This procedure produced a marked improvement in the appropriateness of confidence judgments. Exp II simplified the manipulation by asking Ss first to choose an answer and then to list (a) 1 reason supporting that choice, (b) 1 reason contradicting it, or (c) 1 reason supporting and 1 reason contradicting. Only the listing of contradicting reasons improved the appropriateness of confidence. Correlational analyses of the data of Exp I strongly suggested that the confidence depends on the amount and strength of the evidence supporting the answer chosen. (21 ref) (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Those who consider the likelihood of an event after it has occurred exaggerate their likelihood of having been able to predict that event in advance. We attempted to eliminate this hindsight bias among 194 neuropsychologists. Foresight subjects read a case history and were asked to estimate the probability of three different diagnoses. Subjects in each of the three hindsight groups were told that one of the three diagnoses was correct and were asked to state what probability they would have assigned to each diagnosis if they were making the original diagnosis. Foresight-reasons and hindsight-reasons subjects performed the same task as their foresight and hindsight counterparts, except they had to list one reason why each of the possible diagnoses might be correct. The frequency of subjects succumbing to the hindsight bias was lower in the hindsight-reasons groups than in the hindsight groups not asked to list reasons, χ²(1, N = 140) = 4.12, p
There is a paucity of information regarding the optimal method of presenting risk/benefit information to parents of pediatric research subjects. This study, therefore, was designed to examine the effect of different message formats on parents' understanding of research risks and benefits. An Internet-administered survey was completed by 4,685 parents who were randomized to receive risk/benefit information about a study of pediatric postoperative pain control presented in different message formats (text, tables, and pictographs). Survey questions assessed participants' gist and verbatim understanding of the information and their perceptions of the risks and benefits. Pictographs were associated with significantly (p < .05) greater likelihood of adequate gist and verbatim understanding compared with text and tables regardless of the participants' numeracy. Parents who received the information in pictograph format perceived the risks to be lower and the benefits to be higher compared with the other formats (p < .001). Furthermore, compared with text and tables, pictographs were perceived as more "effective," "helpful," and "trustworthy" in presenting risk/benefit information. These results underscore the difficulties associated with presenting risk/benefit information for clinical research but suggest a simple method for enhancing parents' informed understanding of the relevant statistics.
Although clinicopathologic conferences (CPCs) have been valued for teaching differential diagnosis, their instructional value may be compromised by hindsight bias. This bias occurs when those who know the actual diagnosis overestimate the likelihood that they would have been able to predict the correct diagnosis had they been asked to do so beforehand. Evidence for the presence of the hindsight bias was sought among 160 physicians and trainees attending four CPCs. Before the correct diagnosis was announced, half of the conference audience estimated the probability that each of five possible diagnoses was correct (foresight subjects). After the correct diagnosis was announced the remaining (hindsight) subjects estimated the probability they would have assigned to each of the five possible diagnoses had they been making the initial differential diagnosis. Only 30% of the foresight subjects ranked the correct diagnosis as first, versus 50% of the hindsight subjects (p < .02). Although less experienced physicians consistently demonstrated the hindsight bias, more experienced physicians succumbed only on easier cases.
It is proposed that several biases in social judgment result from a failure--first noted by Francis Bacon--to consider possibilities at odds with beliefs and perceptions of the moment. Individuals who are induced to consider the opposite, therefore, should display less bias in social judgment. In two separate but conceptually parallel experiments, this reasoning was applied to two domains--biased assimilation of new evidence on social issues and biased hypothesis testing of personality impressions. Subjects were induced to consider the opposite in two ways: through explicit instructions to do so and through stimulus materials that made opposite possibilities more salient. In both experiments the induction of a consider-the-opposite strategy had greater corrective effect than more demand-laden alternative instructions to be as fair and unbiased as possible. The results are viewed as consistent with previous research on perseverance, hindsight, and logical problem solving, and are thought to suggest an effective method of retraining social judgment.
Individuals can be asked to make preoutcome judgments about future events (e.g., a scheduled athletic contest) or past events of which they are ignorant (e.g., whether Benjamin Franklin was fired from his office as Postmaster General of the Colonies). In addition, they can be asked to indicate the likelihood of possible outcomes after outcome information is provided. For postoutcome judgments they are asked to rate the possible outcomes as they would have if they had not been given outcome knowledge. The general finding of previous investigations is that Ss are unable or unwilling to ignore outcome knowledge when making postoutcome judgments as revealed by the fact that postoutcome judgments are better than preoutcome judgments. The inability to ignore outcome knowledge can be labeled the "knew-it-all-along" effect. Two experiments were conducted with 179 undergraduates to assess the reliability, generality, and explanations for this effect by varying outcome knowledge, instructions, materials, and the presence or absence of preoutcome judgments. Results support the reliability and generality of the phenomenon. Preoutcome judgments reduced the size of the "knew-it-all-along" effect, but only when Ss were encouraged to remember their preoutcome judgments while making postoutcome judgments. (PsycINFO Database Record (c) 2012 APA, all rights reserved)
This paper is a report of a study of the relationship between nurses' clinical experience and calibration of their self-confidence and judgement accuracy for critical event risk assessment judgements. Miscalibration (i.e. under-confidence or over-confidence of confidence levels) has an important impact on the quality of nursing care. Despite this, little is known about how nurses' subjective confidence is calibrated with the accuracy of their judgments. A sample of 103 nursing students and 34 experienced nurses were exposed to 25 risk assessment vignettes. For each vignette they made dichotomous judgements of whether the patient in each scenario was at risk of a critical event, and assigned confidence ratings (0-100) to their judgement calls. The clinical vignettes and judgement criteria were generated from real patient cases. The methodology of confidence calibration was used to calculate calibration measures and generate calibration curves. Data were collected between March 2007 and January 2008. Experienced nurses were statistically significantly more confident than students but no more accurate. Whilst students tended towards under-confidence, experienced nurses were over-confident. Experienced nurses were no more calibrated than students. Experienced nurses were no better at discriminating between correct and incorrect judgements than students. These patterns were exacerbated when nurses and students were extremely over-confident or extremely under-confident. Nurses were systematically biased towards over/under-confidence in their critical event risk assessment judgements. In particular, experienced nurses were no better calibrated than their student counterparts; with student under-confidence countered by experienced nurses' greater susceptibility to over-confidence.
Patients must be informed about risks before any treatment can be implemented. Yet serious problems in communicating these risks occur because of framing effects. To investigate the effects of different information frames when communicating health risks to people with high and low numeracy and determine whether these effects can be countered or eliminated by using different types of visual displays (i.e., icon arrays, horizontal bars, vertical bars, or pies). Experiment on probabilistic, nationally representative US (n = 492) and German (n = 495) samples, conducted in summer 2008. Participants' risk perceptions of the medical risk expressed in positive (i.e., chances of surviving after surgery) and negative (i.e., chances of dying after surgery) terms. Although low-numeracy people are more susceptible to framing than those with high numeracy, use of visual aids is an effective method to eliminate its effects. However, not all visual aids were equally effective: pie charts and vertical and horizontal bars almost completely removed the effect of framing. Icon arrays, however, led to a smaller decrease in the framing effect. Difficulties with understanding numerical information often do not reside in the mind, but in the representation of the problem.
Professionals are frequently consulted to diagnose and predict human behavior; optimal treatment and planning often hinge on the consultant's judgmental accuracy. The consultant may rely on one of two contrasting approaches to decision-making--the clinical and actuarial methods. Research comparing these two approaches shows the actuarial method to be superior. Factors underlying the greater accuracy of actuarial methods, sources of resistance to the scientific findings, and the benefits of increased reliance on actuarial approaches are discussed.