Current Directions in Psychological Science, 22(5), 356–360
© The Author(s) 2013
DOI: 10.1177/0963721413489988
cdps.sagepub.com

The Consequences of the Hindsight Bias in Medical Decision Making

Hal R. Arkes
Ohio State University

Abstract
The hindsight bias manifests in the tendency to exaggerate the extent to which a past event could have been predicted beforehand. This bias has particularly detrimental effects in the domain of medical decision making. I present a demonstration of the bias, its contribution to overconfidence, and its involvement in judgments of medical malpractice. Finally, I point out that physicians and psychologists can collaborate to the mutual benefit of both professions.

Keywords
hindsight bias, overconfidence, malpractice, decision support systems

Corresponding Author:
Hal R. Arkes, Department of Psychology, Ohio State University, 1827 Neil Ave., 240N Lazenby Hall, Columbus, OH 43210
E-mail: arkes.1@osu.edu
The hindsight bias manifests in the tendency to exagger-
ate the extent to which a past event could have been
predicted beforehand. First systematically investigated by
Fischhoff (1975), the bias is sometimes called “Monday
morning quarterbacking" or the "knew-it-all-along" effect (Wood, 1978). The hindsight bias has particularly
detrimental effects in the domain of medical decision
making. I begin with the classic study demonstrating how
the bias diminishes the salutary impact of a medical edu-
cation exercise.
The Hindsight Bias as an Impediment to Learning
A clinicopathologic conference (CPC) is a dramatic event
at a hospital. A young physician, such as a resident, is
given all of the documentation that pertains to a deceased patient, except the autopsy report. After studying the
material for a week or so, the physician presents the case
to the assembled medical staff, going over the case and
listing the differential diagnosis, which consists of the
several possible diagnoses for this patient. Finally, the
presenting physician announces the diagnosis that he or
she thinks is the correct one. The presenter then sits
down, sweating profusely, as the pathologist who did the
autopsy takes the podium and announces the correct
diagnosis. The cases are chosen because they are diffi-
cult, so the presenting physician’s hypothesis often is
incorrect.
The CPC is supposed to be an educational experience.
Its goal is to enlighten the audience members about diag-
nosing a particularly challenging case. However, after
hearing the pathologist’s report, which contradicts the
diagnosis made by the presenting physician, many audi-
ence members think, “Why aren’t we hiring residents as
astute as my cohort of residents? This diagnosis was
easy.” The audience members do not learn from the
instructive case presented at the CPC. Instead, they criti-
cize the presenter, because in hindsight, they think the
case was relatively obvious.
A CPC is fertile ground for the manifestation of the
hindsight bias; after the correct diagnosis is disclosed, it
seems as if it could easily have been discerned before-
hand. Dawson et al. (1988) interrupted eight CPCs at two
critical junctures. At each CPC, after the presenter listed
the five possible diagnoses, the researchers asked half of
the audience to assign a probability to each diagnosis
(i.e., the likelihood that it was correct). These participants
were in the foresight group, because they were asked to
provide data before the correct diagnosis was revealed.
These data were collected, the pathologist then revealed
the true diagnosis, and the CPC was paused a second
time for the other half of the audience to provide data.
The researchers asked them to assign a probability to
each of the five possibilities as if they had not just been
informed of the right answer. These participants were in
the hindsight group, because they were asked to provide
data after the correct diagnosis was revealed. For each
case, Dawson et al. asked two experts who had attended
their domain-appropriate CPC to indicate, on a 10-cm
line, what proportion of good clinicians would choose
the correct diagnosis. The average of the two experts’
ratings was used to divide the eight CPCs into two
quartets—one being the four more difficult cases and the
other being the four less difficult ones. Dawson et al.
divided the attendees into groups of less experienced
and more experienced physicians on the basis of their
training and seniority. As Figure 1 shows, in three of the
four experience–diagnostic-difficulty groups, the hind-
sight participants estimated the correct answer to be
more likely than the foresight group did. The difference
averaged approximately 12 percentage points. Hindsight
physicians mistakenly think that the case was easier than
it really was (as evidenced by the predictions of the fore-
sight subjects). The educational benefit of the CPC is con-
sequently diminished, because the audience members
think there is little or nothing to be learned, given that
they retrospectively judge their diagnoses to have been
relatively accurate.
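To make the size of this effect concrete, the bias in a study of this design is simply the gap between the mean probability the hindsight group assigned to the correct diagnosis and the mean probability the foresight group assigned to it. The following minimal sketch uses invented numbers, not Dawson et al.'s data:

```python
# Minimal sketch of how the hindsight bias in a CPC-style study can be
# quantified. The numbers are invented for illustration; they are not
# Dawson et al.'s (1988) data.

def mean(xs):
    return sum(xs) / len(xs)

# Probability (in %) each attendee assigned to the diagnosis that turned
# out to be correct.
foresight = [20, 25, 30, 28, 22]   # estimates made before the reveal
hindsight = [35, 40, 38, 42, 36]   # "as if you had not just been told"

bias = mean(hindsight) - mean(foresight)
print(f"Foresight mean: {mean(foresight):.1f}%")
print(f"Hindsight mean: {mean(hindsight):.1f}%")
print(f"Hindsight bias: {bias:.1f} percentage points")
# A positive difference (roughly 12 points in the actual study) means the
# hindsight group overestimated how predictable the diagnosis was.
```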
Note that the most senior physicians diagnosing the
most difficult cases escaped the hindsight bias. They real-
ized that they were not likely to have made the correct
diagnosis because a particular disease was so very rare or
the presentation of the disease was so very abnormal.
Given that the hindsight bias compromises the educa-
tional value of a CPC, it would be helpful if there were
some way to diminish its negative impact. Following the
guidance of Slovic and Fischhoff (1977), Lord, Lepper,
and Preston (1984), and Koriat, Lichtenstein, and Fischhoff
(1980), Arkes, Faust, Guilmette, and Hart (1988) modified
the typical hindsight design. Before some neuropsychol-
ogists stated in hindsight the probability that they would
have assigned to the correct diagnosis, they first had to
explain how each alternative diagnosis might have been
correct. What symptoms might have been consistent with
the diagnoses that were not the correct one? This simple
exercise reduced the magnitude of the hindsight bias
by 71%.
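As a rough illustration of what that reduction means: if listing reasons shrinks an average overestimate from, say, 14 percentage points to 4, the bias has been cut by roughly 71%. A one-line check with these hypothetical magnitudes:

```python
# Illustration of the reported 71% reduction, using hypothetical bias
# magnitudes (percentage points of overestimation), not the study's data.
bias_standard = 14.0  # hindsight group, no debiasing (hypothetical)
bias_reasons = 4.0    # hindsight group that listed reasons for each
                      # alternative diagnosis (hypothetical)

reduction = 1 - bias_reasons / bias_standard
print(f"Bias reduced by {reduction:.0%}")  # ~71%
```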
Overconfidence
The hindsight bias also plays an insidious role in generat-
ing unwarranted overconfidence. Consider the case of
right-sided heart catheterization, a procedure in which a
long slender tube is inserted into a vein, usually in the
groin area, and then is very carefully snaked through the
circulatory system, ultimately reaching the heart. There
the catheter can be used to monitor various aspects of
blood flow (“hemodynamic functioning”). Unfortunately,
the procedure of catheterization poses some risk to the
patient; adverse events, though infrequent, do occur during this
procedure. Some physicians assert that various indices of
hemodynamic functioning can be accurately and confi-
dently estimated without the use of the catheter; that is,
some physicians think that by using only such noninva-
sive procedures as assessing blood pressure, they can
gather enough information about hemodynamic func-
tioning that a catheterization is unnecessary. Thus,
adverse events due to catheterization would be avoided.
However, when only noninvasive procedures are used, are confident physicians any more accurate in their estimates than their less confident colleagues? Are confident physicians justified in eschewing catheterization on the presumption that their noninvasive estimates are highly accurate? Dawson et al. (1993) checked whether physicians' confidence in three indices of hemodynamic functioning was appropriate. Before the catheter was inserted into the patient, each physician estimated these three indices and stated his or her confidence in each estimate.
Then the catheter was inserted, and the three levels were
directly measured. The results were startling: There was
no relation whatsoever between the accuracy of an esti-
mate and the confidence a physician assigned to that
estimate. One relation involving confidence was statisti-
cally significant, however: the relation between years of
experience and expressed confidence. Veteran physi-
cians were more confident in their estimates than were
junior physicians, even though confidence and accuracy
were unrelated. Yang and Thompson (2010) reported a
similar finding among nurses.
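The analysis at issue is a calibration check: across cases, is stated confidence correlated with the accuracy of the estimate? A minimal sketch of such a check follows, with invented numbers rather than Dawson et al.'s (1993) data or analysis pipeline:

```python
# Sketch of a confidence-calibration check: is a physician's stated
# confidence related to the accuracy of his or her estimate? All numbers
# below are invented for illustration.

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

confidence = [90, 75, 95, 60, 80, 85]        # stated confidence (%)
estimate = [5.2, 4.1, 6.0, 3.5, 4.8, 5.5]    # estimated index (hypothetical)
measured = [3.9, 4.2, 4.4, 3.6, 6.1, 4.0]    # value from the catheter

# Higher (less negative) accuracy = smaller estimation error.
accuracy = [-abs(e - m) for e, m in zip(estimate, measured)]
print(f"Confidence-accuracy correlation: {pearson_r(confidence, accuracy):.2f}")
# Well-calibrated judges show a clearly positive correlation; the startling
# result in the study was a correlation of essentially zero.
```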
[Figure 1: a line graph plotting the mean probability estimate of the correct diagnosis (y-axis, ranging from 24 to 42) against the timing of the estimate, foresight versus hindsight (x-axis); labeled values are 41.1 (more experienced), 40.2 (less experienced), 38.5 (less experienced), and 24.5 (more experienced).]
Fig. 1. Mean estimated probabilities of the correct diagnosis as a function of the timing of the estimates (foresight vs. hindsight), experience of the estimators (less = thin lines vs. more = bold lines), and case difficulty (less difficult = solid lines vs. more difficult = dashed lines).
This research provided important guidance: Physicians’
confidence in their estimate of hemodynamic functioning
should not be used as a basis for deciding whether a
catheterization is needed.
Why was physician confidence unrelated to accuracy?
This experiment constituted the first time these physi-
cians ever had to provide a confidence rating for their
estimates of hemodynamic status. In their prior experi-
ence, they had inserted the catheter, noted the levels of
each index, and concluded that each was “pretty much
what I thought it would be.” This is a manifestation of
the hindsight bias, the belief that they “knew it all along.”
In fact, they did not know it all along. The hindsight
bias merely provides unwarranted post hoc confirmation
of their ghost estimate of hemodynamic functioning.
More senior physicians were more confident
because they had experienced more of these bogus “con-
firmations.” By being forced to give an a priori estimate in
our experiment, the physicians had experienced their first
learning trial. In order to improve one’s estimates, one has
to make an actual estimate and then receive feedback. No
prior overt estimates had apparently occurred in the
experience of these physicians.
Note that most of the studies cited in the hindsight and overconfidence sections of this article, such as Dawson et al. (1993), are 20 years old or more. Because it is difficult to
do naturalistic “on-the-job” psychological research with a
statistically sufficient number of physicians working in
high-stakes situations, research that is more recent is
scarce but greatly needed.
Malpractice
Malpractice verdicts are always rendered from the per-
spective of hindsight. An adverse event has already taken
place, and jurors are asked to consider whether a physi-
cian has met the standard of care in his or her treatment of
the patient. I once asked a physician if he practiced “defen-
sive medicine,” which is generally defined as treatment not
designed to promote the health of the patient but instead
to reduce the possibility of successful malpractice claims
against the practitioner. The physician, who was well
versed in the psychology of medical decision making,
promptly answered that he did practice defensive medi-
cine. He thought that jurors would succumb to the hind-
sight bias. They would think that he should have easily
been able to make the correct diagnosis. Because of the hindsight bias, the physician would test for every possible diagnosis, no matter how unlikely. Defensive medicine was his
way of counteracting the hindsight bias.
This physician made a reasonable argument that was
a testament to the power he attributed to the hindsight
bias. One possible way to lessen one’s vulnerability to
the hindsight bias is to use a computer-based decision
support system (DSS). Such systems are designed to
assist physicians in diagnosis and treatment. They pro-
vide advice to practitioners, and they have been shown
to be superior to physicians in a wide variety of diagnos-
tic contexts (Dawes, Faust, & Meehl, 1989). Because the
use of a DSS might represent modern medicine in the
eyes of a juror, perhaps any physician who used such an
advanced tool might be insulated from jurors’ ire com-
pared with a physician who did not use computer assis-
tance. On the other hand, if the use of a DSS were to be
perceived as an abrogation of the physician’s responsibil-
ity, then the use of a DSS might foster greater probability
of being found liable for malpractice.
Arkes, Shaffer, and Medow (2008) tested these two
opposing hypotheses using a realistic video recording of
the key portions of a staged malpractice trial. The same
adverse medical outcome occurred regardless of whether
the physician used a DSS. The eight scenarios also varied
the severity of the symptoms and whether the physician
heeded the advice of the DSS (or in the case of the physi-
cians who used no DSS, whether the physician happened
to choose the course that would have been recom-
mended by a DSS). The good news is that the use of the
aid did not increase mock jurors’ willingness to find the
physician liable for malpractice. However, if a physician
used the aid but defied its recommendation and was
found liable, the jurors were more punitive toward the
physician than if the aid was not used or was used and
heeded. This has led some physicians I have met to
decide that they would not want to use a DSS, because
our results suggest that if they were to be found liable for
malpractice, they would have had to heed the DSS, even
if they disagreed with its advice, in order to escape the
wrath of punitive jurors. Many times, physicians want to
defy the aid even if it would have been better for them to
heed it (Dawes et al., 1989).
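For readers keeping track of the design, the eight scenarios arise from crossing three two-level factors: whether a DSS was used, symptom severity, and whether the chosen course matched the DSS recommendation. A short sketch enumerating the combinations, with factor labels paraphrased from the text rather than taken from the authors' materials:

```python
# Sketch of the 2 x 2 x 2 factorial structure behind the eight scenarios.
# Factor labels are paraphrased from the text, not the study's materials.
from itertools import product

dss_use = ["no DSS", "used DSS"]
severity = ["milder symptoms", "more severe symptoms"]
agreement = ["followed DSS-recommended course", "departed from it"]

for i, scenario in enumerate(product(dss_use, severity, agreement), start=1):
    print(i, scenario)
# Eight combinations in all; the adverse outcome was held constant across
# them, so only the physician's decision process varied.
```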
Why Should a Psychologist Be Interested in Medical Decision Making?
There are at least two reasons why psychologists should
become involved in the domain of medical decision mak-
ing. First, as illustrated in many of these examples, psy-
chologists can contribute to the medical education
of physicians, both in their formal training and in their
on-the-job experience. Some of the studies reviewed sug-
gest how physicians can reduce their susceptibility to
the hindsight bias and improve the calibration of their
confidence estimates. Gigerenzer (2002) has demon-
strated ways to substantially improve physicians’ consid-
eration of risk. Given physicians’ surprising difficulty in
comprehending health statistics (Wegwarth & Gigerenzer,
2011), techniques that improve physicians’ performance
in this domain would be extremely valuable. Second, to
improve health care and health care decisions, patients
and journalists must also improve their understanding of
health statistics and medical data. Psychologists have
already contributed very substantially in this endeavor
(e.g., Garcia-Retamero & Galesic, 2010; Tait, Voepel-
Lewis, Zikmund-Fisher, & Fagerlin, 2010), but much more
needs to be done.
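One concrete technique from this line of work is Gigerenzer's use of natural frequencies in place of conditional probabilities when communicating risk. The sketch below translates a screening problem into natural frequencies; the prevalence, sensitivity, and false-positive values are hypothetical:

```python
# Sketch of the natural-frequency technique Gigerenzer advocates for
# communicating risk. The screening numbers below are hypothetical.

population = 1000
prevalence = 0.01    # 1% of patients have the disease
sensitivity = 0.90   # P(positive test | disease)
false_alarm = 0.09   # P(positive test | no disease)

sick = population * prevalence                       # 10 people
true_positives = sick * sensitivity                  # 9 people
false_positives = (population - sick) * false_alarm  # ~89 people

ppv = true_positives / (true_positives + false_positives)
print(f"Of {true_positives + false_positives:.0f} positive tests, "
      f"{true_positives:.0f} are true positives (PPV = {ppv:.0%}).")
# Framed as "9 of roughly 98 positives actually have the disease," the
# statistic is far easier to grasp than the equivalent Bayes computation
# on conditional probabilities.
```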
Not only can psychologists contribute to medicine, but
also research in the domain of medical decision making
can contribute to psychological theory in at least two
ways. First, medical decisions generally involve impor-
tant, highly consequential situations. However, most psy-
chological theories are tested in much less significant
contexts. For example, the earliest demonstration of the
hindsight bias (Fischhoff, 1975) used laypersons consid-
ering the potential outcomes of obscure historical events.
It is important to ascertain whether psychological theo-
ries and findings also apply when lives and malpractice
verdicts are at stake. Second, most psychology experi-
ments in the domain of decision making are done with
descriptions of lotteries, gambles, and risky situations.
However, Hertwig, Barron, Weber, and Erev (2004) have
shown that decisions based on experience can differ sub-
stantially from those based on mere descriptions. Note
that physicians use their experience in rendering their
decisions, but patients—even the best informed ones—
generally have only a description of the probabilities and
possible outcomes. Thus, the medical situation is a good
venue in which to study the difference between decision
making based on personal experience and decision
making based solely on descriptions. I am suggesting
that medicine can provide an important realm for the
testing and furtherance of psychological theory, and psy-
chologists can contribute to the performance and training
of physicians. Members of both professions can benefit
from mutual collaboration.
Recommended Reading
Arkes, H. R., & Gaissmaier, W. (2012). Psychological research
and the PSA test controversy. Psychological Science, 23,
547–553. Illustrates how a psychology-based explanation
can cast light upon a current medical controversy.
Chapman, G. B., & Sonnenberg, F. A. (2000). Decision mak-
ing in health care: Theory, psychology, and applications.
Cambridge, England: Cambridge University Press. Contains
a number of excellent chapters on the psychology of medi-
cal decision making.
Gigerenzer, G., & Gray, J. A. M. (2011). Better doctors, better
patients, better decisions. Cambridge, MA: MIT Press. Contains
important information on why both physicians and patients
are not making good decisions and how their decision
making could be improved.
Marks, M. A. Z., & Arkes, H. R. (2008). Patient and surrogate dis-
agreement in end-of-life decisions: Can surrogates accurately
predict patients’ preferences? Medical Decision Making, 28,
524–531. Another example of the analysis of an important
decision—one made at the very end of life.
Declaration of Conflicting Interests
The author declared no conflicts of interest with respect to the
authorship or the publication of this article.
Funding
Some of the research upon which this article is based
was funded by the Program in Decision, Risk, and Manage-
ment Science at the National Science Foundation (Grant SES
0326468).
References
Arkes, H. R., Faust, D., Guilmette, T. J., & Hart, K. (1988).
Eliminating the hindsight bias. Journal of Applied Psychol-
ogy, 73, 305–307. doi:10.1037/0021-9010.73.2.305
Arkes, H. R., Shaffer, V. A., & Medow, M. A. (2008). The influ-
ence of a physician’s use of a diagnostic decision aid on
the malpractice verdicts of mock jurors. Medical Decision
Making, 28, 201–208. doi:10.1177/0272989X07313280
Dawes, R. M., Faust, D., & Meehl, P. E. (1989). Clinical versus
actuarial judgment. Science, 243, 1668–1674. doi:10.1126/
science.2648573
Dawson, N. V., Arkes, H. R., Siciliano, C., Blinkhorn, R.,
Lakshmanan, M., & Petrelli, M. (1988). Hindsight bias:
An impediment to accurate probability estimation in clini-
copathologic conferences. Medical Decision Making, 8,
259–264. doi:10.1177/0272989X8800800406
Dawson, N. V., Connors, A. F., Jr., Speroff, T., Kemka, A.,
Shaw, P., & Arkes, H. R. (1993). Hemodynamic assess-
ment in the critically ill: Is physician confidence warranted?
Medical Decision Making, 13, 258–266. doi:10.1177/0272989X9301300314
Fischhoff, B. (1975). Hindsight ≠ foresight: The effect of outcome knowledge on judgment under uncertainty. Journal of Experimental Psychology: Human Perception and Performance, 1, 288–299. doi:10.1037/0096-1523.1.3.288
Garcia-Retamero, R., & Galesic, M. (2010). How to reduce
the effect of framing on messages about health. Journal
of General Internal Medicine, 25, 1323–1329. doi:10.1007/
s11606-010-1484-9
Gigerenzer, G. (2002). Insight. In Calculated risks (pp. 39–54). New York, NY: Simon & Schuster.
Hertwig, R., Barron, G., Weber, E. U., & Erev, I. (2004).
Decisions from experience and the effect of rare events in
risky choice. Psychological Science, 15, 534–539.
Koriat, A., Lichtenstein, S., & Fischhoff, B. (1980). Reasons for
confidence. Journal of Experimental Psychology: Human
Learning and Memory, 6, 107–118. doi:10.1037/0278-
7393.6.2.107
Lord, C. G., Lepper, M. R., & Preston, E. (1984). Considering the opposite: A corrective strategy for social judgment. Journal of Personality and Social Psychology, 47, 1231–1243. doi:10.1037/0022-3514.47.6.1231
Slovic, P., & Fischhoff, B. (1977). On the psychology of experi-
mental surprises. Journal of Experimental Psychology:
Human Perception and Performance, 3, 544–551.
Tait, A. R., Voepel-Lewis, T., Zikmund-Fisher, B. J., & Fagerlin,
A. (2010). The effect of format on parental understanding
of the risks and benefits of clinical research: A compari-
son between text, tables, and graphs. Journal of Health
Communication, 15, 487–501. doi:10.1080/10810730.2010.492560
Wegwarth, O., & Gigerenzer, G. (2011). Statistical illiteracy in
doctors. In G. Gigerenzer & J. A. M. Gray (Eds.), Better
doctors, better patients, better decisions: Envisioning health
care 2020 (pp. 137–151). Cambridge, MA: MIT Press.
Wood, G. (1978). The knew-it-all-along effect. Journal
of Experimental Psychology: Human Perception and
Performance, 4, 345–353. doi:10.1037/0096-1523.4.2.345
Yang, H., & Thompson, C. (2010). Nurses’ risk assessment
judgements: A confidence calibration study. Journal of
Advanced Nursing, 66, 2751–2760. doi:10.1111/j.1365-
2648.2010.05437.x
Professionals are frequently consulted to diagnose and predict human behavior; optimal treatment and planning often hinge on the consultant's judgmental accuracy. The consultant may rely on one of two contrasting approaches to decision-making--the clinical and actuarial methods. Research comparing these two approaches shows the actuarial method to be superior. Factors underlying the greater accuracy of actuarial methods, sources of resistance to the scientific findings, and the benefits of increased reliance on actuarial approaches are discussed.