Reducing Diagnostic Errors in Medicine:
What’s the Goal?
Mark Graber, MD, Ruthanna Gordon, MA, and Nancy Franklin, PhD
ABSTRACT
This review considers the feasibility of reducing or eliminating the three major categories of diagnostic errors in medicine:

1. "No-fault errors" occur when the disease is silent, presents atypically, or mimics something more common. These errors will inevitably decline as medical science advances, new syndromes are identified, and diseases can be detected more accurately or at earlier stages. These errors can never be eradicated, unfortunately, because new diseases emerge, tests are never perfect, patients are sometimes noncompliant, and physicians will inevitably, at times, choose the most likely diagnosis over the correct one, illustrating the concept of necessary fallibility and the probabilistic nature of choosing a diagnosis.

2. "System errors" play a role when diagnosis is delayed or missed because of latent imperfections in the health care system. These errors can be reduced by system improvements, but can never be eliminated because these improvements lag behind and degrade over time, and each new fix creates the opportunity for novel errors. Tradeoffs also guarantee system errors will persist, when resources are just shifted.

3. "Cognitive errors" reflect misdiagnosis from faulty data collection or interpretation, flawed reasoning, or incomplete knowledge. The limitations of human processing and the inherent biases in using heuristics guarantee that these errors will persist. Opportunities exist, however, for improving the cognitive aspect of diagnosis by adopting system-level changes (e.g., second opinions, decision-support systems, enhanced access to specialists) and by training designed to improve cognition or cognitive awareness.

Diagnostic error can be substantially reduced, but never eradicated.

Acad. Med. 2002;77:981–992.
The search for zero error rates is doomed from the start.
—Donald M. Berwick1
Stimulated by the Institute of Medicine report To Err is Human, published in 1999, the health care industry is rapidly mobilizing to address the problem of preventable errors in medicine.2 Although the direction in which we need to move is clear, the ultimate objective needs better definition. If we wish to reduce errors in medical care, what is the appropriate goal?
Dr. Graber is chief of medical service at the VA Medical Center, Northport,
New York, and professor and vice chairman of the Department of Medicine
at SUNY Stony Brook, Stony Brook, New York; Ms. Gordon is a doctoral
candidate in psychology, and Dr. Franklin is associate professor of psychol-
ogy, both at SUNY Stony Brook.
Correspondence and requests for reprints should be addressed to Dr. Graber,
Chief, Medical Service– 111, VA Medical Center, Northport, NY 11768.
BACKGROUND
At the 1998 Annenberg Conference on Patient Safety, Nancy W. Dickey, MD, past president of the American Medical Association, asserted that "the only acceptable error rate is zero."3 Gordon M. Sprenger, CEO of the Allina Health System and chair of the American Hospital Association Board of Trustees, reiterated this theme at the 2001 Annenberg safety colloquium: "Let's be absolutely clear on this: The goal of the patient safety movement must be to eliminate all errors. This is like climbing Mount Everest, but it must be our goal and it can be done."4
Diagnostic errors comprise a substantial and costly fraction of all medical errors. In the Harvard Medical Practice Study of hospitals in New York State, diagnostic errors represented the second largest cause of adverse events.5 Similarly, diagnostic errors are the second leading cause for malpractice suits against hospitals.6 A recent study of autopsy findings identified diagnostic discrepancies in 20% of cases, and the authors estimated that in almost half of these cases knowledge of the correct diagnosis would have changed the treatment plan.7
Table 1
Categories of Diagnostic Errors

No-fault errors
  Unusual or silent presentation of disease – Missed diagnosis of appendicitis in an elderly patient with no abdominal pain
  Uncertainty regarding the state of the world – Missed diagnosis because the patient is inconsistent or confusing in presenting his symptoms
  Lack of patient cooperation – Missed diagnosis of colon cancer in patient who refused screening colonoscopies
  Limitations of medical knowledge – Missed diagnosis of Lyme disease in the era before this was recognized as a specific entity
  Failure of normative processes – Wrong diagnosis of a common cold in a patient ultimately found to have mononucleosis

System errors
  Technical failures
    Faulty test or data – Wrong diagnosis of urine infection from urine left too long before culture
    Lack of appropriate equipment or tests – Missed colon cancer because flexible sigmoidoscopy performed instead of colonoscopy
  Organizational failures
    Inadequate pursuit of noncompliant patient – Abnormal test results not appreciated because patient missed scheduled appointment
    Unavailability of needed expertise – Fracture missed by emergency department staff (radiologist not available)
    Inefficient processes – Delay in diagnosis of lung cancer due to inefficient coordination of outpatient care
    Failure to adequately supervise – Diagnoses missed by trainees (supervising attending physician not available)
    Patient neglect – Abnormal test results detected but not followed up
    External interference (e.g., from HMO) – Delay or missed diagnosis because testing not approved by patient's plan
    Policy failures – Delay in diagnosis of pulmonary embolus (nuclear medicine section not open on weekends)
    Inadequate training or orientation – Delays in diagnosis related to new trainees' not knowing how to navigate the system efficiently
    Culture (e.g., tolerance of error) – Sustained backlogs in reading X-ray films leading to delayed or wrong diagnoses
    Failure to coordinate care – Delay of inpatient diagnosis: ward team not informed patient was admitted

Cognitive errors
  Inadequate knowledge – Wrong diagnosis of ventricular tachycardia on ECG with electrical artifact simulating this arrhythmia
  Faulty data gathering – Missed diagnosis of breast cancer from failure to perform breast examination
  Faulty information processing – Failing to perceive the lung nodule on a patient's chest X-ray
  Faulty metacognition – Wrong diagnosis of degenerative arthritis (no further tests ordered) in a patient with septic arthritis
Given their enormous human and economic impact, the complete elimination of diagnostic errors would seem to be an appropriate and worthwhile goal. The purpose of this review is to consider the available evidence regarding the feasibility of this endeavor. We approach this question by first identifying three major types of diagnostic errors. For each type, we then consider possible approaches to decreasing the incidence of diagnostic errors. Finally, we examine whether there are any practical or theoretical limits that would preclude us from eliminating diagnostic errors altogether.
TYPES OF DIAGNOSTIC ERRORS
The nature of clinical decision making has been clarified over the past several decades, and a variety of systems have been proposed to classify diagnostic errors.8–15 For the purposes of this discussion, we postulate that every diagnostic error can be assigned to one of three broad etiologic categories (Table 1):

"No-fault errors," following Kassirer and Kopelman, include cases where the illness is silent, or masked, or presents in such an atypical fashion that divining the correct diagnosis, with the current state of medical knowledge, would not be expected.13 Other examples would include the rare condition misdiagnosed as something more common, and the diagnosis missed because the patient does not present his or her symptoms clearly. A diagnosis missed or delayed because of patient noncompliance might also be viewed as a no-fault error.

System errors reflect latent flaws in the health care system. Included in this category are weak policies, poor
coordination of care, inadequate training or supervision, defective communication, and the many system factors that detract from optimal working conditions, such as stress, fatigue, distractions, and excessive workload. These problems can affect all the diagnosticians in the involved health care system.

Cognitive errors are those in which the problem is inadequate knowledge or faulty data gathering, inaccurate clinical reasoning, or faulty verification.9,13 Examples include flawed perception, faulty logic, falling prey to biased heuristics, and settling on a final diagnosis too early. These are all errors on the part of an individual diagnostician.

Each of these three categories of diagnostic errors carries its own prognosis for error reduction, and we consider them each in turn.
No-fault Errors
Lucky is the patient (and his or her physician) whose disease presents in classical textbook fashion. For many conditions the classical presentation is the exception and the spectrum of possible disease presentations is broad. An example is the classical "thunderclap" headache of subarachnoid hemorrhage. This prototypical finding is only present in 20–50% of patients, and failure to appreciate the various other ways that such patients may present accounts for a substantial fraction of the patients with subarachnoid hemorrhage in whom the diagnosis is missed.16 Analogously, patients over 80 years old are more likely to have atypical presentations of myocardial infarction than the typical chest pain that is the hallmark of a heart attack in younger patients.17
Every week, the New England Journal of Medicine entertains and instructs clinicians with the "Case Records of the Massachusetts General Hospital." In this venerable exercise, a case is presented to an expert clinician, who is challenged to identify the "correct" diagnosis. Many of the factors that contribute to diagnostic errors in everyday life have been carefully eliminated in these vignettes: There are no lab errors, no chance to fail to perceive the crucial finding, the clinician has the luxury of time to research all the relevant possibilities, and the diagnosis is made in the absence of everyday stress, fatigue, and distractions. Even in this idealized setting, however, the correct diagnosis is often missed. In the cases presented from 1989 to 1996, the error rate among all case discussants was 25% (excluding cases analyzed by the diagnostically gifted physicians from the presenting hospital, who had an error rate of only 5%!).18 In most cases these errors reflect the unusual ways in which even common conditions can manifest, and in some cases the illness being discussed may even represent a new disease, or a new variant. These no-fault errors are probably overrepresented in cases chosen to be diagnostic challenges, but the ability of diseases to remain silent or present in atypical fashion is encountered in every clinical setting.
Can we reduce no-fault diagnostic errors? To the extent that no-fault diagnostic errors represent shortcomings of medical knowledge or testing, it is a virtual certainty that these errors will decrease over the long term as knowledge advances. Before the appreciation of Lyme disease as a specific entity and our ability to specifically test for the condition, all of these cases were misdiagnosed as, for example, atypical rheumatoid arthritis. The evolving ability to test for disease at a point when the clinical manifestations are minimal or absent is another way in which no-fault errors will be reduced. Consider the ability to detect pre-symptomatic cancer with appropriate screening tests, or the ability to detect silent hyperparathyroidism from the routine measurement of serum calcium. These examples illustrate the concept that advances in medical knowledge and disease detection will inevitably reduce the number of no-fault errors. This process may even accelerate as we become increasingly able to use genetic markers to detect disease predispositions before there are any clinical manifestations.

Can we eliminate no-fault diagnostic errors? As a practical matter, it seems unlikely that we will ever have a complete enough armamentarium to test for every possible disease. There will always be patients whose disease exists in a silent, preclinical stage, eluding detection. In other patients, the disease may be clinically manifest, but present in such an atypical fashion that the true diagnosis is missed. A final likelihood is that new diseases or new pathogens, or new side effects of yet-to-be-invented medications, will emerge over time. The first patients to develop these novel entities will be misdiagnosed until the new syndromes are defined and characterized.
Other immortal no-fault errors are those arising from the use of normative approaches to choose the most likely diagnosis. For example, when faced with uncertainty, the normative approach suggests we should pick as a working diagnosis the entity with the highest likelihood. Averaged over all diagnoses this will produce the highest number of correct ones.19 As emphasized by Arkes, however, using the normative approach guarantees that, with some regularity, the clinician will choose the most likely diagnosis instead of the correct one.3 It is inevitable that a more rare condition will sometimes exist in cases where we suspect something more common. Although technically this should be considered a cognitive error, it is inappropriate to fault the clinician whose diagnosis is wrong as a result. As Arkes concludes, "There is no solution to the problem, which is an unavoidable consequence of the probabilistic nature of the relationship between disease and symptoms."3
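A small worked example makes the point concrete (the probabilities are illustrative assumptions for this sketch, not figures from the literature). Suppose a given presentation is produced by a common disease with probability 0.9 and by a rare mimic with probability 0.1, and no further discriminating information is available:

```latex
% Illustrative priors only (assumed for this sketch).
\[
P(D_{\mathrm{common}} \mid \text{findings}) = 0.9, \qquad
P(D_{\mathrm{rare}} \mid \text{findings}) = 0.1
\]
% Always naming the more likely disease yields an expected accuracy of 0.9;
% naming the rare disease with probability q yields 0.9(1-q) + 0.1q = 0.9 - 0.8q,
% which is worse for any q > 0. Yet the 10% of patients who truly have the
% rare disease are then always misdiagnosed.
```

Under these assumed numbers, the normative strategy is optimal on average but carries an irreducible 10% error rate, which is the probabilistic component of error the authors describe.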
Several patient-specific factors contribute to the impossibility of eliminating no-fault diagnostic errors. One of these is patient noncompliance. Although many interventions are under way to improve compliance with medical care and participation in screening programs, patients' compliance is never guaranteed. Their ultimate participation, or lack of participation, in medical care may be influenced by busy personal schedules, religious beliefs, attraction to alternative medicine, or distrust.20
A second patient-related factor contributing to no-fault errors is the inherent variability in how patients perceive and describe their states of health or their active symptoms. The information a patient gives may be confusing, contradictory, or inaccurate.21 Without understanding each patient's personal background, context, and belief systems, it may be impossible for the physician to accurately comprehend the true state of affairs.
This problem illustrates the philosophical argument of "necessary fallibility" presented by Gorovitz and MacIntyre.22 This concept applies to all fields of cognitive endeavor, including clinical reasoning in medicine, and suggests that ultimately the state of the world is too complex to be fully knowable: "It is inherent in the nature of medical practice that error is unavoidable, not merely because of the limitations of human knowledge or even the limits of human intellect, but rather because of the fundamental epistemological feature of a science of particulars." Science can never predict the exact course of a hurricane because of the infinitely many interacting environmental and topographical attributes.23 Similarly, medical diagnosticians will be forever challenged by the subtle and unknowable interplay of variables (in the disease agent, in the host response, in the environment, in how the patient describes his or her symptoms, in testing, and even in the physician's powers of observation) that determine how a disease will present itself and be perceived by the clinician. Kennedy has argued that the state of uncertainty surrounding clinical decision making is so profound and pervasive that it may be inappropriate to judge the quality of medical diagnosis using the usual standards of rational decision making.24
System Errors
The prevailing paradigm acknowledges that error in medical care has two distinct roots: At the "sharp end" is the individual provider who interacts with the patient and makes the mistake. At the "blunt end" are the latent flaws in the health care system that provide the setting, the framework, and the predisposition for the error to occur.25,26 Blunt-end factors include the system's organizational structure, culture, policies and procedures, the resources provided, the ground rules for communication and interaction, and performance detractors such as excess provider workload.
Can we reduce diagnostic errors related to system issues? Reflecting work by Reason,25,27 Leape,28,29 and others,30,31 the dominant role of system factors has assumed center stage in both understanding and correcting errors in medicine. As summarized by Bogner, "a systems approach is necessary to effectively address human error in medicine,"32 and this theme was repeatedly endorsed in the Institute of Medicine report, To Err is Human.2 Compared with error-improvement strategies that focus on individual providers, system-level changes have the advantage of potentially decreasing error rates for all involved providers, and over extended periods of time.33
Laboratory errors provide an example of how powerful system interventions can be in reducing diagnostic errors. The scope and accuracy of medical diagnosis increased dramatically in the 20th century in parallel with the emergence of laboratory testing and other diagnostic tests, such as X-rays and electrocardiograms. In the beginning, however, the accuracy and reliability of medical testing varied widely. A survey in 1949 of 18 leading clinical laboratories in Connecticut found that over one third of the clinical lab results were unacceptable. Similarly, a mid-century national survey by the Centers for Disease Control and Prevention estimated that more than a fourth of all laboratory testing results nationwide were unacceptable.34 This problem eventually resulted in standardization and regulation, inspection, and insistence on quality control. Although lab errors are still with us, they are now rare as a result of such changes.

Diagnostic errors related to delays are common and represent a large area where system interventions could be effective. Delayed diagnosis often reflects inefficiency in diagnostic evaluation, suboptimal coordination of care, or lack of effective communication. These are all system factors that can be optimized through attention to system design and performance.
Can we eliminate diagnostic errors related to system issues? Despite great hope for reducing diagnostic errors related to system-dependent factors, it will not be possible to totally eliminate system-related errors. At least four factors support this conclusion:

System improvements degrade over time. Permanent improvement remains the ideal, but new policies are eventually forgotten, organizational changes vary with the assigned staff, and the enthusiasm for improvement wanes. The "fix," though correcting the old problem, may introduce entirely new opportunities for error.

Systems must necessarily evolve in step with the evolution of health care technology and management. Systems will develop and mature along a learning curve, and system repairs will always, to some extent, lag behind and fall short of achieving perfection.
Whereas the ideal fix would eliminate the chance for errors, in practice we often encounter tradeoffs: the opportunity for errors is reduced in one system, but increased in another. A current example is the movement to limit the number of hours resident trainees can work without rest or sleep. Although limits on work hours may decrease errors related to fatigue, new problems may arise from the inevitable hand-offs created by new coverage systems and new problems of coordinating care when the physician who knows the patient best is now present only eight to 12 hours of the 24-hour work day.35,36 A related weakness has been pointed out by Perrow:

Fixes, including safety devices, sometimes create new accidents, and quite often merely allow those in charge to run the system faster, or in worse weather, or with bigger explosives.37
Signal detection theory illustrates the inevitability of tradeoffs. Originally introduced to describe the perceptual performance of radar operators, signal detection theory can describe any situation in which a yes–no decision is made. For example, the radiologist must decide whether a chest X-ray shows a tumor or is normal. Problems arise, however, for tumors that are difficult to appreciate, and when chance confluences of normal shadows can simulate a tumor when none is there. The radiologist must choose a threshold beyond which the report will indicate a tumor is present. With a lower threshold, the radiologist will have a higher sensitivity in detecting tumors, but at the expense of more false positives. With a more stringent threshold, false alarms will be minimized, but at the expense of missing some tumors.
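This threshold tradeoff can be sketched with a short simulation (illustrative only; the score distributions below are assumed for the example and are not taken from the radiology literature):

```python
import random

# Signal-detection sketch: "abnormality scores" for films without and with a
# tumor are modeled as two overlapping Gaussian distributions (assumed values).
random.seed(0)
normal_scores = [random.gauss(0.0, 1.0) for _ in range(100_000)]  # no tumor
tumor_scores = [random.gauss(1.5, 1.0) for _ in range(100_000)]   # tumor present

# Raising the reporting threshold lowers the false-positive rate,
# but only at the cost of sensitivity; no threshold removes both errors.
for threshold in (0.5, 1.0, 1.5, 2.0):
    sensitivity = sum(s > threshold for s in tumor_scores) / len(tumor_scores)
    false_positive_rate = sum(s > threshold for s in normal_scores) / len(normal_scores)
    print(f"threshold {threshold:.1f}: sensitivity {sensitivity:.2f}, "
          f"false-positive rate {false_positive_rate:.2f}")
```

Because the two score distributions overlap, every choice of threshold leaves some mixture of missed tumors and false alarms; moving the threshold only shifts the balance between them.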
Health system planners need to seek the right balance point, one that reduces errors optimally across the whole system, taking into account the tradeoffs that inevitably arise. Consider the need for communication between the emergency room and the admitting ward: Too little communication risks poor coordination of care, and too much would bog down both areas, leading to other errors. Physician "alerts" of abnormal labs are another example: If there are too few "alerts" some lab abnormalities may be missed, but if there are too many, the "alert" loses its functional significance. The difficulty of optimally allocating medical resources is a final example: Fixing a system problem in one area should not consume so many resources that other areas become vulnerable. The existence of errors may sometimes indicate a need for change, or may just suggest the need to re-evaluate the balance point of tradeoffs that were used to set the initial policy.
Cognitive Errors
Perception. Diagnosis begins with perception. The physician must identify the physical findings, the pathologist must recognize the abnormalities of histology, and the radiologist must perceive the differences between normal and abnormal densities. The available evidence suggests that even in this first stage of the diagnostic process, seemingly the simplest and most straightforward, errors occur at nontrivial rates. For example, Berlin has summarized the studies of diagnostic errors in radiology, finding rates that range from 4% in clinical practice series (large numbers of normal films) to 30% in prospective studies incorporating larger numbers of abnormal films.38 In 80% of these cases, abnormal details were not perceived, and in 20%, abnormal details were identified but misinterpreted.39 Expertise in the type of visual diagnosis used by radiologists seems to involve an interaction of two distinct components: visual perception and domain-specific knowledge of both the normal expected findings and all possible abnormal findings.40 Variability in the physical examination is a related problem that detracts from the accuracy of diagnosis.41
Hypothesis generation. The initial information base often cues strongly matching candidate diagnoses, or a likely clinical framework for analysis. If this process occurs without much deliberate effort, other reasonable possibilities may not be similarly generated. An example is the patient who presents to the emergency department with chest pain from a dissecting aneurysm. The clinician may mistakenly conclude that the patient has pain related to myocardial ischemia and miss the diagnosis of dissecting aneurysm because myocardial ischemia is much more common, and therefore more "available" in memory.
Medical diagnosis is a specialized example of decision making under uncertainty. In familiar contexts, clinicians make decisions without much conscious deliberation, and medical experts routinely practice in this fashion. Clinicians typically use a variety of heuristics, or rules of thumb, for efficiently arriving at decisions in the face of limited time or data.42–44 For example, diagnoses are established using heuristics based on representativeness, availability, or extrapolation. The power of heuristics is enormous, allowing clinicians to navigate the diagnostic challenges of everyday life and make effective decisions, usually accurate, in real time when arduous working out of probabilities is not possible. Heuristic solutions free up cognitive resources so that they can be applied toward other demands. The price for using these powerful tools, however, is predictable error reflecting the inherent biases associated with each of these heuristics.42
Data interpretation. The probability of the initial hypothesis is adjusted upwards or downwards using test results to calculate a new probability using Bayes' theorem.19 Unfortunately, few clinicians are skilled in using Bayes' theorem, and in practice it is probably more common for tests to be interpreted without taking into account the characteristics (sensitivity and specificity) of the test itself.
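A worked example (with assumed, illustrative numbers rather than figures from the article) shows what this adjustment looks like. Suppose the pretest probability of a disease is 10% and the test has a sensitivity of 90% and a specificity of 80%:

```latex
% Illustrative values only: P(D) = 0.10, sensitivity = 0.90, specificity = 0.80.
\[
P(D \mid +) \;=\;
\frac{P(+ \mid D)\,P(D)}{P(+ \mid D)\,P(D) + P(+ \mid \bar{D})\,P(\bar{D})}
\;=\;
\frac{0.90 \times 0.10}{0.90 \times 0.10 + 0.20 \times 0.90}
\;\approx\; 0.33
\]
```

Under these assumptions a positive result raises the probability of disease from 10% to only about 33%, because one fifth of unaffected patients also test positive; reading the result as near-certain proof of disease is exactly the misinterpretation that ignoring sensitivity and specificity invites.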
Verification. Clinicians, like all decision makers, strongly favor their initial hypotheses and often stop searching for additional possibilities. This tendency leads to a number of cognitive errors collectively referred to as premature closure.45 This includes factors such as overconfidence and confirmation bias, which foster the tendency to favor confirming evidence over counter-evidence that might exclude the diagnosis.19 Another problem is posed by the patient with two or more medical disorders: Clinicians from their first days of training are exhorted to search for unifying explanations of a patient's multiple symptoms. Occam's razor specifically instructs the clinician that it is more likely for a single disease to explain multiple symptoms than it is for multiple diseases to do so. The test of time has confirmed the wisdom of this chestnut, but it will obviously lead to diagnostic errors in those patients who truly have more than one active process.
Can we reduce cognitive errors? In selecting the title for their seminal work on medical errors, the Institute of Medicine identified the essential conundrum we face. To Err is Human captures a feeling of inevitability. The thought suggests that we can never eliminate errors as long as we are human. Is it possible to improve perception, memory, or decision making? In this section we consider this question from two perspectives. First we address efforts to directly improve cognition. We then consider an alternative, indirect approach, attempting to improve diagnostic accuracy using a systems-related approach.
A. Improving cognition directly
Can we train better thinkers? Bordage argues that we can begin to train better diagnosticians by improving the quality of training in physical diagnosis.9 By teaching discriminative skills and by providing more examples and repetition, students can improve their clinical decision-making skills.
Can we learn to avoid biased judgment? Biased judgment is common in clinical reasoning.35 There are some isolated reports of success in reducing the likelihood of bias. Larson et al., for example, found that the diagnoses made by medical trainees instructed about the pitfalls of premature closure were more accurate than those made by peers who were not similarly trained.4 Short-term success was also identified after students completed a course in statistics, showing improved ability of the students to analyze statistical problems. In contrast, similar transference could not be demonstrated for courses in logic,46 and Regehr and Norman have concluded that, in general, effective ways to teach students how to avoid the pitfalls of using heuristics have yet to be identified.10
Even with highly specialized training, physicians are limited by a cognitive system that has evolved to be good at certain kinds of tasks and that faces predictable pitfalls. In the domain of hypothesis generation and problem solving, human cognition has evolved mechanisms that allow for efficiency at the expense of various forms of creativity. For example, people sometimes fail to notice analogies to prior experience. In a study by Gick and Holyoak, for example, people were first shown a solution to a military problem that involved marching several platoons simultaneously down multiple roads that led to an attack site so that they all arrived at once. When presented, minutes later, with a problem in which they needed to find a way to irradiate a tumor without damaging surrounding tissue, they failed to notice the analogy.47
Problem-based learning. Medical educators have tried to improve diagnostic reasoning through innovative curriculum changes that emphasize skills in clinical reasoning. The best-studied example has been the change to problem-based learning at many leading institutions. For the major part of the past century, medical knowledge was taught to students one subject at a time, proceeding from basic subjects such as anatomy and biochemistry in the first year to more advanced subjects such as pathophysiology in the second year. In this framework, medical decision making was not taught as a separate course, but evolved as the student's knowledge base expanded and the opportunity for clinical contact with patients increased in the third and fourth years. In contrast, the problem-based learning approach exposes students to clinical problems from the very start. A basic hypothesis of this approach is that clinical decision making involves specific skills and that these can be learned and applied effectively to novel clinical situations, independent of learning the subject matter (anatomy, biochemistry, etc.) first. The impact of problem-based learning was critically reviewed recently by Colliver.48 Only three studies of problem-based learning were identified that used random assignment of medical students to training condition, and none revealed any clear advantage of problem-based learning in terms of the scores on standardized exams or overall performances. In studies involving self-selection into a problem-based curriculum, however, there was evidence for a positive impact: At Rush Medical College at the end of a seven-month exposure to problem-based learning, students gave more accurate diagnoses and provided reasoning chains that were more comprehensive and elaborated compared with students in the standard curriculum.49 Similarly, students in the problem-based learning track at Bowman Gray School of Medicine of Wake Forest University had board scores comparable to those of students in the standard track, but scored better on scales assessing factual knowledge, ability to perform the history and physical exam, deriving a differential diagnosis, and
organizing and expressing information. Elstein,15 Norman,50,51 and others52 have advanced the concept of "content specificity" to explain the general failure of problem-based learning to produce superior diagnosticians: Experts have knowledge that is more extensive and better integrated.
Metacognitive training. Ultimately, accurate diagnosis requires not only extensive domain-specific knowledge and sophisticated skills in clinical reasoning, but also the ability to be actively aware of how effectively one is thinking.51,53 Baron describes one application of this process, the ability to preserve active open-mindedness:

Good decision making involves sufficient search for possibilities, evidence, and goals, and fairness in the search for evidence and in the use of evidence. . . . People typically depart from this model by favoring possibilities that are already strong. We must make an effort to counteract this bias by looking actively for other possibilities and evidence . . . actively open-minded thinking.19

Baron goes on to present evidence that individuals who practice open-minded thinking show reduced tendencies to have cognitive biases and produce "better" decisions.
Clinical educators adopt this approach when they emphasize to their students the importance of deriving a complete differential diagnosis. This ensures that multiple possibilities are considered, at least at the outset of the search for the diagnosis. A simple strategy that may also work to promote actively open-minded thinking is the "crystal ball experience," a device used by military trainers to promote open-minded thinking in military planning.54,55 In this exercise, the participants are asked to devise a plan to achieve a particular objective. After presenting the plan they are told that, according to the crystal ball which can foresee the future, this plan won't work. What plan would they suggest instead?54 This promotes the active examination of flaws in the original plan and a search for alternatives. All good bridge players know this strategy, but too few clinicians do.
B. Adopting systems solutions to cognitive errors
Independent of our ability to improve cognition directly, it
should be possible to improve diagnostic accuracy indirectly
by a systems-level approach. Changes at the systems level
can compensate for the predictable patterns of thought that
lead to error.
Improving perception. The way data are presented can have a profound influence on how easily abnormalities are detected. Highlighting abnormal laboratory test results makes them easier to detect when they are presented in a list with many normal test values. Similarly, perception is enhanced by graphic presentations and presentations that facilitate recognition of trends. The future holds great promise for similar advances in enhancing perception using technologic advances that incorporate such principles of human-factors engineering. An alternative approach is to supplement human perception using computer-aided diagnosis. Jiang et al. recently demonstrated the potential to improve the radiologic detection of cancer in reading mammograms: Computer-aided diagnosis improved both the sensitivity and the specificity of this test more than did independent readings by two radiologists.56
Availability of expertise. Emergency department physicians misinterpret 1–16% of plain radiographs and up to 35% of cranial tomographic studies.57 A direct approach to this problem would be to better train these non-radiologists in radiographic interpretation. The indirect, systems-level approach would be to ensure that trained radiologists are available to help interpret these studies, the approach endorsed by the American College of Radiology. Supporting the systems-level approach, Espinosa and Nolan found that the rate of missed findings on X-rays taken in an emergency department was reduced from 1.2% to 0.3% by requiring second reviews of each study by radiologists.58 Unfortunately, only 20% of U.S. hospitals have radiology staff present 24 hours a day. Alternative approaches include using tele-radiology to assist front-line clinicians, or on-site radiology trainees. The relative impacts of these three interventions are still being evaluated.57
Second opinions. Second opinions have proven to be a valuable strategy for reducing medication errors. The Institute for Safe Medication Practices endorses the use of second checks for complex or risk-prone medication requests such as parenteral nutrition, chemotherapy, or neonatal therapeutics. This same approach might similarly be used to reduce diagnostic errors. Kronz and coworkers studied the potential benefit of this idea by requiring a second opinion on every surgical pathology diagnosis referred to the Johns Hopkins Hospital over a 21-month period.59 The second opinions led to clinically relevant changes of the diagnoses in 1.4% of 6,171 cases. In selected types of cases, the corrected error rates were even higher: 5.1% in tissue from the female reproductive tract, and 9.5% in serosal samples.
Clinical guidelines and clinical decision-support systems. Given the documented variability in the extents to which clinicians act coherently and in accord with the laws of probability, it is reasonable to hope that clinical practice guidelines could reduce the rate of diagnostic errors. Guidelines standardize the approaches to clinical problems and minimize the variability in response patterns. To the extent that
they incorporate appropriate base rates of disease, apply correct probability estimates, and minimize the errors induced by the use of heuristics, guidelines should in theory improve the accuracy of diagnosis and management.60 Guidelines, however, are themselves heuristics, and so they cannot provide a full solution. In addition, clinicians in practice are sometimes unaware of guidelines relevant to their patients, and even when they are aware, often do not follow the guidelines appropriately. Ioannidis and Lau recently provided an evidence-based review of the efficacy of guidelines and other interventions designed to reduce medical errors.61 Although guidelines reduced errors in treatment and prevention settings, this was not the case when guidelines were used to reduce diagnostic errors. In the only methodologically acceptable study they identified regarding diagnostic errors, the use of a clinical guideline actually increased the rate of missed radiologic fractures in an emergency department.62
Probably the most promising approach to improving diagnostic accuracy is to incorporate decision aids directly into the active practice of medicine using computer-assisted "expert systems." Examples where clinical decision-support systems excel are numerous, including improved compliance with guidelines, improved antibiotic utilization, and improved use of preventive health measures.63 Hunt et al. recently provided an evidence-based review of clinical decision-support systems, identifying 68 published evaluations that reported patient outcomes or changes in provider behaviors. Studies were grouped into four categories: drug dosing, preventive care, clinical diagnosis, and general process of care. Overall, two thirds of the studies showed positive impacts on patient outcomes or provider behaviors, but, interestingly, none of the four studies of diagnosis (three on abdominal or chest pain, one on pediatric primary care problems) were positive.64 The reasons diagnostic processes were not improved in these studies are not clear, but there are both theoretical and practical limitations on the input data set that might limit the functionality of expert systems. The quality of the input data set is critical in determining the quality of the hypotheses the expert system will generate. In practice, clinicians may not have the time to input all of the data needed. A greater concern, identified by Regehr and Norman, is that the input data set may be biased by any initial hypotheses that were being entertained.10 As emphasized by Bordage, we tend to see what we are looking for.9 Thus, cognitive shortcomings can undermine potential improvements from system changes.
Can we eliminate cognitive errors? The first insurmountable barrier in eliminating cognitive errors is the impossibility of knowing all of the relevant facts all of the time. We discussed this issue earlier, and classified it as a no-fault error in the sense that much of this uncertainty lies outside the diagnostician's control.
Even if we could know all the relevant facts, we would be unable to process them quickly. The limitations of human information processing have been described in Simon's "principle of bounded rationality."65 According to Simon,

Because of the limits on their computing speeds and power, intelligent systems must use approximate methods. Optimality is beyond their capabilities; their rationality is bounded. . . . Human short-term memory can hold only a half dozen chunks, an act of recognition takes nearly a second, and the simplest human reactions are measured in tens and hundreds of milliseconds, rather than microseconds, nanoseconds, or picoseconds.

A direct consequence of bounded rationality is the inevitability of diagnostic errors: "When intelligence explores unfamiliar domains, it falls back on 'weak methods,' which are independent of domain knowledge. People 'satisfice' – look for good-enough solutions – instead of hopelessly searching for the best."65
This applies to the use of heuristics, which are enormously powerful, yet inherently flawed. James Reason, the British psychologist who has studied human error extensively, summarizes the problem perfectly: "Our propensity for certain types of error is the price we pay for the brain's remarkable ability to think and act intuitively."25 Heuristics play the odds: Sometimes, particularly under unusual circumstances, these rules of thumb lead to wrong decisions. The theme of tradeoffs, which comes up so often in the context of improving decision making, is especially appropriate to the use of heuristics, which inherently involves the tradeoff between efficiency and accuracy. Finally, the heuristics themselves, and for that matter any cognitive process contributing to decision making, can be misapplied, particularly if we are fatigued, distracted, stressed, or unfamiliar with the condition in question.
Studying some of the major industrial catastrophes of our time, Perrow identified an additional argument for the inevitability of diagnostic errors. In his "normal accident theory," he concludes that systems that are tightly coupled, with complex and hidden interactions, will inevitably produce accidents.35 Although Perrow did not study medical mishaps, we propose that medical diagnosis involves analogous processes that are hidden and complex. The diagnostic process is very much a black box, in which the physician applies imperfect knowledge to somehow make sense out of a case presentation and lab data that are typically nonspecific and incomplete. Moreover, it is probably rare that the physician applies an appropriate amount of strategic mental monitoring to make sure all these cognitive processes are accurate and appropriate. In this framework, diagnostic errors may be as inevitable as those that befall nuclear power plants, aircraft, refineries, and mines.
Table 2
Potential for Reducing or Eliminating the Three Types of Diagnostic Errors

Reducing errors

Potential for reduction
  No-fault errors: Limited
  System errors: Substantial
  Cognitive errors: Potentially substantial

Mechanisms for reduction
  No-fault errors: Improve testing and screening; expand medical knowledge
  System errors: Optimize the sensitivity, specificity, availability, and timeliness of testing, and improve test reporting; improve the efficiency and timeliness of diagnostic evaluation; improve system processes that impact diagnosis, such as communication and coordination of care; improve the supervision of trainees; reduce detractors such as fatigue, distractions, and stress
  Cognitive errors: Cognitive solutions (cognitive training to improve metacognition, understand pitfalls of heuristics, and promote active open-mindedness); system solutions (enhance data presentation to facilitate perception; use second opinions; increase use of specialists; develop and use clinical decision-support systems)

Eliminating errors

Potential for elimination
  No-fault errors: None
  System errors: None
  Cognitive errors: None

Reasons errors will persist
  No-fault errors: The limits of medical knowledge; inherent uncertainty about the state of the world; disease is silent or presents atypically; new diseases; failure of normative processes
  System errors: Degradation of system improvements; system complexity; tradeoffs
  Cognitive errors: Perception failures; cognitive limitations (working memory, knowledge, attention, processing, metacognition); risks of using heuristics and cognitive biases; diagnostic reasoning is complex and inscrutable
CONCLUSIONS
As the second leading cause of medical error, diagnostic error is a major health care concern and worthy of much more attention than it has received. The sentinel-event registry established by the Joint Commission on Accreditation of Healthcare Organizations (JCAHO) does not even track diagnostic error as a category.66 Likewise, of the 79 practices recently evaluated by the Agency for Healthcare Research and Quality that might decrease medical errors, only two directly or indirectly deal with diagnostic errors.67 Diagnostic error has probably remained in the background of the current dialogue on medical error because the causes are more subtle and the solutions are less obvious than they are for problems such as medication error and wrong-site surgery.

Another reason to focus on reducing diagnostic error is that, in contrast to other types of medical errors, there is little opportunity to minimize the impact on the patient once the error is made. The emphasis must be on preventing the error in the first place.
The potential for reducing or eliminating diagnostic errors in each of the three main categories (no-fault, system-related, cognitive) is summarized in Table 2. With regard to reducing diagnostic errors, there is clearly reason to be optimistic: In each of the three categories the potential to reduce diagnostic errors is both real and achievable. Even no-fault errors are likely to diminish over time as we are increasingly able to detect diseases in preclinical stages and illnesses with atypical presentations.
Experts uniformly advocate a focus on identifying and repairing latent system flaws as the most productive approach to improving the safety of medical care,2 and our analysis suggests this approach would also reduce diagnostic errors. Latent flaws in the health care system that contribute to diagnostic errors should be studied and addressed. Areas that should be targeted for intervention include supervision of trainees, availability of expertise, coordination of care, communication procedures, training and orientation, quality and availability of tests, suboptimal thinking environments (those producing undue stress, fatigue, distractions, excessive workload), and inefficient processes that lead to delays in diagnosis.
Figure 1. The relationships between reliability and cost in diagnostic decision making. As clinicians improve their diagnostic competencies from beginning-level skills (use of inductive reasoning) to intermediate levels (use of heuristics) to expert-level skills, reliability and accuracy improve, with decreased cost and effort (descending arrows). In any given case we can improve diagnostic accuracy (e.g., with second opinions or monitoring) but with increased cost, time, or effort (ascending arrow).
In contrast, the possibility that we could reduce diagnostic errors by focusing on cognitive elements has remained largely unexplored. This may reflect the general sentiment of the current performance improvement literature, which presents system-level approaches as the preferred route to achieve organizational excellence, as opposed to approaches to change how people think. This skepticism regarding cognitive interventions was recently captured by Nolan: "Although we cannot change the aspects of human cognition that cause us to err, we can design systems that reduce errors and make them safer for patients."31 We would like to make the case that there may be substantial potential for improving the cognitive component of medical diagnosis, and propose that this should be a major research focus to improve patient safety. There are many potential avenues of exploration: training to improve metacognition, courses on diagnostic reasoning and pitfalls, and second-generation problem-based learning approaches that build on the lessons learned from the successful first-generation programs.52
Even if we are not yet able to improve cognitive performance per se, it may be possible to apply systems solutions to problem-prone cognitive tasks such as perception or differential diagnosis. Examples include mandated "second readings," enhanced availability of subject-matter experts, improved supervision of trainees, and developing more effective guidelines and clinical decision-support systems. We acknowledge, however, that virtually none of the approaches outlined have been validated in practice, and several have little more than anecdotal support from fields outside medicine. The field is just being defined, and the opportunity for improvement in this direction is large.
In thinking about how to reduce diagnostic errors, a recurring theme is the problem of tradeoffs. We can be more certain, but at a price, be it dollars, effort, or time (Figure 1). Where should we set the bar? We can keep an open mind, we can perform a more extended search for possibilities, we can be more careful in interpreting and assessing data. At some point, however, these strategies become too expensive, or even self-defeating. For example, we can attempt to rule out every possibility in a given case, but this can lead to excessive testing, with its own risks and costs to the patient, and also the likelihood that on occasion a false-positive test result will lead us astray. Increased monitoring of cognition would also systematically increase delays in reaching a diagnosis, as extra tests and treatments are ordered to increase certainty. Decision making slows if we superimpose cognitive or systemic checks and balances – can we afford this?
At least in some cases the answer is, surprisingly, "Yes." For example, mandatory second opinions before certain types of elective surgery reduce the number of unnecessary operations, saving two to four dollars for every dollar spent.68 Is it possible that programs that mandate second readings in pathology or radiology might realize similar savings? The possibility that optimizing diagnostic accuracy might actually be the most efficient approach is not a new concept: Consider how easy it is for the expert to solve a problem, compared with the labor-intensive approach taken by the novice25 (Figure 1). Although we shouldn't forget the years of training needed to develop expertise, once it exists, we should design our systems to take advantage of experts' skills, to achieve highly accurate diagnoses with the least effort and cost.
Whereas the potential to reduce diagnostic errors is real and substantial, we conclude that eliminating such errors is not a realistic goal. There are fundamental and insurmountable factors that preclude this possibility. We should focus instead on increasing the visibility of diagnostic errors in the current patient safety dialogue, prioritizing opportunities to reduce diagnostic errors wherever we can, and setting research priorities to study potential avenues for improving
diagnostic accuracy in the future, such as interventions
aimed at improved cognition.
This work was supported by a grant from the National Patient Safety Foundation.
REFERENCES
1. Berwick DM. Taking action to improve safety: how to increase the odds
of success. In: Enhancing Patient Safety and Reducing Errors in Health
Care. Chicago, IL: National Patient Safety Foundation, 1999:1–11.
2. Institute of Medicine. To Err is Human; Building a Safer Health Sys-
tem. Washington DC: National Academy Press, 1999.
3. Arkes H. Why medical errors can’t be eliminated: uncertainties and
the hindsight bias. Chron Higher Educ. May 19, 2000.
4. Sprenger GM. Dare to tell it all. 2001. Let’s Talk. Communicating Risk
and Safety in Health Care. Plenary Lecture, delivered at the third An-
nenberg Conference on Enhancing Patient Safety, May 17, 2001, St.
Paul, MN.
5. Leape L, Brennan TA, Laird N, et al. The nature of adverse events in
hospitalized patients. Results of the Harvard Medical Practice Study II.
N Engl J Med. 1991;324:377– 84.
6. Bartlett EE. Physicians’ cognitive errors and their liability conse-
quences. J Healthcare Risk Manage. Fall 1998:62–9.
7. Tai DYH, El-Bilbeisi H, Tewari S, Mascha EJ, Wiedermann HP, Ar-
roliga AC. A study of consecutive autopsies in a medical ICU: a com-
parison of clinical cause of death and autopsy diagnosis. Chest. 2001;
119:530–6.
8. Kassirer JP. Diagnostic reasoning. Ann Intern Med. 1989;110:893–900.
9. Bordage G. Why did I miss the diagnosis? Some cognitive explanations
and educational implications. Acad Med. 1999;74(10 suppl):S138–
S143.
10. Regehr G, Norman GR. Issues in cognitive psychology: implications
for professional education. Acad Med. 1996;71:988–1001.
11. Patel VL, Arocha JF, Kaufman DR. A primer on aspects of cognition
for medical informatics. J Am Med Informat Assoc. 2001;8:324–43.
12. Norman GR. The epistemology of clinical reasoning: perspectives from
philosophy, psychology, and neuroscience. Acad Med. 2000;75(10
suppl):S127–S136.
13. Kassirer JP, Kopelman RI. Cognitive errors in diagnosis: instantiation,
classification, and consequences. Am J Med. 1989;86:433–41.
14. Schmidt HG, Norman GR, Boshuizen HPA. A cognitive perspective
on medical expertise: theory and implications. Acad Med. 1990;65:
611–21.
15. Elstein AS. Clinical reasoning in medicine. In: Higgs J, Jones M (eds).
Clinical Reasoning in the Health Professions. Oxford, England: But-
terworth–Heinemann, 1995:49–59.
16. Edlow JA, Caplan LR. Avoiding pitfalls in the diagnosis of subarach-
noid hemorrhage. N Engl J Med. 2000;342:29–35.
17. Amendo MT, Brown BA, Kossow LB, Weinberg GM. Headache as the
sole presentation of acute myocardial infarction in two elderly patients.
Am J Geriatr Cardiol. 2001;10:100–1.
18. Saint S, Go AS, Frances C, Tierney LM Jr. Case records of the Mas-
sachusetts General Hospital—a home court advantage? N Engl J Med.
1995;333:883–4.
19. Baron J. Thinking and Deciding. 3rd ed. Cambridge, U.K.: Cambridge
University Press, 2000.
20. Gross PR, Levitt N, Lewis MW. The flight from science and reason.
Ann NY Acad Sci. 1996.
21. Kassirer JP, Kopelman RI. Learning Clinical Reasoning. Baltimore, MD:
Williams and Wilkins, 1991.
22. Gorovitz S, Macintyre A. Toward a theory of medical fallibility. Has-
tings Center Rep. 1975;5:13–23.
23. Gawande A. Final cut. Medical arrogance and the decline of the au-
topsy. The New Yorker. March 19, 2001:94–9.
24. Kennedy M. Inexact sciences: professional education and the devel-
opment of expertise. Review of Research in Education. 1987;14:133–
68.
25. Reason J. Human Error. Cambridge, U.K.: Cambridge University Press,
1990.
26. Cook RI, Woods DD. Operating at the sharp end: the complexity of
human error. In: Bogner MS (ed). Human Error in Medicine. Hillsdale,
NJ: Lawrence Erlbaum Associates, 1994:255–310.
27. Reason J. Managing the Risks of Organizational Accidents. Brookfield,
VT: Ashgate, 1997.
28. Leape L, Lawthers AG, Brennan TA, et al. Preventing medical injury.
Qual Rev Bull. 1993;19:144–9.
29. Leape LL. The preventability of medical injury. In: Bogner MS (ed).
Human Error in Medicine. Hillsdale, NJ: Lawrence Erlbaum Associates,
1994:13–26.
30. Moray N. Error reduction as a system problem. In: Bogner MS (ed).
Human Error in Medicine. Hillsdale, NJ: Lawrence Erlbaum Associates,
1994:67–92.
31. Nolan TW. System changes to improve patient safety. BMJ. 2000;320:
771–3.
32. Bogner MS. Human error in medicine: a frontier for change. In: Bogner
MS (ed). Human Error in Medicine. Hillsdale, NJ: Lawrence Erlbaum
Associates, 1994:373–83.
33. Spath PL. Reducing errors through work system improvements. In:
Spath P (ed). Error Reduction in Health Care: A Systems Approach
to Improving Patient Safety. San Francisco, CA: Jossey–Bass, 2000:
199–234.
34. Reiser SJ. Medicine and the Reign of Technology. Cambridge, U.K.:
Cambridge University Press, 1978.
35. Jauhar S. When rules for better care exact their own cost. The New
York Times. Jan 5, 1999.
36. Petersen LA, Brennan TA, O'Neil AC, Cook EF, Lee TH. Does
housestaff discontinuity of care increase the risk for preventable adverse
events? Ann Intern Med. 1994;121:866–72.
37. Perrow C. Normal Accidents: Living with High-Risk Technologies.
Princeton, NJ: Princeton University Press, 1999.
38. Berlin L. Defending the ‘‘missed’’ radiographic diagnosis. Am J Radiol.
2001;176:317–22.
39. Berlin L, Hendrix RW. Perceptual errors and negligence. Am J Radiol.
1998;170:863–7.
40. Norman GR, Coblentz CL, Brooks LR, Babcook CJ. Expertise in visual
diagnosis: a review of the literature. Acad Med. 1992;67(10 suppl):
S78–S83.
41. Sackett DL. A primer on the precision and accuracy of the clinical
examination. JAMA. 1992;267:2638–44.
42. Kahneman D, Slovic P, Tversky A. Judgment Under Uncertainty:
Heuristics and Biases. Cambridge, U.K.: Cambridge University Press,
1982.
43. Elstein AS. Heuristics and biases: selected errors in clinical reasoning.
Acad Med. 1999;74:791–4.
44. Dawson NV, Arkes HR. Systematic errors in medical decision making:
judgment limitations. J Gen Intern Med. 1987;2:183–7.
45. Voytovich AE, Rippey RM, Suffredini A. Premature closure in diag-
nostic reasoning. J Med Educ. 1985;60:302–7.
46. Cheng PW, Holyoak KJ, Nisbett RE, Oliver LM. Pragmatic versus syn-
tactic approaches to training deductive reasoning. Cogn Psychol. 1986;
18:293–328.
47. Gick ML, Holyoak K. Schema induction and analogical transfer. Cogn
Psychol. 1983;15:1–38.
48. Tversky A, Kahneman D. Availability: a heuristic for judging frequency
and probability. Cogn Psychol. 1973;5:207–32.
49. Hmelo CE. Cognitive consequences of problem-based learning for the
early development of medical expertise. Teach Learn Med. 1998;10:
92–100.
50. Eva KW, Neville AJ, Norman GR. Exploring the etiology of content
specificity: factors influencing analogic transfer and problem solving.
Acad Med. 1998;73(10 suppl):S1–S5.
51. Norman GR. Problem-solving skills, solving problems, and problem-
based learning. Med Educ. 1988;22:279–86.
52. Perkins DN, Salomon G. Are cognitive skills context-bound? Educational Researcher. 1989;18:16–25.
53. Higgs J, Jones M. Clinical reasoning. In: Higgs J, Jones M (eds). Clin-
ical Reasoning in the Health Professions. Oxford, U.K.: Butterworth-
Heinemann, 1995:3–23.
54. Mitchell DJ, Russo JE, Pennington N. Back to the future: temporal
perspective in the explanation of events. J Behav Decis Making. 1989;
2:25–38.
55. Klein G. Sources of Power: How People Make Decisions. Cambridge,
MA: The MIT Press, 1998.
56. Jiang Y, Nishikawa RM, Schmidt RA, Metz CE, Doi K. Relative gains
in diagnostic accuracy between computer-aided diagnosis and indepen-
dent double reading. In: Krupinski EA. Medical Imaging 2000: Image
Perception and Performance (Proceedings of SPIE, vol. 3981, 2000).
Progress in Biomedical Optics and Imaging. 2000;1:10–5.
57. Kripalani S, Williams MV, Rask K. Reducing errors in the interpreta-
tion of plain radiographs and computed tomography scans. In: Shojania
KG, Duncan BW, McDonald KM, Wachter RM (eds). Making Health
Care Safer. A Critical Analysis of Patient Safety Practices. Rockville,
MD: Agency for Healthcare Research and Quality, 2001.
58. Espinosa JA, Nolan TW. Reducing errors made by emergency physi-
cians in interpreting radiographs: longitudinal study. BMJ. 2000;320:
737–40.
59. Kronz JD, Westra WH, Epstein JI. Mandatory second opinion surgical
pathology at a large referral hospital. Cancer. 1999;86:2426–35.
60. Garfield FB, Garfield JM. Clinical judgement and clinical practice
guidelines. Int J Technol Assess in Health Care. 2000;16:1050–60.
61. Ioannidis JPA, Lau J. Evidence on interventions to reduce medical
errors. An overview and recommendations for future research. J Gen
Intern Med. 2001;16:325–34.
62. Klassen TP, Ropp LJ, Sutcliffe T, et al. A randomized controlled trial
of radiograph ordering for extremity trauma in a pediatric emergency
department. Ann Emerg Med. 1993;22:1524–9.
63. Trowbridge R, Weingarten S. Clinical decision support systems. In:
Shojania KG, Duncan BW, McDonald KM, Wachter RM (eds). Mak-
ing Health Care Safer. A Critical Analysis of Patient Safety Practices.
Rockville, MD: Agency for Healthcare Research and Quality, 2001.
64. Hunt DL, Haynes RB, Hanna SE, Smith K. Effects of computer-based
clinical decision support systems on physician performance and patient
outcomes: a systematic review. JAMA. 1998;283:2816–21.
65. Simon HA. Invariants of human behavior. Annu Rev Psychol. 1990;
41:1–19.
66. Sentinel event statistics. Joint Commission on Accreditation of
Healthcare Organizations. http://www.jcaho.org. Accessed 5/3/02.
67. Shojania KG, Duncan BW, McDonald KM, Wachter RM. Making
Health Care Safer: A Critical Analysis of Patient Safety Practices. Ev-
idence Report/Technology Assessment #43; AHRQ Publication No 01-
E058. Rockville, MD: Agency for Healthcare Research and Quality,
2001.
68. Ruchlin HS, Finkel ML, McCarthy EG. The efficiency of second-opin-
ion consultation programs: a cost–benefit perspective. Med Care. 1982;
20:3–19.