Reducing Diagnostic Errors in Medicine:
What’s the Goal?
Mark Graber, MD, Ruthanna Gordon, MA, and Nancy Franklin, PhD
This review considers the feasibility of reducing or eliminating the three major categories of diagnostic errors in medicine:
1. ‘‘No-fault errors’’ occur when the disease is silent,
presents atypically, or mimics something more common.
These errors will inevitably decline as medical science
advances, new syndromes are identiﬁed, and diseases can
be detected more accurately or at earlier stages. These
errors can never be eradicated, unfortunately, because
new diseases emerge, tests are never perfect, patients are
sometimes noncompliant, and physicians will inevitably,
at times, choose the most likely diagnosis over the correct
one, illustrating the concept of necessary fallibility and
the probabilistic nature of choosing a diagnosis.
2. ‘‘System errors’’ play a role when diagnosis is delayed
or missed because of latent imperfections in the health
care system. These errors can be reduced by system im-
provements, but can never be eliminated because these
improvements lag behind and degrade over time, and
each new ﬁx creates the opportunity for novel errors.
Tradeoffs also guarantee that system errors will persist when
resources are simply shifted from one problem to another.
3. ‘‘Cognitive errors’’ reﬂect misdiagnosis from faulty
data collection or interpretation, ﬂawed reasoning, or in-
complete knowledge. The limitations of human process-
ing and the inherent biases in using heuristics guarantee
that these errors will persist. Opportunities exist, how-
ever, for improving the cognitive aspect of diagnosis by
adopting system-level changes (e.g., second opinions, de-
cision-support systems, enhanced access to specialists)
and by training designed to improve cognition or cognitive
awareness. Diagnostic error can be substantially reduced, but never
eliminated. Acad. Med. 2002;77:981–992.
The search for zero error rates is doomed from the start.
Stimulated by the Institute of Medicine report To Err
is Human, published in 1999, the health care indus-
try is rapidly mobilizing to address the problem of
preventable errors in medicine.
Although the di-
rection in which we need to move is clear, the ultimate
objective needs better deﬁnition. If we wish to reduce errors
in medical care, what is the appropriate goal?
Dr. Graber is chief of medical service at the VA Medical Center, Northport,
New York, and professor and vice chairman of the Department of Medicine
at SUNY Stony Brook, Stony Brook, New York; Ms. Gordon is a doctoral
candidate in psychology, and Dr. Franklin is associate professor of psychol-
ogy, both at SUNY Stony Brook.
Correspondence and requests for reprints should be addressed to Dr. Graber,
Chief, Medical Service–111, VA Medical Center, Northport, NY 11768.
At the 1998 Annenberg Conference on Patient Safety,
Nancy W. Dickey, MD, past president of the American Med-
ical Association, asserted that ‘‘the only acceptable error rate
is zero.’’
Gordon M. Sprenger, CEO of the Allina Health
System and chair of the American Hospital Association
Board of Trustees, reiterated this theme at the 2001 Annen-
berg safety colloquium: ‘‘Let’s be absolutely clear on this:
The goal of the patient safety movement must be to elimi-
nate all errors. This is like climbing Mount Everest, but it
must be our goal and it can be done.’’
Diagnostic errors comprise a substantial and costly frac-
tion of all medical errors. In the Harvard Medical Practice
Study of hospitals in New York State, diagnostic errors rep-
resented the second largest cause of adverse events.
Similarly, diagnostic errors are the second leading cause for mal-
practice suits against hospitals.
A recent study of autopsy findings identified diagnostic discrepancies in 20% of cases, and the authors estimated that in almost half of these cases knowledge of the correct diagnosis would have changed the treatment.

Table 1
Categories of Diagnostic Errors

No-fault errors
• Unusual or silent presentation of disease: Missed diagnosis of appendicitis in an elderly patient with no abdominal pain
• Uncertainty regarding the state of the world: Missed diagnosis because the patient is inconsistent or confusing in presenting his symptoms
• Lack of patient cooperation: Missed diagnosis of colon cancer in patient who refused screening colonoscopies
• Limitations of medical knowledge: Missed diagnosis of Lyme disease in the era before this was recognized as a specific entity
• Failure of normative processes: Wrong diagnosis of a common cold in a patient ultimately found to have mononucleosis

System errors
• Faulty test or data: Wrong diagnosis of urine infection from urine left too long before culture
• Lack of appropriate equipment or tests: Missed colon cancer because flexible sigmoidoscopy performed instead of colonoscopy
• Inadequate pursuit of noncompliant patient: Abnormal test results not appreciated because patient missed scheduled appointment
• Unavailability of needed expertise: Fracture missed by emergency department staff (radiologist not available)
• Inefficient processes: Delay in diagnosis of lung cancer due to inefficient coordination of outpatient care
• Failure to adequately supervise: Diagnoses missed by trainees (supervising attending physician not available)
• Patient neglect: Abnormal test results detected but not followed up
• External interference (e.g., from HMO): Delay or missed diagnosis because testing not approved by patient’s plan
• Policy failures: Delay in diagnosis of pulmonary embolus (nuclear medicine section not open on weekends)
• Inadequate training or orientation: Delays in diagnosis related to new trainees’ not knowing how to navigate the system efficiently
• Culture (e.g., tolerance of error): Sustained backlogs in reading X-ray films leading to delayed or wrong diagnoses
• Failure to coordinate care: Delay of inpatient diagnosis because ward team not informed patient was admitted

Cognitive errors
• Inadequate knowledge: Wrong diagnosis of ventricular tachycardia on ECG with electrical artifact simulating this
• Faulty data gathering: Missed diagnosis of breast cancer from failure to perform breast examination
• Faulty information processing: Failing to perceive the lung nodule on a patient’s chest X-ray
• Faulty metacognition: Wrong diagnosis of degenerative arthritis (no further tests ordered) in a patient with septic arthritis
Given their enormous human and economic impact, the
complete elimination of diagnostic errors would seem to be
an appropriate and worthwhile goal. The purpose of this re-
view is to consider the available evidence regarding the feas-
ibility of this endeavor. We approach this question by ﬁrst
identifying three major types of diagnostic errors. For each
type, we then consider possible approaches to decreasing the
incidence of diagnostic errors. Finally, we examine whether
there are any practical or theoretical limits that would pre-
clude us from eliminating diagnostic errors altogether.
The nature of clinical decision making has been clariﬁed
over the past several decades, and a variety of systems have
been proposed to classify diagnostic errors.
For the pur-
poses of this discussion, we postulate that every diagnostic
error can be assigned to one of three broad etiologic cate-
gories (Table 1):
• ‘‘No-fault errors,’’ following Kassirer and Kopelman, in-
clude cases where the illness is silent, or masked, or pre-
sents in such an atypical fashion that divining the correct
diagnosis, with the current state of medical knowledge,
would not be expected.
Other examples would include
the rare condition misdiagnosed as something more com-
mon, and the diagnosis missed because the patient does
not present his or her symptoms clearly. A diagnosis missed
or delayed because of patient noncompliance might also
be viewed as a no-fault error.
• System errors reﬂect latent ﬂaws in the health care system.
Included in this category are weak policies, poor coordi-
nation of care, inadequate training or supervision, defec-
tive communication, and the many system factors that de-
tract from optimal working conditions, such as stress,
fatigue, distractions, and excessive workload. These prob-
lems can affect all the diagnosticians in the involved
health care system.
• Cognitive errors are those in which the problem is inadequate
knowledge or faulty data gathering, inaccurate clinical
reasoning, or faulty verification. Examples include
flawed perception, faulty logic, falling prey to biased heuristics,
and settling on a final diagnosis too early. These
are all errors on the part of an individual diagnostician.
Each of these three categories of diagnostic errors carries its
own prognosis for error reduction, and we consider them
each in turn.
No-Fault Errors

Lucky is the patient (and his or her physician) whose disease
presents in classical textbook fashion. For many conditions
the classical presentation is the exception and the spectrum
of possible disease presentations is broad. An example is the
classical ‘‘thunderclap’’ headache of subarachnoid hemor-
rhage. This prototypical ﬁnding is only present in 20–50%
of patients, and failure to appreciate the various other ways
that such patients may present accounts for a substantial
fraction of the patients with subarachnoid hemorrhage in
whom the diagnosis is missed.
Analogously, patients over
80 years old are more likely to have atypical presentations
of myocardial infarction than the typical chest pain that is
the hallmark of a heart attack in younger patients.
Every week, the New England Journal of Medicine enter-
tains and instructs clinicians with the ‘‘Case Records of the
Massachusetts General Hospital.’’ In this venerable exercise,
a case is presented to an expert clinician, who is challenged
to identify the ‘‘correct’’ diagnosis. Many of the factors that
contribute to diagnostic errors in everyday life have been
carefully eliminated in these vignettes: There are no lab er-
rors, no chance to fail to perceive the crucial ﬁnding, the
clinician has the luxury of time to research all the relevant
possibilities, and the diagnosis is made in the absence of
everyday stress, fatigue, and distractions. Even in this ide-
alized setting, however, the correct diagnosis is often missed.
In the cases presented from 1989 to 1996, the error rate
among all case discussants was 25% (excluding cases ana-
lyzed by the diagnostically gifted physicians from the pre-
senting hospital, who had an error rate of only 5%!). In
most cases these errors reﬂect the unusual ways in which
even common conditions can manifest, and in some cases
the illness being discussed may even represent a new disease,
or a new variant. These no-fault errors are probably over-
represented in cases chosen to be diagnostic challenges, but
the ability of diseases to remain silent or present in atypical
fashion is encountered in every clinical setting.
Can we reduce no-fault diagnostic errors? To the extent
that no-fault diagnostic errors represent shortcomings of
medical knowledge or testing, it is a virtual certainty that
these errors will decrease over the long term as knowledge
advances. Before the appreciation of Lyme disease as a spe-
ciﬁc entity and our ability to speciﬁcally test for the condi-
tion, all of these cases were misdiagnosed as, for example,
atypical rheumatoid arthritis. The evolving ability to test for
disease at a point when the clinical manifestations are min-
imal or absent is another way in which no-fault errors will
be reduced. Consider the ability to detect pre-symptomatic
cancer with appropriate screening tests, or the ability to de-
tect silent hyperparathyroidism from the routine measure-
ment of serum calcium. These examples illustrate the con-
cept that advances in medical knowledge and disease
detection will inevitably reduce the number of no-fault er-
rors. This process may even accelerate as we become increas-
ingly able to use genetic markers to detect disease predis-
positions before there are any clinical manifestations.
Can we eliminate no-fault diagnostic errors? As a prac-
tical matter, it seems unlikely that we will ever have a com-
plete enough armamentarium to test for every possible dis-
ease. There will always be patients whose disease exists in a
silent, preclinical stage, eluding detection. In other patients,
the disease may be clinically manifest, but present in such
an atypical fashion that the true diagnosis is missed. A ﬁnal
likelihood is that new diseases or new pathogens, or new
side effects of yet-to-be invented medications, will emerge
over time. The ﬁrst patients to develop these novel entities
will be misdiagnosed until the new syndromes are deﬁned.
Other immortal no-fault errors are those arising from the
use of normative approaches to choose the most likely di-
agnosis. For example, when faced with uncertainty, the nor-
mative approach suggests we should pick as a working di-
agnosis the entity with the highest likelihood. Averaged over
all diagnoses this will produce the highest number of correct
diagnoses. As emphasized by Arkes, however, using the nor-
mative approach guarantees that, with some regularity, the
clinician will choose the most likely diagnosis instead of the
correct one. It is inevitable that a more rare condition will
sometimes exist in cases where we suspect something more
common. Although technically this should be considered a
cognitive error, it is inappropriate to fault the clinician
whose diagnosis is wrong as a result. As Arkes concludes,
‘‘There is no solution to the problem, which is an unavoid-
able consequence of the probabilistic nature of the relation-
ship between disease and symptoms.’’
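A toy simulation makes this tradeoff concrete. The sketch below is our illustration rather than anything from the studies cited here, and the two conditions and their base rates are hypothetical: always choosing the more common diagnosis maximizes overall accuracy, yet it misses every case of the rarer disease.

```python
# Illustrative simulation of the normative strategy (hypothetical base rates).
import random

random.seed(0)
prevalence = {"common condition": 0.95, "rare mimic": 0.05}

# Draw 10,000 simulated cases according to the base rates.
cases = random.choices(list(prevalence), weights=list(prevalence.values()), k=10_000)

# Normative strategy: always choose the single most probable diagnosis.
working_diagnosis = max(prevalence, key=prevalence.get)

correct = sum(case == working_diagnosis for case in cases)
print(f"overall accuracy: {correct / len(cases):.1%}")    # ~95%, the best any fixed choice can do
print(f"rare cases missed: {cases.count('rare mimic')}")  # every single one
```

No strategy that always backs the favorite can do better on average, which is exactly why the resulting misses should not be counted against the individual clinician.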
Several patient-speciﬁc factors contribute to the impossi-
bility of eliminating no-fault diagnostic errors. One of these
is patient noncompliance. Although many interventions are
under way to improve compliance with medical care and
participation in screening programs, patients’ compliance is
never guaranteed. Their ultimate participation, or lack of
participation, in medical care may be inﬂuenced by busy per-
sonal schedules, religious beliefs, attraction to alternative
medicine, or distrust.
A second patient-related factor contributing to no-fault
errors is the inherent variability in how patients perceive
and describe their states of health or their active symptoms.
The information a patient gives may be confusing, contra-
dictory, or inaccurate.
Without understanding each pa-
tient’s personal background, context, and belief systems, it
may be impossible for the physician to accurately compre-
hend the true state of affairs.
This problem illustrates the philosophical argument of
‘‘necessary fallibility’’ presented by Gorovitz and MacIntyre.
This concept applies to all ﬁelds of cognitive en-
deavor, including clinical reasoning in medicine, and sug-
gests that ultimately the state of the world is too complex
to be fully knowable: ‘‘It is inherent in the nature of medical
practice that error is unavoidable, not merely because of the
limitations of human knowledge or even the limits of human
intellect, but rather because of the fundamental epistemo-
logical feature of a science of particulars.’’ Science can never
predict the exact course of a hurricane because of the inﬁ-
nitely many interacting environmental and topographical attributes.
Similarly, medical diagnosticians will be forever
challenged by the subtle and unknowable interplay of vari-
ables (in the disease agent, in the host response, in the en-
vironment, in how the patient describes his or her symp-
toms, in testing, and even in the physician’s powers of
observation) that determine how a disease will present itself
and be perceived by the clinician. Kennedy has argued that
the state of uncertainty surrounding clinical decision making
is so profound and pervasive that it may be inappropriate to
judge the quality of medical diagnosis using the usual stan-
dards of rational decision making.
System Errors

The prevailing paradigm acknowledges that error in medical
care has two distinct roots: At the ‘‘sharp end’’ is the indi-
vidual provider who interacts with the patient and makes
the mistake. At the ‘‘blunt end’’ are the latent ﬂaws in the
health care system that provide the setting, the framework,
and the predisposition for the error to occur. These latent
factors include the system’s organizational structure, culture,
policies and procedures, the resources provided, the ground
rules for communication and interaction, and performance
detractors such as excess provider workload.
Can we reduce diagnostic errors related to system issues?
Reﬂecting work by Reason, the
dominant role of system factors has assumed center stage in
both understanding and correcting errors in medicine. As
summarized by Bogner, ‘‘a systems approach is necessary to
effectively address human error in medicine,’’ a theme that
was repeatedly endorsed in the Institute of Medicine
report, To Err is Human.
Compared with error-improvement
strategies that focus on individual providers, system-level
changes have the advantage of potentially decreasing error
rates for all involved providers, and over extended periods of time.
Laboratory errors provide an example of how powerful sys-
tem interventions can be in reducing diagnostic errors. The
scope and accuracy of medical diagnosis increased dramati-
cally in the 20th century in parallel with the emergence of
laboratory testing and other diagnostic tests, such as X-rays
and electrocardiograms. In the beginning, however, the ac-
curacy and reliability of medical testing varied widely. A sur-
vey in 1949 of 18 leading clinical laboratories in Connect-
icut found that over one third of the clinical lab results were
unacceptable. Similarly, a mid-century national survey by
the Centers for Disease Control and Prevention estimated
that more than a fourth of all laboratory testing results na-
tionwide were unacceptable.
This problem eventually re-
sulted in standardization and regulation, inspection, and in-
sistence on quality control. Although lab errors are still with
us, they are now rare as a result of such changes.
Diagnostic errors related to delays are common and rep-
resent a large area where system interventions could be ef-
fective. Delayed diagnosis often reﬂects inefﬁciency in di-
agnostic evaluation, suboptimal coordination of care, or lack
of effective communication. These are all system factors that
can be optimized through attention to system design.
Can we eliminate diagnostic errors related to system is-
sues? Despite great hope for reducing diagnostic errors re-
lated to system-dependent factors, it will not be possible to
totally eliminate system-related errors. At least four factors
support this conclusion:
• System improvements degrade over time. Permanent im-
provement remains the ideal, but new policies are even-
tually forgotten, organizational changes vary with the as-
signed staff, and the enthusiasm for improvement wanes.
• The ‘‘ﬁx,’’ though correcting the old problem, may intro-
duce entirely new opportunities for error.
• Systems must necessarily evolve in step with the evolution
of health care technology and management. Systems will
develop and mature along a learning curve, and system
repairs will always, to some extent, lag behind and fall
short of achieving perfection.
• Whereas the ideal ﬁx would eliminate the chance for er-
rors, in practice we often encounter tradeoffs: the oppor-
tunity for errors is reduced in one system, but increased in
another. A current example is the movement to limit the
number of hours resident trainees can work without rest
or sleep. Although limits on work hours may decrease er-
rors related to fatigue, new problems may arise from the
inevitable hand-offs created by new coverage systems and
new problems of coordinating care when the physician
who knows the patient best is now present only eight to
12 hours of the 24-hour work day. A related weakness
has been pointed out by Perrow:

Fixes, including safety devices, sometimes create new accidents,
and quite often merely allow those in charge to run
the system faster, or in worse weather, or with bigger explosives.
Signal detection theory illustrates the inevitability of
tradeoffs. Originally introduced to describe the perceptual
performance of radar operators, signal detection theory can
describe any situation in which a yes–no decision is made.
For example, the radiologist must decide whether a chest
X-ray shows a tumor or is normal. Problems arise, however,
for tumors that are difﬁcult to appreciate, and when chance
conﬂuences of normal shadows can simulate a tumor when
none is there. The radiologist must choose a threshold be-
yond which the report will indicate a tumor is present. With
a lower threshold, the radiologist will have a higher sensi-
tivity in detecting tumors, but at the expense of more false
positives. With a more stringent threshold, false alarms will
be minimized, but at the expense of missing some tumors.
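The tradeoff can be sketched numerically. The toy model below is our illustration, with hypothetical score distributions rather than data from any radiology study: readings from normal studies and from studies containing tumors overlap, so moving the reporting threshold only exchanges missed tumors for false alarms.

```python
# Toy signal-detection model: overlapping "suspicion scores" for normal
# studies and for studies that contain a tumor (hypothetical distributions).
import random

random.seed(0)
normals = [random.gauss(0.0, 1.0) for _ in range(50_000)]  # score when no tumor is present
tumors = [random.gauss(1.5, 1.0) for _ in range(50_000)]   # score when a tumor is present

# Sweep the reporting threshold: each setting trades sensitivity
# against the false-positive rate; no threshold eliminates both errors.
for threshold in (0.0, 0.75, 1.5):
    sensitivity = sum(score > threshold for score in tumors) / len(tumors)
    false_positive_rate = sum(score > threshold for score in normals) / len(normals)
    print(f"threshold {threshold:.2f}: sensitivity {sensitivity:.2f}, "
          f"false-positive rate {false_positive_rate:.2f}")
```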
Health system planners need to seek the right balance
point, one that reduces errors optimally across the whole
system, taking into account the tradeoffs that inevitably
arise. Consider the need for communication between the
emergency room and the admitting ward: Too little com-
munication risks poor coordination of care, and too much
would bog down both areas, leading to other errors. Physi-
cian ‘‘alerts’’ of abnormal labs are another example: If there
are too few ‘‘alerts’’ some lab abnormalities may be missed,
but if there are too many, the ‘‘alert’’ loses its functional
signiﬁcance. The difﬁculty of optimally allocating medical
resources is a ﬁnal example: Fixing a system problem in one
area should not consume so many resources that other areas
become vulnerable. The existence of errors may sometimes
indicate a need for change, or may just suggest the need to
re-evaluate the balance point of tradeoffs that were used to
set the initial policy.
Cognitive Errors

Perception. Diagnosis begins with perception. The phy-
sician must identify the physical ﬁndings, the pathologist
must recognize the abnormalities of histology, and the radi-
ologist must perceive the differences between normal and
abnormal densities. The available evidence suggests that
even in this ﬁrst stage of the diagnostic process, seemingly
the simplest and most straightforward, errors occur at non-
trivial rates. For example, Berlin has summarized the studies
of diagnostic errors in radiology, ﬁnding rates that range from
4% in clinical practice series (large numbers of normal ﬁlms)
to 30% in prospective studies incorporating larger numbers
of abnormal ﬁlms.
In 80% of these cases, abnormal details
were not perceived, and in 20%, abnormal details were iden-
tiﬁed but misinterpreted.
Expertise in the type of visual
diagnosis used by radiologists seems to involve an interaction
of two distinct components: visual perception and domain-
speciﬁc knowledge of both the normal expected ﬁndings and
all possible abnormal ﬁndings.
Variability in the physical
examination is a related problem that detracts from the ac-
curacy of diagnosis.
Hypothesis generation. The initial information base often
cues strongly matching candidate diagnoses, or a likely clin-
ical framework for analysis. If this process occurs without
much deliberate effort, other reasonable possibilities may not
be similarly generated. An example is the patient who pre-
sents to the emergency department with chest pain from a
dissecting aneurysm. The clinician may mistakenly conclude
that the patient has pain related to myocardial ischemia and
miss the diagnosis of dissecting aneurysm because myocardial
ischemia is much more common, and therefore more ‘‘avail-
able’’ in memory.
Medical diagnosis is a specialized example of decision
making under uncertainty. In familiar contexts, clinicians
make decisions without much conscious deliberation, and
medical experts routinely practice in this fashion. Clinicians
typically use a variety of heuristics, or rules of thumb, for
efﬁciently arriving at decisions in the face of limited time or
information. For example, diagnoses are established using heu-
ristics based on representativeness, availability, or extrapo-
lation. The power of heuristics is enormous, allowing clini-
cians to navigate the diagnostic challenges of everyday life
and make effective decisions, usually accurate, in real time
when arduous working out of probabilities is not possible.
Heuristic solutions free up cognitive resources so that they
can be applied toward other demands. The price for using
these powerful tools, however, is predictable error reﬂecting
the inherent biases associated with each of these heuristics.
Data interpretation. The probability of the initial hy-
pothesis is adjusted upwards or downwards using test results
to calculate a new probability using Bayes’ theorem. Un-
fortunately, few clinicians are skilled in using Bayes’ theo-
rem, and in practice it is probably more common for tests
to be interpreted without taking into account the charac-
teristics (sensitivity and speciﬁcity) of the test itself.
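The calculation itself is straightforward, which makes the gap between normative theory and everyday practice all the more striking. The sketch below is a minimal worked example of ours; the sensitivity, specificity, and pre-test probability are hypothetical numbers chosen only to illustrate the arithmetic.

```python
# Bayes' theorem applied to a single diagnostic test result
# (illustrative numbers only).
def post_test_probability(prior, sensitivity, specificity, positive_result=True):
    """Update a pre-test probability of disease with one test result."""
    if positive_result:
        true_positives = sensitivity * prior
        false_positives = (1 - specificity) * (1 - prior)
        return true_positives / (true_positives + false_positives)
    false_negatives = (1 - sensitivity) * prior
    true_negatives = specificity * (1 - prior)
    return false_negatives / (false_negatives + true_negatives)

# A positive result on a 90%-sensitive, 90%-specific test for a disease with
# a 2% pre-test probability yields only about a 16% post-test probability,
# far lower than a reading that ignores the base rate might suggest.
print(f"{post_test_probability(prior=0.02, sensitivity=0.90, specificity=0.90):.2f}")
```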
Veriﬁcation. Clinicians, like all decision makers, strongly
favor their initial hypotheses and often stop searching for
additional possibilities. This tendency leads to a number of
cognitive errors collectively referred to as premature closure.
This includes factors such as overconﬁdence and con-
ﬁrmation bias, which foster the tendency to favor conﬁrming
evidence over counter-evidence that might exclude the diagnosis.
Another problem is posed by the patient with two
or more medical disorders: Clinicians from their ﬁrst days of
training are exhorted to search for unifying explanations of
a patient’s multiple symptoms. Occam’s razor speciﬁcally
instructs the clinician that it is more likely for a single dis-
ease to explain multiple symptoms than it is for multiple
diseases to do so. The test of time has conﬁrmed the wisdom
of this chestnut, but it will obviously lead to diagnostic errors
in those patients who truly have more than one active process.

Can we reduce cognitive errors? In selecting the title for
their seminal work on medical errors, the Institute of Med-
icine identiﬁed the essential conundrum we face. To Err is
Human captures a feeling of inevitability. The thought sug-
gests that we can never eliminate errors as long as we are
human. Is it possible to improve perception, memory, or de-
cision making? In this section we consider this question from
two perspectives. First we address efforts to directly improve
cognition. We then consider an alternative, indirect ap-
proach, attempting to improve diagnostic accuracy using a systems-level approach.
A. Improving cognition directly
Can we train better thinkers? Bordage argues that we can begin
to train better diagnosticians by improving the quality of
training in physical diagnosis.
By teaching discriminative
skills and by providing more examples and repetition, stu-
dents can improve their clinical decision-making skills.
Can we learn to avoid biased judgment? Biased judgment is
common in clinical reasoning.
There are some isolated re-
ports of success in reducing the likelihood of bias. Larson et
al., for example, found that the diagnoses made by medical
trainees instructed about the pitfalls of premature closure
were more accurate than those made by peers who were not
so instructed. Short-term success was also identiﬁed after
students completed a course in statistics, showing improved
ability of the students to analyze statistical problems. In con-
trast, similar transference could not be demonstrated for
courses in logic,
and Regehr and Norman have concluded
that, in general, effective ways to teach students how to
avoid the pitfalls of using heuristics have yet to be identiﬁed.

Even with highly specialized training, physicians are lim-
ited by a cognitive system that has evolved to be good at
certain kinds of tasks and that faces predictable pitfalls. In
the domain of hypothesis generation and problem solving,
human cognition has evolved mechanisms that allow for ef-
ﬁciency at the expense of various forms of creativity. For
example, people sometimes fail to notice analogies to prior
experience. In a study by Gick and Holyoak, for example,
people were ﬁrst shown a solution to a military problem that
involved marching several platoons simultaneously down
multiple roads that led to an attack site so that they all
arrived at once. When presented, minutes later, with a prob-
lem in which they needed to ﬁnd a way to irradiate a tumor
without damaging surrounding tissue, they failed to notice the analogy.
Problem-based learning. Medical educators have tried to im-
prove diagnostic reasoning through innovative curriculum
changes that emphasize skills in clinical reasoning. The best-
studied example has been the change to problem-based
learning at many leading institutions. For the major part of
the past century, medical knowledge was taught to students
one subject at a time, proceeding from basic subjects such
as anatomy and biochemistry in the ﬁrst year to more ad-
vanced subjects such as pathophysiology in the second year.
In this framework, medical decision making was not taught
as a separate course, but evolved as the student’s knowledge
base expanded and the opportunity for clinical contact with
patients increased in the third and fourth years. In contrast,
the problem-based learning approach exposes students to
clinical problems from the very start. A basic hypothesis of
this approach is that clinical decision making involves spe-
ciﬁc skills and that these can be learned and applied effec-
tively to novel clinical situations, independent of learning
the subject matter (anatomy, biochemistry, etc.) ﬁrst. The
impact of problem-based learning was critically reviewed re-
cently by Colliver.
Only three studies of problem-based
learning were identiﬁed that used random assignment of
medical students to training condition, and none revealed
any clear advantage of problem-based learning in terms of
the scores on standardized exams or overall performances. In
studies involving self-selection into a problem-based curric-
ulum, however, there was evidence for a positive impact: At
Rush Medical College at the end of a seven-month exposure
to problem-based learning, students gave more accurate di-
agnoses and provided reasoning chains that were more com-
prehensive and elaborated compared with students in the
standard curriculum. Similarly, students in the problem-
based learning track at Bowman Gray School of Medicine
of Wake Forest University had board scores comparable to
those of students in the standard track, but scored better on
scales assessing factual knowledge, ability to perform the his-
tory and physical exam, deriving a differential diagnosis, and
organizing and expressing information. Elstein and others
have advanced the concept of ‘‘content
speciﬁcity’’ to explain the general failure of problem-based
learning to produce superior diagnosticians: Experts have
knowledge that is more extensive and better integrated.
Metacognitive training. Ultimately, accurate diagnosis requires
not only extensive domain-speciﬁc knowledge and sophis-
ticated skills in clinical reasoning, but also the ability to be
actively aware of how effectively one is thinking. Baron
describes one application of this process, the ability to pre-
serve active open-mindedness:
Good decision making involves sufﬁcient search for possibil-
ities, evidence, and goals, and fairness in the search for evi-
dence and in the use of evidence. . . . People typically depart
from this model by favoring possibilities that are already
strong. We must make an effort to counteract this bias by
looking actively for other possibilities and evidence . . . ac-
tively open-minded thinking.
Baron goes on to present evidence that individuals who
practice open-minded thinking show reduced tendencies to
have cognitive biases and produce ‘‘better’’ decisions.
Clinical educators adopt this approach when they empha-
size to their students the importance of deriving a complete
differential diagnosis. This ensures that multiple possibilities
are considered, at least at the outset of the search for the
diagnosis. A simple strategy that may also work to promote
actively open-minded thinking is the ‘‘crystal ball experi-
ence,’’ a device used by military trainers to promote open-
minded thinking in military planning.
In this exercise,
the participants are asked to devise a plan to achieve a par-
ticular objective. After presenting the plan they are told
that, according to the crystal ball which can foresee the fu-
ture, this plan won’t work. What plan would they suggest
instead? This promotes the active examination of ﬂaws in
the original plan and a search for alternatives. All good
bridge players know this strategy, but too few clinicians do.
B. Adopting systems solutions to cognitive errors
Independent of our ability to improve cognition directly, it
should be possible to improve diagnostic accuracy indirectly
by a systems-level approach. Changes at the systems level
can compensate for the predictable patterns of thought that
lead to error.
Improving perception. The way data are presented can have a
profound inﬂuence on how easily abnormalities are detected.
Highlighting abnormal laboratory test results makes them
easier to detect when they are presented in a list with many
normal test values. Similarly, perception is enhanced by
graphic presentations and presentations that facilitate rec-
ognition of trends. The future holds great promise for similar
advances in enhancing perception using technologic ad-
vances that incorporate such principles of human-factors en-
gineering. An alternative approach is to supplement human
perception using computer-aided diagnosis. Jiang et al. re-
cently demonstrated the potential to improve the radiologic
detection of cancer in reading mammograms: Computer-
aided diagnosis improved both the sensitivity and the spec-
iﬁcity of this test more than did independent double readings by radiologists.
Availability of expertise. Emergency department physicians
misinterpret 1–16% of plain radiographs and up to 35% of
cranial tomographic studies.
A direct approach to this
problem would be to better train these non-radiologists in
radiographic interpretation. The indirect, systems-level ap-
proach would be to ensure that trained radiologists are avail-
able to help interpret these studies, the approach endorsed
by the American College of Radiology. Supporting the sys-
tems-level approach, Espinosa and Nolan found that the rate
of missed ﬁndings on X-rays taken in an emergency depart-
ment was reduced from 1.2% to 0.3% by requiring second
reviews of each study by radiologists. However, only about
20% of U.S. hospitals have radiology staff present 24 hours
a day. Alternative approaches include using tele-radiology to
assist front-line clinicians, or on-site radiology trainees. The
relative impacts of these three interventions are still being evaluated.
Second opinions. Second opinions have proven to be a val-
uable strategy for reducing medication errors. The Institute
for Safe Medication Practices endorses the use of second
checks for complex or risk-prone medication requests such
as parenteral nutrition, chemotherapy, or neonatal therapeu-
tics. This same approach might similarly be used to reduce
diagnostic errors. Kronz and coworkers studied the potential
beneﬁt of this idea by requiring a second opinion on every
surgical pathology diagnosis referred to the Johns Hopkins
Hospital over a 21-month period.
The second opinions led
to clinically relevant changes of the diagnoses in 1.4% of
6,171 cases. In selected types of cases, the corrected error
rates were even higher: 5.1% in tissue from the female re-
productive tract, and 9.5% in serosal samples.
Clinical guidelines and clinical decision-support systems. Given
the documented variability in the extents to which clini-
cians act coherently and in accord with the laws of proba-
bility, it is reasonable to hope that clinical practice guide-
lines could reduce the rate of diagnostic errors. Guidelines
standardize the approaches to clinical problems and mini-
mize the variability in response patterns. To the extent that
they incorporate appropriate base rates of disease, apply cor-
rect probability estimates, and minimize the errors induced
by the use of heuristics, guidelines should in theory improve
the accuracy of diagnosis and management. Guidelines,
however, are themselves heuristics, and so they cannot pro-
vide a full solution. In addition, clinicians in practice are
sometimes unaware of guidelines relevant to their patients,
and even when they are aware, often do not follow the
guidelines appropriately. Ioannidis and Lau recently provided
an evidence-based review of the efﬁcacy of guidelines and
other interventions designed to reduce medical errors. Al-
though guidelines reduced errors in treatment and preven-
tion settings, this was not the case when guidelines were
used to reduce diagnostic errors. In the only methodologi-
cally acceptable study they identiﬁed regarding diagnostic
errors, the use of a clinical guideline actually increased the
rate of missed radiologic fractures in an emergency department.

Probably the most promising approach to improving di-
agnostic accuracy is to incorporate decision aids directly into
the active practice of medicine using computer-assisted
‘‘expert systems.’’ Examples where clinical decision-support
systems excel are numerous, including improved compliance
with guidelines, improved antibiotic utilization, and im-
proved use of preventive health measures.
Hunt et al. re-
cently provided an evidence-based review of clinical deci-
sion-support systems, identifying 68 published evaluations
that reported patient outcomes or changes in provider be-
haviors. Studies were grouped into four categories: drug dos-
ing, preventive care, clinical diagnosis, and general process
of care. Overall, two thirds of the studies showed positive
impacts on patient outcomes or provider behaviors, but, in-
terestingly, none of the four studies of diagnosis (three on
abdominal or chest pain, one on pediatric primary care prob-
lems) were positive.
The reasons diagnostic processes were
not improved in these studies are not clear, but there are
both theoretical and practical limitations on the input data
set that might limit the functionality of expert systems. The
quality of the input data set is critical in determining the
quality of the hypotheses the expert system will generate. In
practice, clinicians may not have the time to input all of
the data needed. A greater concern, identiﬁed by Regehr and
Norman, is that the input data set may be biased by any
initial hypotheses that were being entertained. As empha-
sized by Bordage, we tend to see what we are looking for.
Thus, cognitive shortcomings can undermine potential im-
provements from system changes.
Can we eliminate cognitive errors? The ﬁrst insurmount-
able barrier in eliminating cognitive errors is the impossi-
bility of knowing all of the relevant facts all of the time. We
discussed this issue earlier, and classiﬁed it as a no-fault error
in the sense that much of this uncertainty lies outside the clinician’s control.
Even if we could know all the relevant facts, we would
be unable to process them quickly. The limitations of human
information processing have been described in Simon’s
‘‘principle of bounded rationality.’’
According to Simon,
Because of the limits on their computing speeds and power,
intelligent systems must use approximate methods. Optimality
is beyond their capabilities; their rationality is bounded....
Human short-term memory can hold only a half dozen
chunks, an act of recognition takes nearly a second, and the
simplest human reactions are measured in tens and hundreds
of milliseconds, rather than microseconds, nanoseconds, or picoseconds.
A direct consequence of bounded rationality is the inev-
itability of diagnostic errors: ‘‘When intelligence explores
unfamiliar domains, it falls back on ‘weak methods,’ which
are independent of domain knowledge. People ‘satisﬁce’—
look for good-enough solutions—instead of hopelessly
searching for the best.’’
This applies to the use of heuristics, which are enormously
powerful, yet inherently ﬂawed. James Reason, the British
psychologist who has studied human error extensively, sum-
marizes the problem perfectly: ‘‘Our propensity for certain
types of error is the price we pay for the brain’s remarkable
ability to think and act intuitively.’’
Heuristics play the
odds: Sometimes, particularly under unusual circumstances,
these rules of thumb lead to wrong decisions. The theme of
tradeoffs, which comes up so often in the context of im-
proving decision making, is especially appropriate to the use
of heuristics, which inherently involves the tradeoff between
efﬁciency and accuracy. Finally, the heuristics themselves,
and for that matter any cognitive process contributing to
decision making, can be misapplied, particularly if we are
fatigued, distracted, stressed, or unfamiliar with the condi-
tion in question.
Studying some of the major industrial catastrophes of our
time, Perrow identiﬁed an additional argument for the in-
evitability of diagnostic errors. In his ‘‘normal accident the-
ory,’’ he concludes that systems that are tightly coupled, with
complex and hidden interactions, will inevitably produce accidents.
Although Perrow did not study medical mishaps,
we propose that medical diagnosis involves analogous pro-
cesses that are hidden and complex. The diagnostic process
is very much a black box, in which the physician applies
imperfect knowledge to somehow make sense out of a case
presentation and lab data that are typically nonspeciﬁc and
incomplete. Moreover, it is probably rare that the physician
applies an appropriate amount of strategic mental monitor-
ing to make sure all these cognitive processes are accurate
and appropriate. In this framework, diagnostic errors may be
as inevitable as those that befall nuclear power plants, air-
craft, reﬁneries, and mines.
Table 2
Potential for Reducing or Eliminating the Three Types of Diagnostic Errors

No-fault errors
• Potential for reduction: Limited
• Mechanisms for reduction: Improve testing and screening; expand medical knowledge
• Potential for elimination: None
• Reasons errors will persist: The limits of medical knowledge; inherent uncertainty about the state of the world; disease is silent or presents atypically; failure of normative processes

System errors
• Potential for reduction: Substantial
• Mechanisms for reduction: Optimize the sensitivity, specificity, availability, and timeliness of testing, and improve test reporting; improve the efficiency and timeliness of diagnostic evaluation; improve system processes that impact diagnosis, such as communication and coordination; improve the supervision of trainees; reduce detractors such as fatigue, distractions, and stress
• Potential for elimination: None
• Reasons errors will persist: Degradation of system improvements over time; new fixes create new opportunities for error; tradeoffs that shift rather than eliminate errors

Cognitive errors
• Potential for reduction: Potentially substantial
• Mechanisms for reduction: Cognitive solutions (cognitive training to improve metacognition, understand pitfalls of heuristics, and promote actively open-minded thinking); system solutions (enhance data presentation to facilitate perception; use second opinions; increase use of specialists; develop and use clinical decision-support systems)
• Potential for elimination: None
• Reasons errors will persist: Cognitive limitations (working memory, knowledge, attention, processing speed); risks of using heuristics; cognitive biases; diagnostic reasoning is complex and hidden
Discussion

As the second leading cause of medical error, diagnostic error
is a major health care concern and worthy of much more
attention than it has received. The sentinel-event registry
established by the Joint Commission on Accreditation of
Healthcare Organizations (JCAHO) does not even track di-
agnostic error as a category.
Likewise, of the 79 practices
recently evaluated by the Agency for Healthcare Research
and Quality that might decrease medical errors, only two
directly or indirectly deal with diagnostic errors. Diagnostic
error has probably remained in the background of the cur-
rent dialogue on medical error because the causes are more
subtle and the solutions are less obvious than they are for
problems such as medication error and wrong-site surgery.
Another reason to focus on reducing diagnostic error is
that, in contrast to other types of medical errors, there is
little opportunity to minimize the impact on the patient
once the error is made. The emphasis must be on preventing
the error in the ﬁrst place.
The potential for reducing or eliminating diagnostic errors
in each of the three main categories (no-fault, system-re-
lated, cognitive) is summarized in Table 2. With regard to
reducing diagnostic errors, there is clearly reason to be op-
timistic: In each of the three categories the potential to re-
duce diagnostic errors is both real and achievable. Even no-
fault errors are likely to diminish over time as we are
increasingly able to detect diseases in preclinical stages and
illnesses with atypical presentations.
Experts uniformly advocate a focus on identifying and re-
pairing latent system ﬂaws as the most productive approach
to improving the safety of medical care,
and our analysis
suggests this approach would also reduce diagnostic errors.
Latent ﬂaws in the health care system that contribute to
diagnostic errors should be studied and addressed. Areas that
should be targeted for intervention include supervision of
trainees, availability of expertise, coordination of care, com-
munication procedures, training and orientation, quality and
availability of tests, suboptimal thinking environments
(those producing undue stress, fatigue, distractions, excessive
workload), and inefﬁcient processes that lead to delays in diagnosis.

Figure 1. The relationships between reliability and cost in diagnostic decision making. As clinicians improve their diagnostic competencies from beginning-level skills (use of inductive reasoning) to intermediate levels (use of heuristics) to expert-level skills, reliability and accuracy improve with decreasing cost and effort (descending arrows). In any given case we can improve diagnostic accuracy (e.g., with second opinions or monitoring), but at added cost, time, or effort (ascending arrow).
In contrast, the possibility that we could reduce diagnostic
errors by focusing on cognitive elements has remained
largely unexplored. This may reﬂect the general sentiment
of the current performance improvement literature, which
presents system-level approaches as the preferred route to
achieve organizational excellence, as opposed to approaches
to change how people think. This skepticism regarding cog-
nitive interventions was recently captured by Nolan: ‘‘Al-
though we cannot change the aspects of human cognition
that cause us to err, we can design systems that reduce errors
and make them safer for patients.’’
We would like to make
the case that there may be substantial potential for improv-
ing the cognitive component of medical diagnosis, and pro-
pose that this should be a major research focus to improve
patient safety. There are many potential avenues of explo-
ration: training to improve metacognition, courses on diag-
nostic reasoning and pitfalls, and second-generation prob-
lem-based learning approaches that build on the lessons
learned from the successful ﬁrst-generation programs.
Even if we are not yet able to improve cognitive perfor-
mance per se, it may be possible to apply systems solutions
to problem-prone cognitive tasks such as perception or dif-
ferential diagnosis. Examples include mandated ‘‘second
readings,’’ enhanced availability of subject-matter experts,
improved supervision of trainees, and developing more ef-
fective guidelines and clinical decision-support systems. We
acknowledge, however, that virtually none of the approaches
outlined have been validated in practice, and several have
little more than anecdotal support from ﬁelds outside med-
icine. The ﬁeld is just being deﬁned, and the opportunity for
improvement in this direction is large.
In thinking about how to reduce diagnostic errors, a re-
curring theme is the problem of tradeoffs. We can be more
certain, but at a price, be it dollars, effort, or time (Figure
1). Where should we set the bar? We can keep an open
mind, we can perform a more extended search for possibil-
ities, we can be more careful in interpreting and assessing
data. At some point, however, these strategies become too
expensive, or even self-defeating. For example, we can at-
tempt to rule out every possibility in a given case, but this
can lead to excessive testing, with its own risks and costs to
the patient, and also the likelihood that on occasion a false-
positive test result will lead us astray. Increased monitoring
of cognition would also systematically increase delays in
reaching a diagnosis, as extra tests and treatments are or-
dered to increase certainty. Decision making slows if we su-
perimpose cognitive or systemic checks and balances—can
we afford this?
At least in some cases the answer is, surprisingly, ‘‘Yes.’’
For example, mandatory second opinions before certain types
of elective surgery reduce the number of unnecessary oper-
ations, saving two to four dollars for every dollar spent. Is
it possible that programs that mandate second readings in
pathology or radiology might realize similar savings? The
possibility that optimizing diagnostic accuracy might actually
be the most efﬁcient approach is not a new concept: Con-
sider how easy it is for the expert to solve a problem, com-
pared with the labor-intensive approach taken by the novice
(Figure 1). Although we shouldn’t forget the years of
training needed to develop expertise, once it exists, we
should design our systems to take advantage of experts’ skills,
to achieve highly accurate diagnoses with the least effort and cost.

Whereas the potential to reduce diagnostic errors is real
and substantial, we conclude that eliminating such errors is
not a realistic goal. There are fundamental and insurmount-
able factors that preclude this possibility. We should focus
instead on increasing the visibility of diagnostic errors in the
current patient safety dialogue, prioritizing opportunities to
reduce diagnostic errors wherever we can, and setting re-
search priorities to study potential avenues for improving
diagnostic accuracy in the future, such as interventions
aimed at improved cognition.
This work was supported by a grant from the National Patient Safety Foundation.

References
1. Berwick DM. Taking action to improve safety: how to increase the odds
of success. In: Enhancing Patient Safety and Reducing Errors in Health
Care. Chicago, IL: National Patient Safety Foundation, 1999:1–11.
2. Institute of Medicine. To Err is Human; Building a Safer Health Sys-
tem. Washington DC: National Academy Press, 1999.
3. Arkes H. Why medical errors can’t be eliminated: uncertainties and
the hindsight bias. Chron Higher Educ. May 19, 2000.
4. Sprenger GM. Dare to tell it all. 2001. Let’s Talk. Communicating Risk
and Safety in Health Care. Plenary Lecture, delivered at the third An-
nenberg Conference on Enhancing Patient Safety, May 17, 2001, St.
5. Leape L, Brennan TA, Laird N, et al. The nature of adverse events in
hospitalized patients. Results of the Harvard Medical Practice Study II.
N Engl J Med. 1991;324:377– 84.
6. Bartlett EE. Physicians’ cognitive errors and their liability conse-
quences. J Healthcare Risk Manage. Fall 1998:62–9.
7. Tai DYH, El-Bilbeisi H, Tewari S, Mascha EJ, Wiedermann HP, Ar-
roliga AC. A study of consecutive autopsies in a medical ICU: a com-
parison of clinical cause of death and autopsy diagnosis. Chest. 2001;
8. Kassirer JP. Diagnostic reasoning. Ann Intern Med. 1989;110:893–900.
9. Bordage G. Why did I miss the diagnosis? Some cognitive explanations
and educational implications. Acad Med. 1999;74(10 suppl):S138–
10. Regehr G, Norman GR. Issues in cognitive psychology: implications
for professional education. Acad Med. 1996;71:988–1001.
11. Patel VL, Arocha JF, Kaufman DR. A primer on aspects of cognition
for medical informatics. J Am Med Informat Assoc. 2001;8:324–43.
12. Norman GR. The epistemology of clinical reasoning: perspectives from
philosophy, psychology, and neuroscience. Acad Med. 2000;75(10
13. Kassirer JP, Kopelman RI. Cognitive errors in diagnosis: instantiation,
classiﬁcation, and consequences. Am J Med. 1989;86:433–41.
14. Schmidt HG, Norman GR, Boshuizen HPA. A cognitive perspective
on medical expertise: theory and implications. Acad Med. 1990;65:
15. Elstein AS. Clinical reasoning in medicine. In: Higgs J, Jones M (eds).
Clinical Reasoning in the Health Professions. Oxford, England: But-
16. Edlow JA, Caplan LR. Avoiding pitfalls in the diagnosis of subarach-
noid hemorrhage. N Engl J Med. 2000;342:29–35.
17. Amendo MT, Brown BA, Kossow LB, Weinberg GM. Headache as the
sole presentation of acute myocardial infarction in two elderly patients.
Am J Geriatr Cardiol. 2001;10:100–1.
18. Saint S, Go AS, Frances C, Tierney LM Jr. Case records of the Mas-
sachusetts General Hospital—a home court advantage? N Engl J Med.
19. Baron J. Thinking and Deciding. 3rd ed. Cambridge, U.K.: Cambridge
University Press, 2000.
20. Gross PR, Levitt N, Lewis MW. The ﬂight from science and reason.
Ann NY Acad Sci. 1996.
21. Kassirer JP, Kopelman RI. Learning Clinical Reasoning. Baltimore, MD:
Williams and Wilkins, 1991.
22. Gorovitz S, Macintyre A. Toward a theory of medical fallibility. Has-
tings Center Rep. 1975;5:13– 23.
23. Gawande A. Final cut. Medical arrogance and the decline of the au-
topsy. The New Yorker. March 19, 2001:94–9.
24. Kennedy M. Inexact sciences: professional education and the devel-
opment of expertise. Review of Research in Education. 1987;14:133–
25. Reason J. Human Error. Cambridge, U.K.: Cambridge University Press,
26. Cook RI, Woods DD. Operating at the sharp end: the complexity of
human error. In: Bogner MS (ed). Human Error in Medicine. Hillsdale,
NJ: Lawrence Erlbaum Associates, 1994:255– 310.
27. Reason J. Managing the Risks of Organizational Accidents. Brookﬁeld,
VT: Ashgate, 1997.
28. Leape L, Lawthers AG, Brennan TA, et al. Preventing medical injury.
Qual Rev Bull. 1993;19:144– 9.
29. Leape LL. The preventability of medical injury. In: Bogner MS (ed).
Human Error in Medicine. Hillsdale, NJ: Lawrence Erlbaum Associates,
30. Moray N. Error reduction as a system problem. In: Bogner MS (ed).
Human Error in Medicine. Hillsdale, NJ: Lawrence Erlbaum Associates,
31. Nolan TW. System changes to improve patient safety. BMJ. 2000;320:
32. Bogner MS. Human error in medicine: a frontier for change. In: Bogner
MS (ed). Human Error in Medicine. Hillsdale, NJ: Lawrence Erlbaum
33. Spath PL. Reducing errors through work system improvements. In:
Spath P (ed). Error Reduction in Health Care: A Systems Approach
to Improving Patient Safety. San Francisco, CA: Jossey–Bass, 2000:
34. Reiser SJ. Medicine and the Reign of Technology. Cambridge, U.K.:
Cambridge University Press, 1978.
35. Jauhar S. When rules for better care exact their own cost. The New
York Times. Jan 5, 1999.
36. Peterson LA, Brennan TA, O’Neil AC, Cook EF, Lee TH. Does
housestaff discontinuity of care increase the risk for preventable adverse
events? Ann Intern Med. 1994;121:866– 72.
37. Perrow C. Normal Accidents: Living with High-Risk Technologies.
Princeton, NJ: Princeton University Press, 1999.
38. Berlin L. Defending the ‘‘missed’’ radiographic diagnosis. Am J Radiol.
39. Berlin L, Hendrix RW. Perceptual errors and negligence. Am J Radiol.
40. Norman GR, Coblentz CL, Brooks LR, Babcook CJ. Expertise in visual
diagnosis: a review of the literature. Acad Med. 1992;67(10 suppl):
41. Sackett DL. A primer on the precision and accuracy of the clinical
examination. JAMA. 1992;267:2638–44.
42. Kahneman D, Slovic P, Tversky A. Judgement Under Uncertainty:
Heuristics and Biases. Cambridge, U.K.: Cambridge University Press,
43. Elstein AS. Heuristics and biases: selected errors in clinical reasoning.
Acad Med. 1999;74:791–4.
44. Dawson NV, Arkes HR. Systematic errors in medical decision making:
judgment limitations. J Gen Intern Med. 1987;2:183– 7.
45. Voytovich AE, Rippey RM, Suffredini A. Premature closure in diag-
nostic reasoning. J Med Educ. 1985;60:302–7.
46. Cheng PW, Holyoak KJ, Nisbett RE, Oliver LM. Pragmatic versus syn-
tactic approaches to training deductive reasoning. Cogn Psychol. 1986;
47. Gick ML, Holyoak K. Schema induction and analogical transfer. Cogn
48. Tversky A, Kahneman D. Availability: a heuristic for judging frequency
and probability. Cogn Psychol. 1973;5:207–32.
49. Hmelo CE. Cognitive consequences of problem-based learning for the
early development of medical expertise. Teach Learn Med. 1998;10:
50. Eva KW, Neville AJ, Norman GR. Exploring the etiology of content
speciﬁcity: factors inﬂuencing analogic transfer and problem solving.
Acad Med. 1998;73(10 suppl):S1–S5.
51. Norman GR. Problem-solving skills, solving problems, and problem-
based learning. Med Educ. 1988;22:279–86.
52. Perkins DL, Salomon G. Are cognitive skills context-bound? Education
53. Higgs J, Jones M. Clinical reasoning. In: Higgs J, Jones M (eds). Clin-
ical Reasoning in the Health Professions. Oxford, U.K.: Butterworth-
54. Mitchell DJ, Russo JE, Pennington N. Back to the future: temporal
perspective in the explanation of events. J Behav Decis Making. 1989;
55. Klein G. Sources of Power: How People Make Decisions. Cambridge,
MA: The MIT Press, 1998.
56. Jiang Y, Nishikawa RM, Schmidt RA, Metz CE, Doi K. Relative gains
in diagnostic accuracy between computer-aided diagnosis and indepen-
dent double reading. In: Krupinski EA. Medical Imaging 2000: Image
Perception and Performance (Proceedings of SPIE, vol. 3981, 2000).
Progress in Biomedical Optics and Imaging. 2000;1:10–5.
57. Kripalani S, Williams MV, Rask K. Reducing errors in the interpreta-
tion of plain radiographs and computed tomography scans. In: Shojania
KG, Duncan BW, McDonald KM, Wachter RM (eds). Making Health
Care Safer. A Critical Analysis of Patient Safety Practices. Rockville,
MD: Agency for Healthcare Research and Quality, 2001.
58. Espinosa JA, Nolan TW. Reducing errors made by emergency physi-
cians in interpreting radiographs; longitudinal study. BMJ. 2000;320:
59. Kronz JD, Westra WH, Epstein JI. Mandatory second opinion surgical
pathology at a large referral hospital. Cancer. 1999;86:2426–35.
60. Garﬁeld FB, Garﬁeld JM. Clinical judgement and clinical practice
guidelines. Int J Technol Assess in Health Care. 2000;16:1050–60.
61. Ioannidis JPA, Lau J. Evidence on interventions to reduce medical
errors. An overview and recommendations for future research. J Gen
Intern Med. 2001;16:325–34.
62. Klassen TP, Ropp LJ, Sutcliffe T, et al. A randomized controlled trial
of radiograph ordering for extremity trauma in a pediatric emergency
department. Ann Emerg Med. 1993;22:1524– 9.
63. Trowbridge R, Weingarten S. Clinical decision support systems. In:
Shojania KG, Duncan BW, McDonald KM, Wachter RM (eds). Mak-
ing Health Care Safer. A Critical Analysis of Patient Safety Practices.
Rockville, MD: Agency for Healthcare Research and Quality, 2001.
64. Hunt DL, Haynes RB, Hanna SE, Smith K. Effects of computer–based
clinical decision support systems on physician performance and patient
outcomes: a systematic review. JAMA. 1998;283:2816–21.
65. Simon HA. Invariants of human behavior. Annu Rev Psychol. 1990;
66. Sentinel event statistics. Joint Commission on Accreditation of
Healthcare Organizations. ⟨http://www.jcaho.org⟩. Accessed 5/3/02.
67. Shojania KG, Duncan BW, McDonald CJ, Wachter RM. Making
Health Care Safer: A Critical Analysis of Patient Safety Practices. Ev-
idence Report/Technology Assessment #43; AHRQ Publication No 01-
E058. Rockville, MD: Agency for Healthcare Research and Quality,
68. Ruchlin HS, Finkel ML, McCarthy EG. The efﬁciency of second-opin-
ion consultation programs: a cost–beneﬁt perspective. Med Care. 1982;