Biases in Medical Decision Making

Abstract

Many researchers have studied good clinical reasoning, pointing out that physicians often fail at it. According to some authors, physicians often make mistakes because they unknowingly fail to observe the laws of formal logic and their reasoning becomes influenced by contextual factors. The medical decision may be considered fundamentally biased since the use of judgment heuristics and a combination of cognitive-related and system-related factors limits physicians' rationality. A number of biases can affect the ways in which doctors gather and use evidence in making diagnoses. Biases also exist in how doctors make treatment decisions once a definitive diagnosis has been made. These biases are not peculiar to the medical domain but, rather, are manifestations of suboptimal reasoning to which people are susceptible in general. Nonetheless, they can have potentially serious consequences in medical settings, such as a misdiagnosis or patient mismanagement. Doctors' reasoning is vulnerable to a number of biases that can lead to errors in diagnosis and treatment, but means for alleviating these biases are available. In this chapter the reader is introduced to basic concepts of medical decision making as well as the most studied heuristics and biases in this domain. In the second section of the chapter debiasing techniques are also described, revealing their strengths and weaknesses. While the traditional understanding of clinical reasoning has failed to consider contextual factors, most debiasing techniques seem to fail in promoting sound and safer medical practices. Technological solutions, being data-driven, are fundamental in increasing care safety, but they need to consider human factors. Thus balanced models, cognitive-driven and technology-based, are needed in day-to-day applications to actually improve clinical decisions.

This is a pre-print version. The final version is published in Psychology of Bias (Nova Science Publishers).
Claudio Lucchiari and Gabriella Pravettoni
“Medicine is a science of uncertainty and an art of probability”
William Osler
From a cognitive perspective, the study of medical decision making is of
particular importance since physicians are faced with critical and uncertain
decisions every day and they are supposed to be excellent decision makers in
these conditions. In fact, a bad or a suboptimal decision may result in heavy
consequences both for the patient’s health and the physician’s reputation.
Hence, in medical decision making it is desirable that the gap between an ideal
(rational?) choice and the actual choice be reduced as much as possible.
Physicians need to know what they actually do when making a decision. That
is, they must know how their mind works. This is a quintessentially cognitive issue,
but are doctors actually aware of its relevance in their daily activities? In this
chapter we will try to give an answer to the question, providing a brief picture
of this relevant and complex problem, starting from a survey on medical errors
and concluding with a cognitive analysis of clinical reasoning.
1.1. The Error in Medicine
Errors can be classified according to some basic features with regard to
task, role and knowledge [1, 2]. Task-based errors lie outside of awareness
since they are the result of automatic mind mechanisms. Errors based on role
and knowledge, instead, are generally due to a failure of conscious thought.
The main causes of task-based errors are linked to attention misleading
processes. They can be described as errors due to a failure of the mind’s
monitoring system. When a procedure becomes routine, for instance, the
individual vigilance when performing a task tends to decrease. If some factors
intervene in changing some aspects of the situation, for example, increasing
contextual demands, a low vigilance level can easily cause a mistake.
Both internal and external factors may intervene in contributing to such
mistakes. The most important internal factors (physiological in this case) are
fatigue, stress and insomnia. Furthermore, a certain emotional arousal, e.g.,
frustration, fear, boredom or anxiety, may significantly modulate vigilance
and, more generally, the ability to maintain a required attention level on a task.
Also, there are a number of external factors able to affect attention. Task
interruptions, work overload, or bad interpersonal relationships are all factors
with high impact on the attention system. Indeed, these external factors may
attract attention away from the task, diminishing the ability to correctly
perform it even when the same task was already successfully executed many
times in other situations.
It’s clear that a low vigilance level easily produces behavioral and
cognitive failures, such as slips and lapses [3]. However, we can also argue
that cognitive heuristics and biases are more likely to be automatically applied
within a similar mindset. For instance, the use of a simple pattern-recognition
strategy could unintentionally lead a physician to pursue a given diagnostic journey instead of another, simply due to a lack of attention and low cognitive control.
Role-based and knowledge-based errors, instead, are defects of reasoning
induced by the use of judgment heuristics, and social and motivational biases
[4]. The errors of this type may be influenced by physiological, psychological
and contextual factors. Several studies have emphasized the influence of stress
on performance (meaning, in this case, the ability to generate normative
reasoning). In particular, according to the famous Yerkes-Dodson law [5], the optimal level of performance is achieved at moderate arousal, while extreme arousal (boredom or panic) worsens cognitive performance. Indeed, under stress our attention tends to focus on only one source of information, losing its effectiveness since it misses a comprehensive analysis.
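The inverted-U relationship posited by the Yerkes-Dodson law can be sketched numerically. The quadratic shape and the parameter values below are illustrative assumptions, not part of the original formulation:

```python
def performance(arousal: float, optimum: float = 0.5) -> float:
    """Illustrative inverted-U curve: performance peaks at a moderate
    arousal level and degrades toward both extremes (boredom near 0.0,
    panic near 1.0). The quadratic form and the optimum are assumptions,
    chosen only to mirror the law's qualitative shape."""
    return max(0.0, 1.0 - 4.0 * (arousal - optimum) ** 2)

# Moderate arousal outperforms both boredom and panic in this toy model
scores = {a: performance(a) for a in (0.1, 0.5, 0.9)}
```

Under this sketch, performance peaks at the optimum while low and high arousal are equally degraded, mirroring the boredom/panic symmetry described above.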
Also environment-related factors can facilitate this type of error. Lack of
clear guidelines and protocols, limited access to assessment tools, or
inadequate data accessibility or presentation are typical examples of contextual
factors that may push a physician to rely on heuristic reasoning when
approaching pressing decisions.
According to Vincent [6], in analyzing the aspects that lead to a negative
choice as well as to a mistake, the whole context should be considered
following a system approach since the combination of different factors
generally causes latent errors that are indirectly observable and difficult to
isolate. Actually, the study of the cognitive mechanisms implied in medical
decision making cannot be fully appraised without situating them in a specific
context. For instance, a noisy environment can have a great impact on
cognitive functioning. In particular, the role of task interruptions has been widely investigated.
Interruptions and multitasking (carrying out multiple tasks
simultaneously) are unavoidable in busy clinical environments, and patient
care safety may be compromised if clinicians are unable to deal with them.
The combination of multitasking and interruptions is a potent latent source of
clinical errors.
A study performed by Hall, Pederson and Fairley [7] has demonstrated
that interruptions to hospital personnel significantly increase the rate and
severity of medication administration errors. Thus, the uncontrolled presence
of interruptions in clinical practice is a dangerous event, and there is a strong need to control unnecessary interruptions and multitasking. Interestingly, a
recent study performed in a number of Australian hospitals [8] has shown that
interrupted physicians may also recover the lost time and complete a clinical
task even faster than in uninterrupted situations. However, this may come at a price, since interruptions seem to require people to change work rhythms, in addition to affecting the emotional sphere. An Italian study confirmed the
impact of interruptions on patient care quality and safety, proving that
contextual factors may interfere with cognitive mechanisms in generating
biased tasks and work-related stress [9].
1.2. The Dimension of Medical Error
The real size of medical error is difficult to appraise considering all its
complexity. Data are sparse and not easily comparable. Indeed, different
methodologies are used and great dissimilarities among national health
systems, medical contexts and specialties do exist. Thus data collected in a given context may be of little value when applied to another.
A further degree of approximation derives from the fact that data
collection in this field suffers from an intrinsic bias. Actually, large studies can only retrospectively analyze databases (reporting systems) containing errors
spontaneously reported by physicians and institutions. This is quite a limiting
feature since error reporting systems, even though widely applied all over the
world, are not fed systematically and only a minor part of medical errors are
actually recorded.
For instance, an analysis by Wald and Shojania [10] estimated that only
1.5% of all adverse events result in an incident report, and only 6% of adverse
drug events are identified properly.
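Taking the 1.5% reporting rate estimated by Wald and Shojania at face value, a back-of-the-envelope correction shows how strongly incident reports understate the true number of adverse events. This is a rough sketch; the report count in the example is hypothetical:

```python
def estimate_true_events(reported: int, reporting_rate: float = 0.015) -> float:
    """Scale observed incident reports up by an assumed reporting rate.
    The default 1.5% rate is the estimate cited from Wald and Shojania [10];
    the correction assumes reporting is uniform across event types."""
    if not 0.0 < reporting_rate <= 1.0:
        raise ValueError("reporting_rate must be in (0, 1]")
    return reported / reporting_rate

# A hospital logging 150 incident reports per year would, under this
# assumption, actually experience on the order of 10,000 adverse events.
estimate_true_events(150)  # 10000.0
```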
Other research attempted to analyze diagnostic errors using different data
sources, for instance, analyzing post-mortem exams and correlating them to
ante-mortem diagnoses. However, this approach is resource-demanding, it
limits the possibility of a systematic use of large samples, and it cannot
produce generalized conclusions.
Keeping these limits in mind, it’s difficult to give a statistical
representation of the true dimension of medical errors all over the world.
However, here we report some relevant data in order to focus attention on the
relevance of a cognitive analysis of clinical settings that will be described in
the next sections.
Generally speaking, an adverse event is an unintended injury or
complication that arises during health care management, resulting in death,
disability, or a prolonged stay in the hospital. Hospital chart reviews in various
countries indicate that adverse events in acute care hospital admissions range
from 2.9% in the United States to 16.6% in Australia. A study in Canada,
covering hospital admissions during the year 2000, found an adverse event rate
of 7.5% of patient admissions [11]. Among these, 37% were preventable.
About 5% of them caused permanent disability, 1.6% were associated with death, and the majority resulted in a temporary disability or prolonged hospital
stays (averaging 6 additional days). It was estimated that 185,000 patients
suffer adverse events annually following medical and surgical admission.
Other studies pointed out that the adverse event rate in New Zealand was
12.9% of hospital admissions, while British studies suggest an adverse event
rate ranging from 5% to 10% for hospital admissions, half of which were
preventable and one-third leading to disability or death [12]. In conclusion,
adverse events during hospital admissions are a serious worldwide problem,
occurring in approximately 9% of all admitted patients and leading to a lethal
outcome in about 7% of cases. However, adverse events are the result of different factors, and only a part of them are actually linked to a purely clinical error (an adverse event may, for instance, be caused by a fall).
Considering clinical events, it is astonishing that the error rate seems to
remain constant over time and space, as demonstrated in two studies (one in
the US and one in Germany) that indicate how the error rate has not
substantially changed over the last three decades, remaining firmly anchored
in both countries at a rate of around 10%, although a recent systematic review
reported a higher rate, more likely between 8.3% and 24% [13].
The diagnostic process is probably the most relevant component in medical decision making from a cognitive point of view. In fact, physicians
need to work as an information processing system, able to collect data from
the environment, infer judgments from contingent cues, and produce scenarios
and proper actions. In each of these steps a physician is guided by the functioning of his/her mind. This means that correct or wrong judgments and decisions are the
consequence of a given biased or unbiased cognitive course.
Hence, which skills should a physician develop in order to avoid errors?
Following a classical view, a physician should be a logical decision
maker, using his/her cognitive skills in a rational way, combining information,
data and probabilities as a computer would do. Is this a realistic picture?
Should a physician only be trained to use logical skills?
In the next paragraphs we will try to provide a wider picture of the
complex reality of medical decision making, showing which factors contribute
in shaping diagnostic reasoning.
We believe that the safety and quality of care are not only a matter of
technical skills, scientific knowledge and guidelines. In this sense, the medical
domain represents an intriguing field for applying cognitive science. In fact,
cognitive science may help physicians properly apply their knowledge in
searching and filtering through the clinical and laboratory data, in developing a
differential diagnosis, and, finally, in making a rational decision without
falling into dangerous cognitive traps.
Much research has been devoted to this important topic, but little is still known about the deep nature of the diagnostic process, both when it succeeds and when it fails. Diagnostic error accounts for a substantial portion of all medical errors and has received increased attention in the last 30 years [14].
In particular, the current debate is focused on the importance of
competence certification in order to minimize the risk of a diagnostic error. At
least four areas of competence were identified as necessary [15]: medical
interviewing, physical examination, data analysis and clinical judgment.
We completely agree that a physician must be competent in these areas to
minimize error risk, but it is highly likely that this is not enough: errors are
due not only to a lack of competence and knowledge; cognitive biases are
often as important as the lack of specific knowledge in contributing to medical
errors [16, 17].
2.1. Diagnostic Errors
In his classic studies of clinical reasoning, Elstein [18] estimated the rate
of diagnostic errors to be approximately 15% of all diagnoses, fairly consistent
with the 10% to 15% error rate determined in autopsy studies [19].
However, other researchers have found more alarming data. The study by Leape [20] showed astonishing numbers in the domain of diagnosis; the author,
in fact, quoted several autopsy studies with rates of missed diagnoses as high
as 35-40%.
Consistent evidence came from a patient survey in which 35% of respondents reported having experienced at least one medical mistake in the previous 5 years (involving themselves, family or friends); 50% of those errors were described as a misdiagnosis [21].
Shojania and colleagues [22] reviewed autopsy studies focused on the
diagnosis of pulmonary tuberculosis, highlighting that almost 50% of these diagnoses had not been made ante-mortem. A similar percentage was found
by Pidenda et al. [23] analyzing cases of fatal pulmonary embolism.
In an American study on botulism involving 129 infants hospitalized in California during a 5-year period, only 50% of the observed cases were given a diagnosis of infant botulism upon admission.
Finally, Schiff and colleagues [24] analyzed 583 physician-reported errors
by surveying physicians from 22 hospitals across the US. The survey indicated
that 69% of reported errors were moderate or severe and that most of the
missed or delayed diagnoses were pulmonary embolism and drug reactions,
while stroke and coronary syndrome had a significant, though lower, frequency.
It is interesting to note that most of the reported errors were attributed to a problem in the analysis of information. Surveyed physicians admitted failures or delays in identifying significant clues and in prioritizing clinical information, pointing to cognitive-related difficulties. Even limiting ourselves to these data, it appears evident that diagnostic errors have a significant impact on the whole healthcare system.
According to Graber and colleagues [25] diagnostic errors in internal
medicine may be described using a three axis taxonomy. Each axis describes a
possible source of error:
x) cognitive-related factors (e.g., bounded rationality, availability heuristic, motivation and so on);
y) system-related factors (e.g., lack of procedures, noisy environments, communication management and the use of technology);
z) no-fault factors (e.g., a particularly rare presentation of disease or third-person responsibility).
The authors found that in a sample of 100 errors, 46% derived from a combination of cognitive and system-related factors, while 28% could be completely accounted for by a cognitive failure. Combining these three factors
we should be able to tag all possible errors, even though the cognitive domain
must be semantically extended to include emotions, moods, motivational
aspects and physiological factors (fatigue, stress and arousal) as we have
previously seen.
In this way, the model becomes very similar to Reason's Swiss cheese model [26], in which the author suggests that adverse events result from multiple breakdowns in the series of barriers that normally prevent mistakes. Similarly,
in many clinical cases a diagnostic failure is the result of the combination of
latent and contextual elements of both system-related and cognitive-related
factors. This model strongly suggests the use of a system approach as
previously stated.
2.2. The Medical Choice: Beyond the Given Data (Toward
A patient entering the hospital is an information carrier about his/her
health status. This information, after being processed at a system level through
some organizational routines, reaches the medical staff. The clinical process
then starts, following a simple or complex route depending on the case and the
particular context.
To appraise this journey, we need to focus on how the information
“becomes” a mental representation used to give rise to a specific diagnosis. To
do this we introduce here the cognitive concept of framing, the starting point
for any diagnostic course.
Framing is an automatic, fast cognitive process that leads a physician in
the search for both general and specific information coming from a variety of
different sources: patients, family, colleagues, laboratories, nursing staff,
specific databases and tacit knowledge. All this information needs to be
weighted for relevance and tested for reliability before being integrated into a
given template or, more properly, in a mental model [27]. Doctors need to
build one as soon as possible in order to simplify the situation, delete the
“unnecessary” information and guide future actions.
The diagnostic process starts with the activation of the first mental model.
In fact, using this mental structure based on schemes stored in the long-term
memory, a physician may evaluate the consequences of each possible choice
(diagnostic or therapeutic interventions), in order to plan future actions, choose
scenarios, or even partially or entirely review the active mental model in a
recursive process.
This process apparently requires only the use of a formally defined logical
path. In reality, each step of this cognitive course is modulated by several
factors, some individual-related or linked to interpersonal exchanges, others
context-related, e.g., working conditions, interruptions, time, availability of
beds, diagnostic tools and general resources. The combination of these factors
can significantly affect the linearity of the process and, in some unfortunate cases, make it a real hazard for both doctors and patients.
In particular, initial data processing constitutes a really insidious trap.
Indeed, some studies found that cognitive failures happen particularly in the
information synthesis stage, when a physician is called to process and combine
information and data.
Furthermore, physicians are reported to be particularly poor at evaluating different causes and factors simultaneously, leading them to choose only one possible sense-making process in order to simplify the clinical situation instead of integrating information in a complex manner. Naturally, an excessive
simplification, or the inability to take more than one possible pathway into
consideration, may leave the diagnostic process vulnerable to a number of
cognitive biases.
Hence, the use of a poor mental model (due to a lack of experience or
other factors, such as overconfidence) may be considered the weakest point of
a diagnostic process since it might direct the physician’s judgment on a biased
journey where attractive cognitive shortcuts may cover up dangerous
behavioral mines.
It is clear that cognitive biases can occur at all stages of medical decision-
making, but those related to the framing process are probably the most
significant. For instance, Wilson [28], through an empirical study, estimated
that one in six medical errors occurs in synthesizing available information
(implicitly organized in a mental model) or in deciding by means of that model.
Nevertheless, all these constraints are implicit and physicians are rarely
aware of their effect. No wonder, then, that in a study by Blendon [29] doctors were inclined to indicate the main sources of error as insufficient nursing staff and excessive workload, while they made no mention of possible causes of a cognitive nature. Doctors in this study shifted their attention
toward aspects that are difficult to control or, more generally, related to the
organization of work (i.e., at a system level). This finding reflects a widespread tendency to deem medical errors attributable to contingent factors, resources, time constraints, incompetence or negligence [30].
It is particularly interesting that physicians often fail to realize the impact
of implicit mechanisms on their decisions. Indeed, when asked, physicians often state that they systematically use analytical reasoning when making a decision, indicating the existence of a clear mismatch between beliefs and actual behavior. This finding has often emerged in other contexts involving expert
decision makers, especially when facing uncertain decisions. For instance, if
we asked professional poker players to describe how they make decisions
when playing, they'd probably refer to game theory or to other analytical models; unfortunately, this is only the visible side of the moon. Actually, in complex and uncertain situations, people have particular difficulties in fully accounting for how they decide, and thus the use of post-hoc explanations (based on rational considerations about past events) is the rule most of the time.
In this context, justifying a choice is much easier and more convenient
than understanding it; this is another source of decision biases. As a matter of
fact, when decision makers know they must be accountable (i.e., that someone could ask for a justification of their decisions), the decision could be affected by a justification bias that could sound like this: “Opt for the more justifiable alternative”. In this way, the favored option tends to be the most justifiable one and not necessarily the most rational. This could be an
extremely attractive pathway for doctors since they are generally called to
explain their choices within a frame of legal and personal responsibility.
Unfortunately, when the cognitive and implicit nature of a choice (being good
or bad) is not recognized it becomes very difficult for a doctor to determine the
real sources of an error, since one’s own choice is generally perceived as
coherent with previous experiences.
In particular, physicians often show they are led by the desire to confirm a
preliminary diagnosis, thus failing to seek evidence that could confute it. This is the
milieu of the so-called confirmation bias. This bias was found to be
particularly insidious in clinical settings. For instance, a recent study on
German psychiatrists and students [31] found that a total of 13% of the 75
psychiatrists and 25% of 75 students showed confirmation bias when
searching for new information after having made a preliminary diagnosis.
Furthermore, subjects involved in a confirmatory strategy were significantly
less likely to make the correct diagnosis compared to participants using a
confutation or balanced search strategy. In fact, 70% using the confirmatory
strategy failed to arrive at the correct diagnosis, whereas the opposite figure (27% wrong and 73% correct) was found in psychiatrists using confutation or
balanced search strategies. Similar data strongly suggest that confirmation bias
greatly increases the risk of incorrect diagnostic decisions.
Besides confirmatory search strategies, many cognitive shortcuts are used by physicians to close an issue (i.e., a clinical scenario), thus saving time as well as cognitive and emotional resources. The study of these simplification strategies, mostly implicitly activated, is a main topic of the cognitive approach to medical decision making. It is interesting to note that the simplification processes are inevitably the result of complex, rapid data processing in which some information is sought, requested, filtered, distorted or even erased, in a way similar to the one described by prospect theory.
Indeed, the physicians’ mind activity may also be described as a top-down
process based on specific cognitive heuristics [33, 34, 35, 36, 37]. Since
heuristics are rules of thumb that facilitate our cognitive work, providing
inexpensive and effective ways for solving complex problems in a short time,
it is arguable that doctors often rely on them even though they are not aware of
it. Several heuristics that apply to medicine are reported in literature [38],
revealing that more than 40 different cognitive biases may affect clinical
reasoning [39, 40, 41].
Here we will describe the most commonly found effects, starting with the
well-known five cognitive traps [42] and concluding with more specific biases.
2.3. Cognitive Heuristics and Biases
Availability heuristic. Information, data, risk probabilities and anything
that can be associated with a decision are processed using knowledge already
stored in memory that is easy to recall. Thus, the probability of an event, for
instance, may be evaluated by the ease of recalling information in some way
related to that event.
This heuristic leads physicians to avoid a formal process for the estimation
of disease frequency in a given population. Furthermore, when evaluating
signs and symptoms of a patient a physician could be led by the ease with
which he/she can recall a particular diagnostic hypothesis. The availability
heuristic can result in flawed clinical judgments by various mechanisms. For
example, it may overemphasize a physician's past experience (e.g., the number of cases encountered recently could direct attention to some specific hypothesis, underweighting actual differences) or lead to underestimating the importance of the actual rates of disease presentation.
Representativeness. Basing judgment on representations excludes
considerations concerning probabilities and leads to a fast diagnosis that may
suffer from a severe evaluation error. Indeed, the representativeness heuristic
leads to estimating probabilities of an event based on its similarity to a
conventional framework without considering base rate information or other
significant data.
The representativeness heuristic works simply by associating the actual
occurrence of an item (e.g., a given symptom or sign as well as a whole
clinical picture) with an already available (in memory) prototypical item of a
certain category. From a cognitive point of view, then, it consists of a
fundamental mechanism able to improve our data processing and storing. In
fact, matching two patterns (one in the environment and the other in the long
term memory) using just a few cues lets us activate associative memories,
generating implications and hypotheses, and preparing actions with little effort
in a very short time. Unfortunately, a similar mind mechanism could lead a
physician or nurse toward a severely distorted clinical judgment.
In fact, when we read a clinical scenario we are attracted by some
information significantly associated with what we think represents a prototypical
situation (e.g., a given pathology). However, this fact can lead to an
unsatisfying judgment course from different perspectives:
- Information considered to be significant for its general representativeness could instead be irrelevant data in the actual presentation. Some data would be overweighted and others, potentially significant, would be underweighted.
- The prototypical description of a disease could be biased due to our experience (probably limited only to some particular presentations), our recent activities, or a rapid and/or unnoticed change of environmental factors (e.g., a common pathology could change its typical presentation due to a change in climate).
- Representativeness may be strengthened by overconfidence (see later).
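The base-rate neglect that the representativeness heuristic induces can be made concrete with Bayes' rule. The numbers below are hypothetical, chosen only to show how a "textbook" presentation can still leave the disease unlikely when its base rate is low:

```python
def posterior(prior: float, sensitivity: float, false_positive_rate: float) -> float:
    """P(disease | finding) by Bayes' rule. Judging by similarity alone
    amounts to looking only at the sensitivity (how typical the picture is
    of the disease) while ignoring the prior (the base rate)."""
    p_finding = sensitivity * prior + false_positive_rate * (1.0 - prior)
    return sensitivity * prior / p_finding

# Hypothetical case: a presentation that matches the disease in 90% of
# true cases but also appears in 10% of patients without it. With a 1%
# base rate, the posterior probability is only about 8.3%.
p = posterior(prior=0.01, sensitivity=0.90, false_positive_rate=0.10)
```

Matching the picture to the prototype behaves as if the answer were close to 90%; weighting in the base rate pulls it down by an order of magnitude.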
Anchoring heuristic. This heuristic consists in the tendency of a doctor to not take into account the most appropriate diagnostic information, being instead attracted by apparently salient information in some way highlighted by the situation. When a person gets anchored to some data or hypothesis, the subsequent cognitive course is automatically referred to that anchor, so it is difficult to consider alternatives beyond it.
In particular, the anchoring effect occurs when clinicians faithfully adhere to an initial impression even when conflicting and contradictory data have been accumulating. Indeed, the first impression, so important in clinical settings, may give rise to a strong anchor that becomes the primary reference point when evaluating subsequent information. Furthermore, the anchor will guide the diagnostic process by suggesting tests and examinations potentially able to confirm the initial hypothesis while discarding alternative clinical courses.
Confirmation bias. When a diagnostic hypothesis is developed on a weak or ambiguous basis, the tendency to seek confirmations rather than refutations leads physicians to ignore information that may generate some kind of conflict with the first idea. Hence, the confirmation bias reflects the tendency to seek verification of one's certainties while avoiding conflicting information and judgments.
In particular, the confirmation bias acts by leading physicians to seek and actually use information that fits with their existing expectations. In doing so, they will try to test the first hypothesis by looking for clinical examinations compatible with it, while ignoring other possibilities that could instead lead to a falsification of the initial frame.
The cognitive mechanisms underlying the confirmation bias may probably be found in the higher imaginability of confirmation pathways as opposed to those for refutation. For instance, it is surely easier for a physician to think about a clinical test potentially able to confirm a diagnosis rather than a test associated with alternative and incompatible ones. In clinical settings, the confirmation bias is particularly important since it is frequently found to be the cause of a delayed or missed diagnosis. However, the confirmation bias may also affect other aspects of clinical practice. For instance, it may impair the ability of a nurse to realize that he/she is actually providing the wrong therapy, by attracting the nurse's attention only to information confirming his/her conviction and literally erasing from mind other data (e.g., information reported on a label) able to disconfirm it.
Premature closure. It consists in the tendency to close a clinical case as soon as possible, providing the physician with a cognitive and emotional discharge. Premature closure directs a physician toward a quick diagnosis (often based on a pattern recognition mechanism), failing to consider other possible diagnoses and stopping further data collection. A doctor seems to jump to a conclusion to satisfy a cognitive need.
Premature closure sometimes ends the diagnostic journey even before a
suspected diagnosis is actually confirmed by an appropriate clinical
examination. This biased process may be due to contextual factors (temporal
constraints, work-related stress, interpersonal dynamics and the like) and/or to
individual characteristics. In particular, premature closure can be linked to the
so-called need for cognitive closure personality trait [43].
Need for closure (NFC) refers to the notion that some situations elicit an epistemic state of wanting a quick solution. Individuals with a high need for cognitive closure may be more likely to use cognitive heuristics in making judgments and decisions than individuals with a low need for cognitive closure. Low-NFC individuals postpone judgment until they have processed as much information as possible, or until time and energy are depleted.
It is possible to state that the NFC trait may be an important issue to address in clinical settings since it could increase the risk of a biased judgment in some particular conditions.
For instance, a physician could feel so uncomfortable with a given clinical
case as to develop a strong need for a cognitive closure before having a clear
awareness of the entire situation. This need could be the main cause of a
misdiagnosis. It is obvious that the higher a physician is in NFC, the more
likely contextual factors will promote a premature closure bias.
A particular kind of premature closure is the so-called diagnosis momentum bias, consisting of closing a clinical case using arguments stated by colleagues or experts even when none of these statements really proves the correctness of the suggested conclusion. In this bias, cognitive and social aspects interact in promoting a premature, unmotivated closure.
The blind obedience effect may also contribute to premature closure. Actually, an excessive or unmotivated trust in others' opinions or in the statistical power of a clinical examination may induce the closing of a case, even when further tests are needed to achieve a final decision.
Overconfidence bias. It consists in the tendency of decision-makers to be
confident in their judgment ability beyond any rational consideration. Well-
calibrated judgments are actually the exception. Generally speaking,
physicians have been shown to be overconfident in their own decision skills.
For example, one study [44] concluded that on average when doctors were
88% confident that their patient had pneumonia, in fact only 20% of such
patients had pneumonia. Being overconfident with a clinical judgment may
imply several dangerous consequences. For instance, a physician may fail to
search for further information or consult an expert colleague due to an
overconfidence bias.
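The pneumonia finding above is a failure of calibration: stated confidence far exceeds observed accuracy. A minimal sketch of how such a calibration check could be computed follows; the function name and the data are hypothetical, chosen only to echo the 88%-vs-20% figure from the study cited:

```python
from collections import defaultdict

def calibration(judgments):
    """Group (confidence, correct) pairs by stated confidence and
    return the observed accuracy for each confidence level."""
    bins = defaultdict(list)
    for confidence, correct in judgments:
        bins[confidence].append(correct)
    return {c: sum(outcomes) / len(outcomes) for c, outcomes in bins.items()}

# Hypothetical data echoing the study above: five diagnoses stated
# with 88% confidence, of which only one was actually correct.
data = [(0.88, True), (0.88, False), (0.88, False), (0.88, False), (0.88, False)]
print(calibration(data))  # observed accuracy 0.2, far below the stated 0.88
```

A well-calibrated judge would show observed accuracy close to stated confidence in every bin; large gaps in the direction shown here are the signature of overconfidence.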
Illusory correlation. It consists in the tendency to find correlations
between events either weakly associated or not correlated at all. For instance,
if a doctor is confident that a therapy is curative in a given clinical scenario,
he/she will tend to correlate any clinical improvement to the effect of the
administered therapy. This is particularly evident in ambiguous clinical
presentations where a clear diagnosis is not stated nor is it possible to
prescribe a therapy proven to be efficacious. A spontaneous improvement
could then be read as the causal effect of an action whilst it could be just due
to the reaction of the organism. It is clear that a similar false correlation might
undermine our ability to understand the situation and learn through clinical
experience. Illusory correlations can lead physicians to develop biased learning as well as false beliefs that are difficult to test or defeat in the future.
Attribution error. This bias involves the use of negative stereotypes that
lead clinicians to ignore or minimize the likelihood of a serious disease. For
example, a doctor might assume that an unresponsive patient with an odor of alcohol is “just another drunk”, missing a diagnosis of hypoglycemia or not recognizing the consequences of an intracranial injury.
Affect heuristic. Emotional cues may heavily affect judgment ability and
consequently introduce contextual disturbance factors. These factors may play
a major role in the development of the first impression and in guiding the
subsequent judgment process. For example, specific clinical presentations as
well as personal characteristics of patients may convey lesser or greater
degrees of affective reactivity thus influencing a physician’s judgment. In this
case, the term visceral bias is often used.
Slovic and colleagues [45] have concluded that the affect heuristic influences the risk perception process, since emotionally salient information has the ability to grab attention, while information with less emotional content is not perceived as risky. It is easy to see that the affect heuristic may imply the underestimation of the importance of some clinical data, independent of any scientific arguments.
Information processing biases. Physicians, nurses as well as patients often
fail in processing health-related information. In particular, the use of numbers,
statistics and math rules might introduce severe difficulties in making and
sharing good choices and, more specifically, in shaping a diagnosis.
A number of studies have shown that conditional reasoning is particularly difficult. Thus, when asked to estimate the probability of an occurrence under some specified condition, physicians often show erroneous judgment. This is probably due to the base rate neglect bias, which is the inability to consider a priori probabilities.
For instance, imagine that a clinical test for cancer is 80% reliable. Now
suppose that a patient had a positive test. What does it mean?
Most people (and physicians) use the reliability datum (80%) as an
anchor, adjusting the estimation around it. In this way, the base rate
probability (that is the probability that a specific patient has cancer, given
some signs and symptoms without considering the diagnostic test) is
discarded. In some situations, the true conditional probability and the
estimated probability without considering a-priori information may be
completely different. In particular, physicians were often found to overweight
positive tests without completely appraising their statistical value.
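The cancer-test example can be made concrete with Bayes' theorem. This is a minimal sketch under stated assumptions: "80% reliable" is read as 80% sensitivity and 80% specificity, and the 5% pre-test (base rate) probability is a hypothetical figure, not one from this chapter:

```python
def posterior(prior, sensitivity, specificity):
    """P(disease | positive test) via Bayes' theorem."""
    p_pos_given_disease = sensitivity          # true positive rate
    p_pos_given_healthy = 1.0 - specificity    # false positive rate
    p_pos = prior * p_pos_given_disease + (1.0 - prior) * p_pos_given_healthy
    return prior * p_pos_given_disease / p_pos

# With a 5% base rate, a positive result from an "80% reliable" test
# yields a post-test probability of only about 17%, not 80%.
print(round(posterior(0.05, 0.80, 0.80), 3))  # → 0.174
```

Anchoring on the 80% figure while neglecting the base rate is exactly the error described above: the estimated probability and the true conditional probability can be completely different.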
The format of the information may also impact a physician’s ability to
provide a correct diagnosis or control a clinical process. For instance, if a
physician or nurse normally performs actions or makes decisions using a
specific data format (default format) or measuring unit, facing data reported in
other formats may introduce a severe difficulty in controlling the situation.
This effect is also insidious in therapy administration and is linked to many
adverse clinical events.
A more general effect associated with information processing is the so-called unpacking effect. When some options (or possible diagnoses) are clearly stated, they will grab a physician’s attention so as to increase their estimated probability with respect to other unstated options. In a famous study by Redelmeier and Shafir [46], physicians were shown to be inclined to overweight the probability associated with the options clearly stated, while the item “other possibilities” was systematically underweighted. This effect is probably due to an imaginability effect: the more difficult it is to imagine an option, the lower its estimated probability.
Loss aversion bias. When a choice is framed as potentially leading to losses, decision-makers tend to show a higher propensity for risk, while gain frames lead to conservative decisions. For instance, if a physician judges that a certain therapeutic choice could increase the survival of a patient with a severe diagnosis, the possible choice may be framed as “avoiding a loss”. In this way, the physician will take risky options into serious consideration and probably he/she will try to convince the patient to pursue them.
Conversely, when considering a therapeutic option as a way to improve quality of life, framing it as “increasing gains/advantages”, a physician would probably overweight risks relative to benefits, thus suggesting a more conservative or traditional pathway. This bias may explain why experimental and risky therapeutic choices are easily proposed (and accepted by patients) in severe, often hopeless, situations even though, rationally speaking, they represent poor choices. On the contrary, alternative medicine or non-traditional options are often declined even if they present few risks, just because people (both physicians and patients) have difficulties in weighing possible benefits in positive frames. For example, quality of life is often considered an important issue only in compassionate cases, while in most conditions physical health is considered the main outcome to pursue.
The omission bias is a correlated effect. It consists in the tendency to underweight risks associated with omission, i.e., the choice not to act.
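The asymmetry between loss and gain frames described above can be sketched with a prospect-theory-style value function. The functional form and parameter values below follow Tversky and Kahneman's classic estimates and are illustrative assumptions, not figures from this chapter:

```python
def subjective_value(x, alpha=0.88, lam=2.25):
    """Prospect-theory value function: diminishing sensitivity to
    magnitude, with losses weighted about twice as heavily as gains."""
    if x >= 0:
        return x ** alpha
    return -lam * (-x) ** alpha

# The same 100-unit outcome feels very different depending on the frame:
print(subjective_value(100))    # gain frame: about +57.5
print(subjective_value(-100))   # loss frame: about -129.5
```

Because the loss frame roughly doubles the subjective stakes, a physician who frames a therapy as "avoiding a loss" is pushed toward riskier options than one who frames the same therapy as a gain in quality of life.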
Self-serving bias. This bias consists in the tendency to consider problems (such as a diagnostic puzzle) only from one's own perspective. This bias poses important limits to communication and information exchange between experts and clinical teams. However, patient/doctor communication may also suffer from this bias, especially in conditions where a joint decision is required.
Taken as a whole, cognitive heuristics can lead to systematic and predictable errors, or biases, that resemble optical illusions. Systematic and predictable cognitive biases in judgment and decision making may explain the persistence of the core of diagnostic errors and the difficulty of preventing them.
The use of heuristics by physicians as well as other professional decision-
makers is due to a basic principle of the human mind: the existence of two
separate ways of thinking. Assuming a dual model perspective will enhance
our understanding of medical decision making.
Dual process theories describe the existence of two interactive systems of
thought. System 1 is described as intuitive, automatic, implicit and associative
[47, 48]. System 2 corresponds to serial and slow processes activated when
necessary in order to approach new and/or particularly complex problems. It is
analytical in nature, requires a conscious effort to be activated, and it
consumes resources.
System 1 has also been described as having a phase-locked activation while system 2 shows more tonic activity [49]. This last feature also highlights the functional link between system 1 and other phasic systems, for example the emotional one. Roughly speaking, a physician may approach a diagnostic problem, like any other problem, using the analytical and/or intuitive mind.
Even though it could be argued that a rational diagnosis should be based
on analytical reasoning (that is the outcome of system 2), it is now recognized
that this is not necessarily the case. In fact, most diagnoses cannot be described as the result of an orthogonal process, that is, a geometrical analysis of factors arranged in lines and planes. Conversely, most diagnoses are the result of a set of choices that cannot be fully accounted for. Hence, it is likely that an important part of medical work is performed within the realm of intuition.
Indeed, though just a few years ago intuition was seen as the dark side of the mind, especially when applied to expert decisions, now the situation is completely different. Within dual process theories, intuitive thinking is no longer regarded as a hazard, but as a strong necessity to complete complex tasks in a short time.
3.1. Is There Room for Intuition in Medicine?
Modern medicine is often described as being dominated by the so-called
evidence based medicine (EBM) paradigm. In this paradigm, a decision is
considered the result of an analytical process. To understand the role that
intuition may play in medical decision making, we need to briefly describe the
implications of the EBM paradigm.
Only a few years ago, medical decision making took place in a social and cultural context dominated by the opinion of experts (so-called "authoritarian medicine"). The evidence was limited to single case reports or series of case studies, drawing on anecdotal knowledge or using simple observational, usually retrospective, studies.
While acknowledging that observations are a basic source of information
in science, it is also true that they are basically useful to generate hypotheses
rather than confirm them.
The first randomized study on the effectiveness of streptomycin in the treatment of tuberculosis dates back to 1947 (published in the BMJ in 1948) [50], and since then the quality of evidence has been improving, increasingly based on observations collected in a systematic and quantitative way. In this manner medicine became based on the authority of evidence.
In this paradigm each decision must be based on scientific data clearly indicating whether benefits outweigh risks. Furthermore, among the purposes of evidence-based medicine there is the assessment of the quality of the available evidence and the explanation of decision making based on data rather than opinions. We can say that the EBM paradigm implies the development and use of well-structured knowledge, acquired through systematic studies and described in a prescriptive way in clinical tools called guidelines. These tools should guide medical decision making so as to avoid cognitive biases and improper decisions.
It is difficult to determine who produced the first guideline: Moses with the ten commandments written on stone tablets (a sound guideline, we should say) or maybe Hammurabi, king of Babylon, with the code named after him, dating back to the 18th century BC. What is sure is that in the health domain the industry of guidelines is flourishing. However, linking Moses’ tablets with guidelines strongly suggests a normative approach to medical decision making, the same approach that is continually challenged in day-to-day clinical activity. Let’s start with a definition.
According to the US Institute of Medicine, a guideline may be defined as follows: "Recommendations for clinical behavior, produced through a systematic process in order to assist doctors and patients in deciding what are the most appropriate care arrangements in given clinical situations" [51]. The aim of guidelines is to improve quality, appropriateness and cost-effectiveness of health interventions, providing education and updates. Guidelines should be applicable, effective, reliable, reproducible, and flexible; they should also be written in an understandable way and updated periodically.
It is very important to emphasize that guidelines should not replace clinical judgment and reasoning. They have no legal strength, although during litigation they are often considered a normative reference point for defining good clinical practice. Each guideline, thus, should be considered as a decision support for an expert decision-maker. However, such a tool must be reliable and also compatible with medical education and day-by-day clinical practice.
Are we sure that doctors have the expertise and opportunity to properly
decide using evidence and guidelines?
Finding the ultimate answer is difficult, but most doctors seem to approach clinical puzzles preferably using shortcuts and heuristics, maybe just because they are obliged to behave that way. This may be due to a flaw in work organization, poor knowledge, or scarce time, or it could simply be explained by looking at the nature of the human mind.
The exercise of intelligence is generally considered an intentional, conscious, knowledge-based and logic-driven method of deciding. However,
achieving a diagnosis through a formal procedure can lead to errors due to
incompetence (lack of knowledge, erroneous understanding of probability and
the like), to cognitive limitations (inability to assign probabilities to different
pre-test diagnostic hypotheses, low level of attention and so on), to poor
information (e.g., tests are not perfectly accurate, guidelines are not reliable or
updated and so on).
In addition, the development of procedures and guidelines based on evidence requires the systematic acquisition of such evidence, another complex cognitive course, and the development of skills that help the physician manipulate and integrate data and information so as to match internal knowledge and individual competence with the external knowledge shaped into guidelines. Each of these steps implies that a physician uses complex processes that he/she is not completely aware of and that are not completely under conscious control, especially under temporal constraints.
The whole task requires a mental capacity that cannot be reduced to pure rationality, since it is also automatically enriched by the activity of unconscious processing (not based on formal logic), often metaphorically called "gut feeling".
For instance, after having considered all the pros and cons of laboratory
tests as well as signs and symptoms, an expert clinician might arrive at a
conclusion that despite everything "this patient really does not convince me”,
then choosing a pathway different from the one prescribed by a guideline.
The terms "gut feeling" and "intuition" are used interchangeably to refer to a judgment that quickly emerges into consciousness, that we are not fully aware of, but that has enough strength to push aside additional doubts and is potentially able to produce new and non-predictable (creative) actions.
In the medical context intuition may be interpreted as a short circuit of
cognitive hypothesis generation in which the reasons of the hypothesis are not
easily understood. It should be added that intuition uses seemingly small fragments of information which are activated in an unexpected, unpredictable way and can be decisive in difficult cases.
To better understand the phenomenon of intuition in medical choices, we present a classic scenario, called "Diarrhea and strange behavior in a 3-year-old girl".
A doctor is called by the mother of a three-year-old girl. The mother reports that her daughter has diarrhea and that she is behaving "strangely". The doctor knows the parents are not alarmists. Thus, remembering that in children the initial symptoms of meningitis are nonspecific, he stops visiting other patients and goes to visit the child, subsequently deciding to ask for a lumbar puncture. The child had meningitis.
The above example is a classic intuitive diagnostic success. The doctor
has been guided by the availability heuristic and the result was a good choice.
Previously we have seen, however, that some errors can also be produced by
invoking this heuristic: we can now conclude that a systematic reliance on
intuition is similar to the reliance on one’s own "lucky star", but sometimes
intuition is a powerful tool, considering that in medicine many factors may
alter the rational decision model described in guidelines. Furthermore, the
uncertainty and ambiguity of many clinical pictures could delay or even stop a
decision process without the unconscious guide of intuition.
Summing up, we may describe clinical intuition as follows:
- It is a quick, unconscious process, particularly useful for medical experts but in some situations dangerous (especially for novices).
- It is a context-dependent process. Intuition is not a general skill or knowledge, but it is tied to a particular expertise domain. Similar to the so-called at-a-glance judgment, intuition works well in the context in which it was first developed and is not easily generalized to other contexts. Failing to recognize this feature of intuition may introduce dangerous biases or actual mistakes into clinical reasoning.
- It is supported by accumulated experience and grounded in the individual's implicit knowledge, meaning the amount of information organized in schemas and models that one uses without awareness in approaching the world, solving problems or developing judgments.
- It activates selective attention to seemingly minor details or cues.
- It can correctly interpret and integrate many complex pieces of data without effort and awareness.
- It may be behind the first generation of diagnostic hypotheses, and then it should act systematically in the initial phase of the clinical work. However, it also occurs at advanced stages, when the formal process finds no way to solve a puzzle, or when it leads to unconvincing outcomes. Intuition may overcome normative procedures since it can generate hypotheses even from seemingly minor details that are not considered interesting or useful by the majority of experts and/or procedures involved.
We can conclude that a clinician, as a kind of detective, should be induced
to cultivate and develop intuitive skills and use them proficiently in his/her
clinical practice.
However, in one of our studies (unpublished data) we have found that
physicians rarely recognize the weight of intuition in making decisions during
their activity. Actually, physicians mostly report their decisions to be based on
procedures and guidelines (local, national or international), not recognizing the
importance of more implicit cognitive processes.
At the same time, physicians reported that due to time constraints the possibility of directly consulting all the published evidence and guidelines before making a clinical decision is severely limited. Thus, physicians seem to suffer from some typical cognitive biases: they generally reveal overconfidence about the rational status of their decisions; at the same time, descriptions and explanations of their clinical activities seem to suffer from a hindsight bias, since they start from an outcome to justify a choice instead of focusing attention on how things really happened in their mind!
The need to advance physicians' awareness concerning the role of intuition in their day-to-day practice is therefore clear. Furthermore, physicians should be trained to appreciate the role of intuition and to balance intuitive and analytical diagnostic skills. This is particularly evident if we consider the role of work experience and age in diagnostic abilities.
For instance, novices seem to need to rely on analytical methods more than experts do, since the former must develop valid tacit knowledge before they can go for intuitive decisions. Gabbay and Le May [52], for example, describe the need of novices to openly use guidelines, a process that requires the activation of system 2, whereas most expert doctors prefer to rely on mindlines, subtle strategies developed through experience and heavily based on system 1, though not exclusively. However, not all empirical results converge on this topic [53]. Furthermore, the success or failure of a diagnosis does not lie in the use of system 1 or system 2. Even when following formal and standardized procedures (strongly relying on system 2), a physician may fail to arrive at a good diagnosis. However, the two systems serve different aims and they are affected by different biases as well as emotional and contextual interferences. In some studies, doctors are reported to be particularly faulty in giving a diagnosis when using system 2.
System 2, in particular, processes data in a more abstract form, so it is relatively independent of contingency and marginal clues. Thus system 2 is the ideal, logical place for hypothetico-deductive reasoning. Conversely, system 1 is strongly affected by contextual factors (e.g., the particular context in which a consultation occurs) and the emotional valence of the situation. This does not mean, however, that system 1 is purely emotion-driven or that system 2 is isolated from the emotional system.
On many occasions a decision-maker is not even aware of the fact that system 1 has already started working and that the final decision will be made on the basis of implicit processes. This is due to the fact that system 1 is phase-locked, acting in a bottom-up and rapid fashion. In this way implicit cognitive distortions may produce biased reasoning that is difficult to correct, much like an optical illusion.
Both systems, then, must be considered as important tools of a physician’s
mind toolbox. As a matter of fact, a physician is required to develop
knowledge and skills based on logic and evidence, enabling the use of formal
procedures and data manipulation. However, the intuitive system should also
be trained and empowered since its activation will contribute to most decisions
a physician will make during their career.
We may conclude that doctors probably perform better when using
guideline procedures (system 2) in clear scenarios, and pure intuition in
ambiguous cases (system 1). This is particularly true for novices. Expert doctors, instead, often perform better in so-called recognition-primed decisions [54], a particular case of decisions in which system 1 and system 2 work in synergy, giving rise to successful decisions with little effort even in pressing situations.
3.2. Summing Up
As we have described in the previous paragraphs, a number of studies have been conducted in the last 10 years with the aim of discovering the sources of diagnostic errors.
Croskerry [55] summarized several cognitive failures, biases and strategies that implicitly affect physicians' judgment ability. We have presented some of them above.
Research generally suggests that physicians should be trained in cognitive psychology, learning to use debiasing techniques such as metacognition (e.g., teaching physicians to ask themselves “What alternatives should be considered?” before closing a case), high-fidelity simulation, and the ability to critically reflect upon one’s own practice for developing and maintaining medical expertise throughout life [15, 39, 40] (see Table 1).
However, the effective contribution of the several debiasing methods described in the literature to the diagnostic process is not clear. Furthermore, little improvement was observed after these studies, suggesting that further research as well as intervention strategies are needed [56].
Table 1. Frequently used cognitive debiasing methods

Cognitive awareness. Producing and extending the use of clear and synthetic descriptions of cognitive mechanisms, biases and heuristics particularly relevant in medical decision making. Clinical cases and examples should be used to enhance comprehension of cognitive-based errors. Physicians should be advised on how to avoid cognitive pitfalls by enhancing their cognitive awareness. Furthermore, specific training should help physicians go beyond their first choice and stop automatic decisions.

Metacognition. Metacognition is a critically important, yet often overlooked component of learning. Effective learning involves planning and goal-setting, monitoring one's progress, and adapting as needed. Health personnel should be trained to think about what they are going to do and what is going on in the actual clinical setting. Problem-solving and decision making should be considered from both internal and external (metacognitive) perspectives.

Teaching to appraise reasoning and memory traps. Physicians should realize the cognitive mechanisms implied in memory and reasoning as well as their hidden hazards. In this way, a physician may decide to use support systems to enhance cognitive abilities, memory and reasoning skills.

Feedback information systems. In order to enhance the possibility of learning through experience it is necessary to provide operators with reliable, systematic and well-timed feedback. Physicians need to completely appraise the consequences of decisions and they must learn how to use this information to adapt behaviors and enhance decision making abilities. Feedback support systems should be introduced in clinical settings.

Simulation. Using clinical scenarios where reasoning and decision pathways may be manipulated to allow physicians to experience difficult situations linked to cognitive traps and irrational decisions. How to avoid errors or how to react to a wrong decision should also be shown. Simulation methods may be implemented using video, role-play, immersive technologies and the like.

Task and information structuring. Since many biases arise from information management, it is necessary to develop procedures and support systems to reduce the impact of information format and presentation on decision abilities. Clinical tasks should also be structured and organized to fit with mind functioning.

Non-technical skill training. Physicians should be trained in so-called non-technical skills, including stress management, communication, and situation awareness management. The organization of work should also be adapted to prevent temporal stress and physiological fatigue and to improve communication and situation awareness management.
The failure to advance in the field may be linked to the intrinsic difficulty of conducting research. Hindsight bias, in particular, might guide research work by framing hypothesis development, data collection and interpretation. Data reliability and research methodologies are often too limited to let us really understand the deep nature of errors [57].
However, following several authors [58, 59, 26], a limited number of
functioning principles may be invoked to explain a large portion of diagnostic
errors: premature closure (the tendency to stop considering other possibilities
after reaching a diagnosis) and overconfidence (the tendency to overestimate
one's judgment ability). Interestingly, experienced physicians are as likely as
novices to exhibit premature closure, and older physicians may be
particularly predisposed to both premature closure and overconfidence,
probably because of age-related cognitive constraints and the way expertise
develops [60].
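As an informal illustration (ours, with hypothetical numbers, not data from the cited studies), overconfidence is often operationalized as a calibration gap: the difference between a clinician's average stated confidence and his or her actual diagnostic accuracy.

```python
# Overconfidence as a calibration gap: mean stated confidence minus
# actual accuracy. All numbers below are hypothetical, for illustration.

def calibration_gap(cases):
    """cases: list of (confidence in [0, 1], correct: bool) pairs."""
    mean_confidence = sum(c for c, _ in cases) / len(cases)
    accuracy = sum(1 for _, ok in cases if ok) / len(cases)
    return mean_confidence - accuracy  # > 0 indicates overconfidence

# A physician who is 90% sure on average but right 60% of the time:
cases = [(0.9, True), (0.9, False), (0.95, True), (0.85, False), (0.9, True)]
gap = calibration_gap(cases)
print(round(gap, 2))  # positive gap -> overconfident
```

A well-calibrated judge would show a gap near zero; a positive gap is the quantitative signature of overconfidence.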
But why just these two, when many other biases are so important in explaining
diagnostic errors? The reason probably lies in the deep nature of these
biases. Let us approach them separately:
Premature closure is a goal-directed general tendency of the cognitive
system. It is the result of our experience, our limits (both cognitive and
behavioral) and our goals. Thus we can trace the roots of premature closure
to basic cognitive mechanisms as well as to motivational and social aspects (see
Figure 1). At various levels, many factors push for premature closure in a
clinical setting. Some of these levels may be controlled; others cannot.
For example, the drive toward premature closure may derive from a strong
need for emotional and cognitive discharge. When a physician
approaches a new case, the cognitive load increases. If this load remains active
for a long time it may lead to cognitive stress, since it requires resources,
taking time and energy away from other tasks. At the same time, a physician
may experience an emotional charge when a diagnostic puzzle appears difficult
to solve. The impact of this charge depends on several factors (personal,
interpersonal and contextual), but it is surely time-dependent. More
specifically, premature closure may result from the combination of
individual (need-for-cognitive-closure trait) and contextual factors. Thus, a
physician with a high need for cognitive closure will feel the necessity
to arrive at a conclusion as soon as possible, probably using all the
shortcuts available.
Figure 1. The interacting factors in premature closure.
However, contextual factors may also easily increase the need for closure.
For example, an emotion-eliciting clinical case might produce an excessive
emotional charge, thus increasing the individual's need for closure.
Cognitive load (amount of information, ambiguous presentation and the
like) may also feed the premature closure bias.
Intuitive thinking is clearly strongly associated with premature closure,
even if specific training could teach physicians both to trust their
intuition and to exert subsequent meta-cognitive control over it.
A similar schema may be elaborated for overconfidence (see Figure 2).
Overconfidence, too, is a consequence of a number of direct and indirect
drivers, including age and experience. In particular, the overconfidence bias
seems to be particularly salient in expert doctors, since they have developed
competence and a corresponding confidence. This is not necessarily associated
with an overconfidence bias, but the inability to focus attention on the subtle
cognitive mechanisms of the mind may easily mask the real complexity of
decision making. In this way, an expert doctor may give rise to a sort of
“personal mythology”, a set of false (or not-completely-true) beliefs and
illusory correlations useful in accounting for decisions and clinical
judgments, without a complete appraisal of the contributing factors.
Figure 2. Sources of overconfidence bias.
Claudio Lucchiari and Gabriella Pravettoni
In this way, overconfidence may easily arise, becoming a basic element of
the so-called dysrationalia, a term coined by Stanovich [61] to indicate the
relative independence of judgment performance from judgment ability.
Furthermore, since overconfidence and premature closure do not seem to be
two completely independent variables, we argue that they reinforce each
other over the course of a physician's career. This is likely due to the
functioning of the human mind, but also to the specific education and practical
experience developed. This link could explain why experience and age do not
necessarily correlate with accurate diagnoses. However, the tradeoff between
expertise and accurate judgment is not linear, being mediated by a number of
factors. This fact could explain why we often find divergent results in the
literature.
From the above considerations, we can argue that a proper approach to the
prevention of diagnostic errors should address overconfidence and premature
closure biases [62].
For instance, premature closure could be countered by requiring
physicians to explore a case in depth before moving on to the next one, even
when they feel confident about it. Similarly, overconfidence should be
challenged systematically by the use of automatic alerting mechanisms.
Educational training targeted at avoiding diagnostic failures through
cognitive knowledge and tools is also needed [63]. The use of systematic
checklists and computer-based aids, able to suggest alternatives by
highlighting relevant clues or incoherent choices, will increase diagnostic
accuracy. Indeed, one possible way to enhance clinical decision making is to
provide support based on schemas, diagrams, decision trees and algorithms,
which are supposed to improve decision outcomes by reducing the impact of
cognitive biases [64].
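As a toy sketch of such a checklist-style computer aid (the rule, the threshold and the field names are our own illustrative assumptions, not an existing system), a routine might flag possible premature closure whenever the recorded differential is short or key findings remain unexplained.

```python
# Hypothetical checklist aid: warns when a diagnosis is about to be
# accepted with few recorded alternatives or with unexplained findings.
# Threshold and field names are illustrative assumptions, not a real DSS.

def premature_closure_alerts(workup, min_alternatives=3):
    alerts = []
    if len(workup["differential"]) < min_alternatives:
        alerts.append("Consider more alternatives before closing the case.")
    if workup["unexplained_findings"]:
        alerts.append("Unexplained findings remain: "
                      + ", ".join(workup["unexplained_findings"]))
    return alerts

workup = {
    "differential": ["pulmonary embolism", "pneumonia"],
    "unexplained_findings": ["unilateral leg swelling"],
}
for alert in premature_closure_alerts(workup):
    print(alert)
```

The point of such a rule is not diagnostic intelligence but friction: it forces a brief System 2 check exactly where premature closure tends to occur.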
It is likely that the wide diffusion of easy-to-use electronic decision
support systems (DSSs) will positively impact error rates. A DSS works as an
advisory system, using clinical and epidemiological data together with expert
knowledge to offer real-time information useful to clinicians for the
management of a specific patient. However, a deep revision of these
instruments is also needed.
To proficiently use a DSS in medical decision making, the following are
needed:
− Developing awareness of the dynamic and uncertain nature of medical
knowledge, which implies the need to continuously upgrade research
expertise and to critically evaluate the quality and type of knowledge,
of scientific evidence, and of guidelines.
− Understanding the logical and probabilistic aspects of the diagnostic
process, then developing expertise in dealing with clinical data and in
assessing and handling potential artifacts and errors.
− Developing sound knowledge of formal decision methods in terms of
assessing the risks and benefits of alternatives.
− Finding connections between general knowledge and individual cases,
combining knowledge with background clinical data acquired on a
specific case.
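The formal assessment of risks and benefits mentioned above can be illustrated with a minimal expected-utility sketch (the options, probabilities and utilities are invented for illustration, not clinical recommendations).

```python
# Minimal expected-utility comparison of treatment alternatives.
# Each option is a list of (probability, utility) outcome pairs;
# all numbers are hypothetical.

def expected_utility(outcomes):
    """outcomes: (probability, utility) pairs; probabilities sum to 1."""
    return sum(p * u for p, u in outcomes)

options = {
    "surgery":       [(0.85, 0.9), (0.10, 0.4), (0.05, 0.0)],
    "watchful_wait": [(0.60, 0.8), (0.40, 0.3)],
}
ranked = sorted(options, key=lambda o: expected_utility(options[o]),
                reverse=True)
print(ranked[0])  # option with the highest expected utility
```

Even this toy comparison makes the trade-offs explicit, which is precisely what informal, intuition-only weighing of alternatives tends to hide.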
In this way, a DSS is supposed to improve medical decisions in the
following ways:
− Decreasing memory-based biases by providing reliable data.
− Improving the use of statistics and formal reasoning (e.g., Bayesian
reasoning), avoiding base-rate and format biases.
− Presenting data in a friendlier way, thus reducing the impact of
communication biases.
− Increasing the speed and reliability of data collection and integration.
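The support for Bayesian reasoning mentioned above largely amounts to making the base rate explicit. A minimal sketch (with invented test characteristics, purely for illustration):

```python
# Bayes' theorem for a diagnostic test: the posterior probability of
# disease given a positive result. Numbers are illustrative, not real
# epidemiological data.

def posterior(prior, sensitivity, specificity):
    """P(disease | positive test)."""
    p_pos = sensitivity * prior + (1 - specificity) * (1 - prior)
    return sensitivity * prior / p_pos

# Rare disease (1% base rate), test with 90% sensitivity, 95% specificity:
p = posterior(prior=0.01, sensitivity=0.90, specificity=0.95)
print(round(p, 3))  # 0.154
```

A clinician neglecting the base rate would intuitively expect something close to 90%; the explicit computation shows the posterior is only about 15%, which is exactly the kind of correction a DSS can surface automatically.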
However, in today’s clinical practice most physicians still lack this
knowledge and these skills, and even tools need to be improved to be more
user-friendly and fast to use in the clinical practice. Three main problems arise
when trying to introduce electronic decision making support tools:
1) Physicians are expected to actually and systematically use them, thus
changing their work habits. This implies that physicians must perceive
this change as an actual advantage.
2) Decision support systems must be proven efficacious, in the sense
that they actually achieve the goal of reducing medical error.
3) Medical decision making is framed in DSSs as a purely normative
process, that is, as a logic-driven pathway that helps a physician avoid
all possible cognitive traps, gut feelings included.
The introduction of DSSs is supposed to be a good way to avoid
cognitive biases, based on the hypothesis that analytical models are definitely
more efficacious than other methods, namely intuitive or heuristic ones. In
general this is proven true when we are able to fully understand the
whole clinical situation, when we have all the needed data and the time to
evaluate each alternative's payoffs, benefits and risks. This is not the case in
everyday clinical practice, in which uncertainty and time constraints are often
the rule.
In today’s practice the formal models of decision making still require a
greater dose of pragmatism to be translated into practical steps, which requires
a great effort to adapt them to clinical settings.
Furthermore, a motivational bias does exist. Many physicians, for instance,
perceive the role of the DSS as an imposition by others (hospital management,
new state legislation, the judiciary, mass media) on their clinical activity.
Today, physicians face conflicting pressures. On one side, they
have to practice medicine based on evidence of proven efficacy; on the other
side, they must rely on information that is sometimes of dubious quality, and
on evidence and guidelines that are not completely adaptable to real contexts.
The use of DSSs should help physicians overcome these conflicts, avoiding the
introduction of data management errors, cognitive biases, and the
contamination of intuitive mechanisms. Is this goal achievable, or even
realistic?
Looking at the specific literature, we cannot say that the introduction of
DSSs has had an actual impact on the rationality of medical decision making. In
fact, most research in this area has shown that DSSs work well in giving
physicians updates and easy-to-use information [64], thus structuring external
knowledge, but they are not considered a valuable decision making support.
These data likely indicate that a typical DSS cannot be considered the
ultimate debiasing system, since it obtains results only in specific and
limited cases and conditions. We argue that this partial failure of DSSs is
linked to an unbalanced approach to medical decision making, in which
analytical mechanisms are overweighted. This imbalance generates
consequences at different levels:
− Decreases spontaneous use and motivation
− Conflicts with the human mind's basic mechanisms
− Requires a radical change of habits
− Challenges personal expertise and professionalism
The failure of most debiasing methods is probably due to their basic
assumptions. We argue that a decision support system cannot disregard
physicians' expertise and/or their intuition. As recently pointed out by Norman
and Eva [65], upon reviewing the diagnostic error literature, physicians seem
to be particularly vulnerable to error when they try to be analytical, i.e., when
they try to force the use of System 2 instead of letting System 1 work naturally.
Indeed, most successful diagnoses are reported to be based on intuitive
judgment rather than formal reasoning. Since physicians rely heavily on these
abilities, often implicitly, and are particularly comfortable with them, it is
intrinsically difficult to challenge them. A successful debiasing system, then,
must take a more general perspective into account.
We claim that it is necessary to develop a conceptual model on which to
ground technological decision tools as well as learning instruments, fitting
the requirements both of physicians in their daily practice and of the human
mind's structure. Research on diagnostic error prevention will necessarily
pass through the development of health technology that can serve medical
decision making from different perspectives and at each stage of the process.
Physicians will have to learn to interact with technological decision aids and
artificial expert systems. At the same time, the health setting shall favor the
development of tacit natural knowledge to ground sound expertise and
intuition.
Future research will have to address these issues in order to
develop health-related technology able not only to provide structured
information and hints but also to generate a strong learning environment [66],
developing sound decision skills that use both System 1 and System 2. In
particular, we suggest that a wider use of mixed or balanced models will
positively contribute to reducing the medical error rate. We propose that the
use of fuzzy cognitive maps [67], a soft computing technique that combines
human expertise and analytical algorithms, would enhance physicians'
awareness of the diagnosis-related cognitive flow. Indeed, the
implementation of such maps requires a social-cognitive qualitative effort that
enhances personnel motivation and participation.
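A minimal sketch of how such a map works may help (the concepts, weights and update rule below are simplified illustrations in the spirit of Kosko's formulation, not a validated clinical model): concepts hold activations, signed weights encode expert beliefs about causal influence, and the map is iterated until activations stabilize.

```python
# Toy fuzzy cognitive map: concepts hold activations in [0, 1]; the
# weight matrix encodes expert-elicited causal influences; the map is
# iterated to a fixed point. Concepts and weights are invented.

import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def run_fcm(weights, state, steps=50):
    """weights[i][j]: influence of concept i on concept j."""
    n = len(state)
    for _ in range(steps):
        state = [sigmoid(sum(weights[i][j] * state[i] for i in range(n)))
                 for j in range(n)]
    return state

# Concepts: 0 = time pressure, 1 = need for closure, 2 = premature closure
weights = [
    [0.0, 0.7, 0.3],   # time pressure raises need for closure
    [0.0, 0.0, 0.8],   # need for closure drives premature closure
    [0.0, 0.0, 0.0],   # premature closure has no outgoing influence here
]
final = run_fcm(weights, [1.0, 0.5, 0.0])
print([round(v, 2) for v in final])
```

Because the weights are elicited from the clinicians themselves, building such a map is itself the social-cognitive exercise mentioned above: the model makes the group's assumed causal structure explicit and open to challenge.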
At the moment the use of such techniques is limited, since research is
directed toward simpler debiasing models or fully automatic data mining
processes. We believe that both premature closure and overconfidence could
be better challenged with a mixed approach, combining cognitive knowledge
with technological solutions.
In fact, most debiasing techniques, as well as decision support systems, fail
in their mission of reducing cognitive biases and, more generally, the medical
error rate, because they are perceived as external to, or incompatible with,
everyday clinical practice. Physicians must develop an intrinsic motivation to
use particular devices or procedures. This motivation requires increased
awareness of the impact of cognitive biases on medical errors.
However, this is not enough. In fact, specific knowledge and awareness
need to be matched with training and procedures able to promote a cognitive-
driven diagnostic process.
Thus new tools must be tested. In particular, we believe that these tools
must have both scientific validity and personal significance. In this sense, the
use of social-cognitive processes to give rise to contextualized decision tools,
e.g., a DSS based on a fuzzy cognitive map, could help physicians
build up a cognitive, balanced approach to clinical reasoning [68] aimed at
integrating guidelines and mindlines. A balanced model is surely needed, but
its effectiveness should be enhanced by the direct participation of the actors
in building decision support systems. In this sense, the lessons learned from
the use of fuzzy cognitive maps could be important in developing new
balancing (not necessarily debiasing) techniques. Hence, the final goal of such
a system should be the development and training of intuitive skills within the
realm of the EBM paradigm.
[1] Rasmussen J. The definition of human error and a taxonomy for
technical system design. In Rasmussen J., Duncan K., Leplat J. (eds),
New Technologies and Human Error. New York: John Wiley and Sons,
1987, p. 23-30.
[2] Reason J.T. Human Error. Cambridge: Cambridge University Press,
1990.
[3] Norman D.A. Categorization of action slips. Psychological Review
1981; 88; 1.
[4] Kahneman D. Tversky A. On the psychology of Prediction.
Psychological Review 1973; 80: 237-251.
[5] Yerkes R.M., Dodson J.D. The relation of strength of stimulus to
rapidity of habit-formation. Journal of Comparative Neurology and
Psychology 1908; 18: 459-482.
[6] Vincent C. Understanding and Responding to Adverse Events. New
England Journal of Medicine 2003; 348; 11: 1051-1056.
[7] Hall L.M., Pederson C., and Fairley L. Losing the moment:
Understanding interruptions to nurses’ work. Journal of Nursing
Administration 2010; 14; 4: 169-176.
[8] Westbrook J.I., Woods A., Rob M.I., Dunsmuir W.T.M., and Day R.O.
Association of interruptions with an increased risk and severity of
medication administration errors. Archives of Internal Medicine, 2010;
170; 8:683-690.
[9] Pravettoni G., Lucchiari C., Gorini A. and Vago G. The impact of
interruptions during clinical activities in the emergency department.
ODAM conference, April 2011, Grahamstown, South Africa.
[10] Wald H., Shojania K.G. Incident Reporting. In Making Health Care
Safer: A Critical Analysis of Patient Safety Practices, 2001.
[11] Baker G.R., Norton P.G., Flintoft V., et al. The Canadian Adverse
Events Study: the incidence of adverse events among hospital patients in
Canada. Canadian Medical Association Journal 2004; 170; 11: 1678-1686.
[12] Davis P., Lay-Yee R., Briant R., Ali W., Scott A., Schug S. Adverse
events in New Zealand public hospitals II: preventability and clinical
context. N. Z. Med. J. 2003; 116: U624.
[13] de Vries N.M., Ramrattan M.A., Smorenburg S.M., et al. The incidence
and nature of in-hospital adverse events: a systematic review. Qual. Saf.
Health Care 2008; 17: 216-223.
[14] Mamede S., Schmidt H.G. and Rikers R. Diagnostic errors and
reflective practice in medicine. J. Eval. Clin. Pract. 2007; 13: 138-145.
[15] Wachter R.M., Holmboe E.S. Diagnostic errors and patient safety.
JAMA 2009; 15: 258.
[16] Elstein A.S. Clinical judgment: psychological research and medical
practice. Science 1976; 12:696-700.
[17] Kirch W, Schafii C. Misdiagnosis at a university hospital in 4 medical
eras. Medicine 1996;75:29-40.
[18] Leape L.L. Error in medicine. JAMA 1994, 21:1851-7.
[19] Burroughs T.E., Waterman A.D., Gallagher T.H., Waterman B., Adams
D., Jeffe D.B., Dunagan et al. Patient Concerns about Medical Errors in
Emergency Departments. Acad. Emerg. Med. 2005; 12: 57-64.
[20] Shojania K., Burton E., McDonald K., et al. The autopsy as an outcome
and performance measure: evidence report/technology assessment.
Agency for Healthcare Research and Quality 2002. AHRQ Publication
No. 03-E002.
[21] Pidenda L.A., Hathwar V.S., Grand B.J. Clinical suspicion of fatal
pulmonary embolism. Chest 2001; 120: 791-795.
[22] Schiff G.D., Kim S., Abrams R., Cosby K., Lambert B., Elstein A.S.,
Hasler S., Krosnjar N. et al. Diagnosing Diagnosis Errors: Lessons from
a Multi-institutional Collaborative Project. In: Henriksen K, Battles JB,
Marks ES, Lewin DI (eds). Advances in Patient Safety: From Research
to Implementation. Rockville (MD): Agency for Healthcare Research
and Quality (US); 2005 Feb.
[23] Graber M.L., Franklin N., Gordon R. Diagnostic error in internal
medicine. Arch. Intern. Med. 2005; 165:1493-9.
[24] Reason J. Heroic Recoveries. Brookfield, VT: Ashgate Publishing Co.
[25] Reason J. Human Error. New York, NY: Cambridge University Press,
1990.
[26] Wilson R.M., Harrison B.T., Gibberd R.W., Hamilton J.D.. An analysis
of the causes of adverse events from the Quality in Australian Health
Care Study. Med. J. Aust. 1999; 170: 411-415.
[27] Blendon R.J., DesRoches C.M., Brodie M., et al. Views of practicing
physicians and the public on medical errors. N. Engl. J. Med.
[28] Deskin W.C., Hoye R.E. Another look at medical error. J. Surg. Oncol.
2004; 88: 122-129.
[29] Mendel R., Traut-Mattausch E., Jonas E., Leucht S., Kane J.M., Maino
K., Kissling W., Hamann J. Confirmation bias: why psychiatrists stick to
wrong preliminary diagnoses. Psychological Medicine 2011; 41; 12.
[30] Kahneman, D., Tversky, A.. Prospect Theory: An analysis of decision
under risk. Econometrica, 1979;47:111-132.
[31] Kahneman D., Tversky A. On the psychology of prediction. Psych. Rev.
1973; 80:237-251.
[32] Kahneman D., Tversky, A. Subjective probability: A judgment of
representativeness. Cog. Psych. 1972;3:430-454.
[33] Slovic P., Finucane M., Peters E., MacGregor D.G. The affect heuristic.
In T. Gilovich, D. Griffin, D. Kahneman (eds), Heuristics and Biases: The
Psychology of Intuitive Judgment (pp. 548-558). Cambridge, England:
Cambridge University Press, 2006.
[34] Gigerenzer G, Todd P.M., and the ABC Research Group. Simple
Heuristics That Make Us Smart. New York: Oxford University Press,
1999.
of choice. Science 1981; 211:453-458.
[36] Croskerry P., Abbass AA, Wu AW. How doctors feel: affective issues in
patients' safety. Lancet 2008; 372:1205-6.
[37] Croskerry P. A universal model of diagnostic reasoning. Acad. Med.
[38] Andre M., Borgquist L., Foldevi M., Molstad S. Asking for rules of
thumb: a way to discover tacit knowledge in general practice. Fam. Pract.
[39] Croskerry P. The importance of cognitive errors in diagnosis and
strategies to minimize them. Acad. Med. 2003; 78: 775-780.
[40] Klein J.G. Five pitfalls in decisions about diagnosis and
prescribing. BMJ 2005; 330: 781.
[41] Webster D.M., Kruglanski A.W. Cognitive and social consequences of
the need for cognitive closure. European Review of Social Psychology
1997; 8; 1.
[42] Christensen-Szalanski J.J.J. and Bushyhead J.B. Physicians' use
of probabilistic information in a real clinical setting. Journal of
Experimental Psychology: Human Perception and Performance 1981; 7: 928.
[43] Slovic P., Finucane M.L., Peters E., MacGregor D.G. The affect
heuristic. European Journal of Operational Research 2007; 177; 3:
1333-1352.
[44] Redelmeier, D., and Shafir, E. Medical decision making in situations
that offer multiple alternatives. Journal of the American Medical
Association 1995; 273; 4: 302-305.
[45] Stanovich K. Who Is Rational: Studies of Individual Differences in
Reasoning. Mahwah, N.J. : Lawrence Erlbaum Associates, 1999.
[46] Epstein S., Pacini R., Denes-Raj V., Heier H. Individual differences in
intuitive-experiential and analytical-rational thinking styles. J. Pers.
Soc. Psychol. 1996; 71: 390-405.
[47] Lucchiari C., Pravettoni G. Mind The Gap. Milan: Unicopli, 2010.
[48] Medical Research Council: Streptomycin treatment of pulmonary
tuberculosis. BMJ 1948, 2:769-782.
[49] Institute of Medicine. Clinical Practice Guidelines: Directions for a New
Program, M.J. Field and K.N. Lohr (eds.), 1990, Washington, DC:
National Academy Press. page 38.
[50] Gabbay J., le May A. Practice made perfect? Discovering the role of a
community of general practice. In le May A. (ed), Communities of
Practice in Health and Social Care. Oxford: Blackwell, 2009.
[51] Elstad E.A., Lutfey K.E., Marceau L.D., Campbell S.M., von dem
Knesebeck O., McKinlay J.B. What do physicians gain (and lose)
with experience? Qualitative results from a cross-national study of
diabetes. Soc. Sci. Med. 2010; 70: 1728-36.
[52] Klein G. Naturalistic Decision Making. Human Factors: The Journal of
the Human Factors and Ergonomics Society 2008;50:456-460.
[53] Croskerry P. Achieving quality in clinical decision making: cognitive
strategies and detection of bias. Acad. Emerg. Med. 2002; 9: 1184-1204.
[54] Wears R.L., Nemeth C.P. Replacing hindsight with insight: toward
better understanding of diagnostic failures. Ann. Emerg. Med. 2007; 49:
206.
[55] Elstein A.S. Thinking about diagnostic thinking: a 30-year perspective.
Adv. Health Sci. Educ. Theory Pract. 2009; 1: 7-18.
[56] Choudhry N.K., Fletcher R.H., Soumerai S.B. Systematic review: the
relationship between clinical experience and quality of health care. Ann.
Int. Med. 2005;142:260-273.
[57] Berner E.S., Graber M.L. Overconfidence as a cause of diagnostic error
in medicine. Am. J. Med. 2008;121:22-23.
[58] Norman G. Dual processing and diagnostic errors. Adv. Health
Sci. Educ. 2009; 14: 37-49.
[59] Stanovich K. Dysrationalia: a new specific learning disability. J.
Learn. Disabil. 1993; 26: 501-515.
[60] Croskerry P. Commentary: Lowly interns, more is merrier, and the
Casablanca Strategy. Acad. Med. 2011;86:8-10.
[61] Newman-Toker D.E., Pronovost P.J. Diagnostic errors: the next frontier
for patient safety. JAMA 2009; 301:1060-1062.
[62] Ruland C.M., Holte H.H., et al. Effects of a computer-supported
interactive tailored patient assessment tool on patient care, symptom
distress, and patients' need for symptom management support: a
randomized clinical trial. Journal of the American Medical Informatics
Association 2010; 17: 403-410.
[63] Norman G.R., Eva K.W. Diagnostic error in clinical reasoning. Med.
Educ. 2010;44:94-100.
[64] Hogarth R.M. Educating intuition. Chicago: University of Chicago
Press, 2001.
[65] Kosko B. Fuzzy cognitive maps. International Journal of Man-
Machine Studies 1986; 24: 65-75.
[66] Lucchiari C., Pravettoni G. Cognitive balanced model: a conceptual
scheme of diagnostic decision making. Journal of Evaluation in Clinical
Practice, 2012, 18, 1, 82-88.
... The decision research literature on heuristics and biases describe those factors that influence individuals' attention and their judgments, decision processes and choices (see [44][45][46][47][48][49][50][51]). Researchers have demonstrated that these factors affect people's decision making in the same way in health settings (see [52][53][54][55][56][57][58][59][60][61]). ...
... Patients make trial, treatment and screening decisions in the same way as they make other everyday choices, i.e. they attend selectively to the information 'out there' and employ heuristic and systematic processing strategies to evaluate their mental representations [67][68][69][70][71]. There are factors that occur both externally in the decision context and internally in the individual's cognitions (Fig. 2) that affect the way individuals attend to, assimilate and judge information in order to reach a decision (see [47,48,53,62,[72][73][74][75][76][77][78]). This means that the way information is communicated about treatment options and people's prior experiences, beliefs and preferences influence their choices rather than the facts themselves. ...
... A key research area for decision sciences has been to examine the way individuals' mental representation of the decision problem or decision frame changes depending on how information is presented [44,[46][47][48][49]53,[77][78][79][80]. The verbal and/or written and/ or pictorial packaging used to deliver the facts about decision options often leaks other types of information which shape the mental representation or frame of the decision [47,55,63,77]. ...
To discuss whether using the International Patient Decision Aids Standards (IPDAS) Collaboration checklist as a gold standard to judge interventions' quality is premature and potentially detrimental to the validity of resources designed to help patients make treatment choices. Conceptual review integrating the science behind individuals' decision making with the demands of designing complex, healthcare interventions. Patient decision aids are promoted as interventions to help professionals engage in shared and/or patient-centred care. The IPDAS domains were informed by experts' opinions of best practice. Decision scientists study how individuals make decisions, what biases their choices and how best to support decisions. There is debate from decision scientists about which component parts are the active ingredients that help people make decisions. Interventions to help patients make choices have different purposes, component parts and outcomes to those facilitating professional-patient communications. The IPDAS checklist will change to respond to new evidence from the decision sciences. Adhering uncritically to the IPDAS checklist may reduce service variation but is not sufficient to ensure interventions enable good patient decision making. Developers must be encouraged to reason about the IPDAS checklist to identify those component parts that do (not) meet their intervention's purpose.
... The role that cognitive bias plays within health care has recently gained greater appreciation. 15,16 Subtypes of Cognitive Bias Cognitive bias clearly plays a role in medical decision making and communication among health care professionals ( Figure 2). Croskerry and others have discussed how cognitive bias affects medical decisions, often for the worse; in some specialties, the rate of diagnostic error is thought to be as high as 15%. ...
Communication issues play a major role within neurosurgery. There has been a growing awareness of the necessity of enhanced patient-centered communication between the physician and patient to improve patient satisfaction, compliance, and outcomes. In addition, the threat of malpractice litigation within neurosurgery is of particular concern, and improved communication may lead to some degree of risk mitigation. Within the neurosurgical and medical team, effective transmittal of vital clinical data is essential for patient safety. Despite the recent recognition of the critical role that communication plays in all aspects of medical care, multiple impediments hinder the improvement and use of effective techniques. We have identified 8 unique barriers to the advancement of communication practices: lack of recognition of the importance of communication skills; cognitive bias; sense that it "takes too much time"; cultural hierarchy within medicine; lack of formal communication skill training; fear that disclosure of medical errors will lead to malpractice litigation; the electronic medical record; and frequent shift changes and handoffs.
... Furthermore, in the final sections of this essay, discussion of statistical prediction rules and the fact that they often outperform the best experts will further illustrate that a reliance on expert judgment to shore up decision-making is misguided. Chapman and Elstein (2000) also systematically review research regarding health and medical decision-making. They provide an excellent overview of specific areas, but their recommendations are constrained. ...
Empirical research in social psychology has provided robust support for the accuracy of the heuristics and biases approach to human judgment. This research, however, has not been systematically investigated regarding its potential applications for specific health care decision-makers. This paper makes the case for investigating the heuristics and biases approach in the patient-physician relationship and recommends strategic empirical research. It is argued that research will be valuable for particular decisions in the clinic and for examining and altering the background conditions of patient and physician decision-making.
Full-text available
Consent is central to many organizational interactions and obligations. Employees consent to various terms of employment, both formal (contractual obligations) and informal (extra-role responsibilities, interpersonal requests). Yet consent has traditionally been considered a legal matter, unrelated to organizational behavior. In this article, we make a case for why, and how, organizational behavior scholars should undertake the study of consent. We first review scholarship on the legal understanding of consent. We argue that the traditional legal understanding is an incomplete way to think about consent in organizations, and we call for a more nuanced understanding that incorporates psychological and philosophical insights about consent—particularly consent in employer-employee relationships. We then connect this understanding of consent to traditional organizational behavior topics (autonomy, fairness, and trust) and examine these connections within three organizational domains (employee surveillance, excessive work demands, and sexual harassment). We conclude with future directions for research on consent in organizations.
Purpose: Medical treatments and medical decision making are mostly human based and therefore in risk of being influenced by cognitive biases. The potential impact could lead to bad medical outcome, unnecessary harm or even death. The aim of this comprehensive literature study is to analyse the evidence whether healthcare professionals are biased, which biases are most relevant in medicine and how these biases may be reduced. Approach/Findings: The results of the comprehensive literature based meta-analysis confirm on the one hand that several biases are relevant in the medical decision and treatment process. On the other hand, the study shows that the empirical evidence on the impact of cognitive biases on clinical outcome is scarce for most biases and that further research is necessary in this field. Value/Practical Implications: Nevertheless, it is important to determine the extent to which biases in healthcare professionals translate into negative clinical outcomes such as misdiagnosis, delayed diagnosis, or mistreatment. Only this way, the importance of incorporating debiasing strategies into the clinical setting, and which biases to focus on, can be properly assessed. Research Limitations/Future Research: Though recent literature puts great emphasis on cognitive debiasing strategies, there are still very few approaches that have proven to be efficient. Due to the increasing degree of specialization in medicine, the relevance of the different biases varies. Paper type: Theoretical.
Our chapter has two primary goals. The first is to describe a model of the mechanisms underlying the "common-sense processes" involved in the everyday management of health risks. The second, intertwined with the first, is to apply the model to decisions and management of cancers in three areas: screening, care seeking, and end-of-life planning. We first spell out two themes underlying how the model represents the processes involved in people's everyday approach to "decision"-making for managing threats of cancers. The content of the Common-Sense Model (CSM) is spelled out, e.g., prototypes, representations of cancer, and treatments and action planning, as are the processes involved in the activation of mental models. The final sections address how these processes affect decisions to screen, treat, and decide among end-of-life alternatives.
Federal legislation (Health Information Technology for Economic and Clinical Health (HITECH) Act) has provided funds to support an unprecedented increase in health information technology (HIT) adoption for healthcare provider organizations and professionals throughout the U.S. While recognizing the promise that widespread HIT adoption and meaningful use can bring to efforts to improve the quality, safety, and efficiency of healthcare, the American Medical Informatics Association devoted its 2009 Annual Health Policy Meeting to consideration of unanticipated consequences that could result with the increased implementation of HIT. Conference participants focused on possible unintended and unanticipated, as well as undesirable, consequences of HIT implementation. They employed an input-output model to guide discussion on occurrence of these consequences in four domains: technical, human/cognitive, organizational, and fiscal/policy and regulation. The authors outline the conference's recommendations: (1) an enhanced research agenda to guide study into the causes, manifestations, and mitigation of unintended consequences resulting from HIT implementations; (2) creation of a framework to promote sharing of HIT implementation experiences and the development of best practices that minimize unintended consequences; and (3) recognition of the key role of the Federal Government in providing leadership and oversight in analyzing the effects of HIT-related implementations and policies.
Medical domains involve numerous tests and treatments, the sheer variety of which confounds both care providers and patients. Many of the decisions depend on the patient's preferences. Shared decision-making, which explicitly involves the patients and their preferences, is therefore imperative, but it also requires a sufficient explanatory infrastructure. We introduce a distributed application, PANDEX, that, by using decision-analytic methods, assists patients and care providers to reach optimal decisions. Based on a generic architecture, the PANDEX prototype, which focuses on the domain of genetic prenatal consultation, is designed to calculate the optimal treatment strategy based on the patient's clinical data and preferences. A major focus of our study was on developing several types of in-depth sensitivity and importance analysis methods and on the implementation of the respective graphical tools embedded in the system, to help the patient fully understand the recommended strategy. A preliminary assessment of PANDEX by six genetic consultants demonstrated a relative unwillingness to work with PANDEX (mean = 2.16 ± 0.98 (SD), on a scale of 1-5), but a tendency to agree with its recommended strategies (mean = 3.48 ± 1.4) and with its capability to provide insights into the recommended strategies (mean = 3.28 ± 1.23). PANDEX was considered by the consultants to be a potentially useful tool for patients.
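The core of a decision-analytic approach like the one PANDEX is built on is choosing the strategy with the highest expected utility, where outcome utilities encode the patient's preferences. A minimal sketch follows; the strategy names, probabilities, and utilities are purely illustrative assumptions, not values from PANDEX.

```python
# Toy expected-utility calculation for shared decision-making.
# Each strategy maps to a list of (probability, utility) outcome pairs,
# where utilities (0-1) reflect patient preferences. All numbers are
# hypothetical, chosen only to illustrate the mechanics.
strategies = {
    "test_then_treat": [(0.9, 0.95), (0.1, 0.40)],
    "treat_all":       [(1.0, 0.80)],
    "no_intervention": [(0.7, 1.00), (0.3, 0.20)],
}

def expected_utility(outcomes):
    """Sum of probability-weighted utilities for one strategy."""
    return sum(p * u for p, u in outcomes)

# Recommend the strategy maximizing expected utility for this patient.
best = max(strategies, key=lambda s: expected_utility(strategies[s]))
print(best)  # → test_then_treat
```

Sensitivity analysis, which PANDEX emphasizes, would then re-run this calculation while varying one probability or utility at a time to show the patient how robust the recommendation is.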
The limitations of the classical or traditional paradigm of decision research are increasingly apparent, even though there has been a substantial body of empirical research on medical decision-making over the past 40 years. As decision-support technology continues to proliferate in medical settings, it is imperative that "basic science" decision research develop a broader-based and more valid foundation for the study of medical decision-making as it occurs in the natural setting. This paper critically reviews both traditional and recent approaches to medical decision making, considering the integration of problem-solving and decision-making research paradigms, the role of conceptual knowledge in decision-making, and the emerging paradigm of naturalistic decision-making. We also provide an examination of technology-mediated decision-making. Expanding the scope of decision research will better enable us to understand optimal decision processes, suitable coping mechanisms under suboptimal conditions, the development of expertise in decision-making, and ways in which decision-support technology can successfully mediate decision processes.
Fetal reduction has been employed over the past two decades as a mechanism to reduce the morbidity and mortality of multiple pregnancies. Utilization of the procedure has increased dramatically as IVF has become commonplace, but the average starting number has decreased with the transfer of fewer embryos. Success rates from fetal reduction have improved as a function of increasing experience, better ultrasound, and lower starting numbers. Genetic diagnosis prior to reduction can improve the overall outcomes. Reduction of triplets or higher-order multiples clearly improves outcomes, and reduction of twins to a singleton is now a reasonable consideration.
We review the progress of naturalistic decision making (NDM) in the decade since the first conference on the subject in 1989. After setting out a brief history of NDM we identify its essential characteristics and consider five of its main contributions: recognition-primed decisions, coping with uncertainty, team decision making, decision errors, and methodology. NDM helped identify important areas of inquiry previously neglected (e.g. the use of expertise in sizing up situations and generating options), it introduced new models, conceptualizations, and methods, and recruited applied investigators into the field. Above all, NDM contributed a new perspective on how decisions (broadly defined as committing oneself to a certain course of action) are made. NDM still faces significant challenges, including improvement of the quantity and rigor of its empirical research, and confirming the validity of its prescriptive models. Copyright © 2001 John Wiley & Sons, Ltd.
The goal of this study was to determine the relative contribution of system-related and cognitive components to diagnostic error and to develop a comprehensive working taxonomy. One hundred cases of diagnostic error involving internists were identified through autopsy discrepancies, quality assurance activities, and voluntary reports. Each case was evaluated to identify system-related and cognitive factors underlying error using record reviews and, if possible, provider interviews. Ninety cases involved injury, including 33 deaths. The underlying contributions to error fell into 3 natural categories: "no fault," system-related, and cognitive. Seven cases reflected no-fault errors alone. In the remaining 93 cases, we identified 548 different system-related or cognitive factors (5.9 per case). System-related factors contributed to the diagnostic error in 65% of the cases and cognitive factors in 74%. The most common system-related factors involved problems with policies and procedures, inefficient processes, teamwork, and communication. The most common cognitive problems involved faulty synthesis. Premature closure, i.e., the failure to continue considering reasonable alternatives after an initial diagnosis was reached, was the single most common cause. Other common causes included faulty context generation, misjudging the salience of findings, faulty perception, and errors arising from the use of heuristics. Faulty or inadequate knowledge was uncommon. Diagnostic error is commonly multifactorial in origin, typically involving both system-related and cognitive factors. The results identify the dominant problems that should be targeted for additional research and early reduction; they also further the development of a comprehensive taxonomy for classifying diagnostic errors.
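The reported percentages already imply the study's multifactorial conclusion: since system-related factors appear in 65% of cases and cognitive factors in 74%, by inclusion-exclusion at least 39% of cases must involve both. A small check of this arithmetic, using only the figures given in the abstract:

```python
# Minimum share of cases involving BOTH factor types, by inclusion-exclusion.
# Percentages are taken from the abstract; the exact case-level overlap
# is not reported there, so this is only a lower bound.
system_pct = 65.0     # cases with system-related factors
cognitive_pct = 74.0  # cases with cognitive factors

min_both_pct = max(0.0, system_pct + cognitive_pct - 100.0)
print(min_both_pct)  # → 39.0

# The abstract's "5.9 per case" is consistent with 548 factors in 93 cases:
per_case = 548 / 93
print(round(per_case, 1))  # → 5.9
```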