The Psychology of Decision Making

Probability Judgment in Medicine: Discounting Unspecified Possibilities

DONALD A. REDELMEIER, MD, DEREK J. KOEHLER, PhD, VARDA LIBERMAN, PhD, AMOS TVERSKY, PhD
Research in cognitive psychology has indicated that alternative descriptions of the same event can give rise to different probability judgments. This observation has led to the development of a descriptive account, called support theory, which assumes that the judged probability of an explicit description of an event (that lists specific possibilities) generally exceeds the judged probability of an implicit description of the same event (that does not mention specific possibilities). To investigate this assumption in medical judgment, the authors presented physicians with brief clinical scenarios describing individual patients and elicited diagnostic and prognostic probability judgments. The results showed that the physicians tended to discount unspecified possibilities, as predicted by support theory. The authors suggest that an awareness of the discrepancy between intuitive judgments and the laws of chance may provide opportunities for improving medical decision making. Key words: probability judgment; support theory; unpacking principle; cognition. (Med Decis Making 1995;15:227-230)

Received January 14, 1994, from the University of Toronto, Toronto, Ontario, Canada (DAR); the Wellesley Hospital Research Institute, Toronto (DAR); Stanford University, Stanford, California (DJK, AT); and the Open University of Israel, Tel Aviv, Israel (VL). Revision accepted for publication August 18, 1994. Supported by a grant from the PSI Foundation of Ontario, by Grant #SBR-9408684 from the National Science Foundation, and by a Fulbright Visiting Scholarship Award. Dr. Redelmeier was supported by a career scientist award from the Ontario Ministry of Health, Dr. Koehler by a National Defence Science and Engineering Graduate Fellowship, and Drs. Liberman and Tversky by grant #92-00389 from the United States-Israel Binational Science Foundation. Address correspondence and reprint requests to Dr. Redelmeier: Clinical Epidemiology Division, The Wellesley Hospital Research Institute, 160 Wellesley St. East, 650 Turner Wing, Toronto, Ontario, Canada M4Y 1J3.
Medical decisions are often made under uncertainty. When evaluating a patient with chest pain, for example, a physician needs to consider the possibility that the patient is having a myocardial infarction, the risk of a serious hemorrhage if thrombolytics are administered, and the consequences if thrombolytics are not administered. Uncertainty can sometimes be reduced by collecting additional data, reviewing the scientific literature, and consulting experts. However, it cannot always be eliminated in a timely manner.1 As a consequence, action often depends on intuitive judgments of the likelihoods of various possibilities.

Research on judgment under uncertainty has shown that both laypeople and experts do not always follow the principles of probability theory.2,3 In particular, alternative representations of the same possibility can give rise to different probability judgments.4 To account for such observations, Tversky and Koehler5 have developed an account in which probability is assigned not to events, as in other models, but rather to descriptions of events, called hypotheses. This account, called support theory, assumes that each hypothesis refers to a unique event, but that a given event can be described by more than one hypothesis. For example, the explicit hypothesis "death due to traffic accident, drowning, electrocution, or any other accident" and the implicit hypothesis "death due to an accident" represent different descriptions of the same event. The central assumption of support theory is the unpacking principle: providing a more detailed description of an implicit hypothesis generally increases its judged probability. Thus, the judged probability of the explicit description that lists various accidents generally exceeds the judged probability of the implicit description that does not mention specific accidents. Like the measured length of a coastline, which increases as the map becomes more detailed, the perceived likelihood of an event increases as its description becomes more specific. Both memory and attention contribute to this effect: unpacking can remind people of possibilities they might have overlooked, and the explicit mentioning of a possibility may increase its salience and hence its perceived likelihood.
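For readers who want the formal statement, the core of the model can be written compactly; this is a sketch of the account given by Tversky and Koehler,5 not a new result of the present study. The judged probability that hypothesis A rather than its complementary hypothesis B holds is assumed to be

$$P(A, B) = \frac{s(A)}{s(A) + s(B)},$$

where s denotes the support, or perceived strength of evidence, of a hypothesis. When an implicit hypothesis A can be partitioned into exclusive components A_1 and A_2, the theory assumes

$$s(A) \le s(A_1 \vee A_2) \le s(A_1) + s(A_2),$$

which produces the unpacking effect. Because P(A, B) + P(B, A) = 1 by construction, the model also retains binary complementarity, discussed next.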
In accord with the classic theory of probability, support theory assumes that the judged probabilities of a hypothesis and of its complement add to unity. For example, the judged probability of the hypothesis "death due to a natural cause" and that of the hypothesis "death due to an unnatural cause" should sum to one, even though each judgment could be increased by unpacking the respective category. The unpacking effect, as well as binary complementarity, has been observed in several experiments involving nonmedical situations.5 The present article explores these principles in medical judgments. To do so, we presented clinicians with brief scenarios describing an individual patient and asked them to judge the probabilities of relevant medical possibilities.
Unpacking the Residual

In a survey of house officers (n = 59) at Stanford University, physicians were asked to review the following scenario:

    A well-known 22-year-old Hollywood actress presents to the emergency department with pain in the right lower quadrant of her abdomen of 12 hours' duration. Her last normal menstrual period was four weeks ago.
Half the physicians, selected at random, were asked to estimate probabilities for two diagnoses ("gastroenteritis" and "ectopic pregnancy") and the residual category ("none of the above"). The other physicians were asked to estimate probabilities for the following five diagnoses: the two diagnoses specified above ("gastroenteritis" and "ectopic pregnancy"), three additional specific diagnoses ("appendicitis," "pyelonephritis," and "pelvic inflammatory disease"), and the residual category ("none of the above"). The two tasks differed only in that the residual category in the first (short) list was partially unpacked in the second (long) list. All the physicians were told that the patient had only one condition and, hence, that the judged probabilities should add to 100%.
Logically, the probability of the residual "none of the above" in the short list should equal the sum of the probabilities of the corresponding possibilities in the long list. In accord with the unpacking principle, however, we found that the average probability assigned to the residual in the short list was smaller than the sum of the corresponding probabilities in the long list (50% vs 69%, p < 0.005 by Mann-Whitney test). As a consequence, unpacking the residual category changed the probabilities assigned to specific diagnoses. For example, the average probability assigned to "gastroenteritis" was substantially higher in the short list than in the long list (31% vs 16%, p < 0.005 by Mann-Whitney test). Evidently, unpacking the residual hypothesis reminded physicians of diseases they might have overlooked, or increased the salience of diagnoses that they had considered.
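The arithmetic behind this comparison can be laid out explicitly. Below is a minimal sketch in Python using only the group means reported above; the variable names are ours and no raw data are involved.

```python
# Average judged probabilities reported for the two randomized groups.
residual_short = 0.50     # "none of the above" on the short list
unpacked_long_sum = 0.69  # the three added diagnoses plus "none of
                          # the above" on the long list

# Both quantities describe the same event, so normatively they should
# be equal; the gap measures how much the implicit residual category
# was discounted.
print(f"discount of the residual: {unpacked_long_sum - residual_short:.0%}")

# The missing probability mass is absorbed by the named diagnoses.
gastro_short, gastro_long = 0.31, 0.16
print(f"shift for gastroenteritis: {gastro_short - gastro_long:.0%}")
```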
Highlighting One Possibility

In the previous example physicians were asked to assign probabilities to a set of possibilities. Often, however, physicians focus on a single possibility. In this case, they may be prone to overestimate the likelihood of that possibility because its alternatives are unspecified. To illustrate this point, we presented the following scenario to a group of expert physicians (n = 52) at Tel Aviv University:
    R.G. is a 67-year-old retired farmer who presents to the emergency department with chest pain of four hours' duration. The diagnosis is acute myocardial infarction. Physical examination shows no evidence of pulmonary edema, hypotension, or mental status changes. His EKG shows ST-segment elevation in the anterior leads, but no dysrhythmia or heart block. His past medical history is unremarkable. He is admitted to the hospital and treated in the usual manner. Consider the possible outcomes.
Each physician was randomly assigned to evaluate one of the following four prognoses for this patient: "dying during this admission," "surviving this admission but dying within one year," "living for more than one year but less than ten years," or "surviving for more than ten years." The average probabilities assigned to these prognoses were 14%, 26%, 55%, and 69%, respectively. According to standard theory, the probabilities assigned to these outcomes should sum to 100%. In contrast, the average judgments added to 164% (95% confidence interval: 134% to 194%). As implied by the unpacking principle, the physicians in each group overweighted the possibility that was explicitly mentioned relative to the unspecified alternative. All groups, indeed, overestimated the frequencies reported in the literature.6 Notice that while the results of the previous problem can be interpreted as a memory effect (reminding physicians of additional possibilities), the present results represent an attention effect (highlighting a particular interval on a continuum).
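A minimal sketch of the normative check, again using only the reported means; the renormalization at the end is shown as one possible correction and is not a procedure from the study itself.

```python
# Mean judged probability per group; each group saw only one prognosis.
judgments = {
    "dies during this admission":                0.14,
    "survives admission but dies within 1 year": 0.26,
    "lives more than 1 but less than 10 years":  0.55,
    "survives more than 10 years":               0.69,
}

total = sum(judgments.values())
print(f"sum of judged probabilities: {total:.0%}")  # 164%, not 100%

# One crude correction: renormalize so the four exclusive and
# exhaustive prognoses sum to one.
normalized = {name: p / total for name, p in judgments.items()}
```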
Binary Complementarity

We have attributed the preceding results to the unpacking principle. An alternative interpretation is that people overestimate the (focal) hypothesis that they are asked to evaluate. If this interpretation is correct, the sum of the judged probabilities for a pair of complementary hypotheses should exceed one. To test this prediction, we presented the preceding scenario to fourth-year medical students (n = 149) at the University of Toronto. Half the participants, selected at random, were asked to evaluate the probability that the patient would "survive this hospitalization." The other half were asked to evaluate the probability that the patient would "die during this hospitalization." We found that the mean judged probabilities in the two groups were 78% and 21%, respectively, summing to 99% (95% confidence interval: 94% to 104%). As implied by support theory, judged probabilities add to 100% in cases with only two possibilities, and exceed 100% in cases involving more than two possibilities. This observation demonstrates that people overestimate what is specified, not what is under evaluation.
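The logic of this test can be stated in the notation of the earlier sketch. Under the focal-overestimation account, both groups inflate whichever hypothesis they are given, so

$$P(A, \bar{A}) + P(\bar{A}, A) > 1,$$

whereas the ratio form of support theory forces

$$P(A, \bar{A}) + P(\bar{A}, A) = \frac{s(A)}{s(A) + s(\bar{A})} + \frac{s(\bar{A})}{s(\bar{A}) + s(A)} = 1.$$

The observed sum, 0.78 + 0.21 = 0.99, matches the second prediction.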
Treatment Decisions

The final example shows that the unpacking effect is not limited to probability judgments but can also extend to treatment decisions. We asked fourth-year medical students (n = 148) at the University of Toronto to consider the following scenario:
    M.S. is a 43-year-old journalist who presents to the emergency department because of a fever and headache of two days' duration. Past medical history is remarkable only for 15 years of lupus erythematosus, controlled on Tylenol and chronic steroids (prednisone 10 mg daily). She does not look sick. Vital signs are normal. Physical examination reveals tenderness over the frontal sinuses and pharyngeal erythema. There is no neck stiffness, tympanic membrane redness, or cervical adenopathy. The remainder of the physical examination is unremarkable aside from some degenerative changes in the small joints of both hands.
For half the students, selected at random, the scenario was followed by the sentence: "Obviously, many diagnoses are possible given this limited information, including CNS vasculitis, lupus cerebritis, intracranial opportunistic infection, sinusitis, and a subdural hematoma." The other half were presented with a shorter sentence: "Obviously, many diagnoses are possible given this limited information, including sinusitis." Individuals in both groups were asked to indicate whether they would recommend ordering a CAT scan of the head.
Logically, there should be no difference between the responses to the two versions because both describe the same situation. On the other hand, support theory suggests that the possibility of sinusitis will loom larger when it is the only specified diagnosis than when it is accompanied by other specified diagnoses. Consequently, we expected that fewer physicians would order a CAT scan in the short version because the diagnosis of sinusitis does not normally call for this test.7 Indeed, we found that fewer respondents recommended a CAT scan in response to the short version than in response to the long version (20% vs 32%, p < 0.05 by Mann-Whitney test). Thus, the unpacking principle applies to treatment recommendations, not only to probability judgments.
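The reported comparison can be reconstructed approximately, as in the sketch below. The group sizes are our assumption (n = 148 split evenly) and the counts are rounded from the published percentages, so this is an illustration rather than a reanalysis; the one-sided alternative reflects the directional prediction stated above.

```python
from scipy.stats import mannwhitneyu

# Assumed: 148 students split evenly; counts rounded from 20% and 32%.
n = 74
short_version = [1] * 15 + [0] * (n - 15)  # ~20% recommend a CAT scan
long_version  = [1] * 24 + [0] * (n - 24)  # ~32% recommend a CAT scan

# On binary responses the Mann-Whitney test amounts to comparing the
# two proportions.
stat, p = mannwhitneyu(short_version, long_version, alternative="less")
print(f"U = {stat:.0f}, one-sided p = {p:.3f}")
```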
Conclusion

Subjective assessments of uncertain events are sometimes necessary, even though they are often fallible. In this study we focused on a particularly significant source of error, namely, the tendency to discount unspecified possibilities. In the first problem we demonstrated the unpacking effect in a diagnostic task by reminding physicians of possibilities they might have overlooked. In the second problem we obtained the same effect in a prognostic task by highlighting a specific interval along a continuum. In the third problem we showed that the unpacking effect cannot be explained by overestimating the focal possibility. And in the final problem we illustrated the unpacking effect in a decision task. Together, the findings confirm the main qualitative predictions of support theory in medical judgment.
It could be argued that our respondents believed that the request to evaluate a particular hypothesis conveys relevant information and suggests that the hypothesis in question is not improbable. Although such a belief may contribute to the unpacking effect, it does not fully explain the data. First, this account implies an overweighting of the focal hypothesis, contrary to the finding of binary complementarity. Second, the unpacking effect was pronounced in the myocardial infarction example, where the experts were informed that other physicians were evaluating different hypotheses. Finally, the unpacking effect has also been observed in nonmedical problems where the respondents were made aware that the focal hypothesis had been randomly chosen.5
Although there is no simple method for eliminating the unpacking effect, we call attention to its presence and suggest some corrective procedures. First, clinicians need to recognize that judgments under uncertainty are susceptible to error; in particular, alternative descriptions of the same situation may lead to different judgments. Second, clinicians should be encouraged to unpack broad categories and compare possibilities at similar levels of specificity, rather than compare a single specific possibility against an unspecified set of alternatives. Indeed, unpacking the implicit complement of a focal hypothesis may serve as a useful method for reducing overconfidence. More generally, a better understanding of the cognitive psychology underlying medical judgment could help identify common biases and suggest corrective procedures.
References

1. Pauker SG, Kopelman RI. How sure is sure enough? N Engl J Med. 1992;326:688-91.
2. Kahneman D, Slovic P, Tversky A (eds). Judgment under Uncertainty: Heuristics and Biases. New York: Cambridge University Press, 1982.
3. von Winterfeldt D, Edwards W. Decision Analysis and Behavioral Research. New York: Cambridge University Press, 1986.
4. Fischhoff B, Slovic P, Lichtenstein S. Fault trees: sensitivity of estimated failure probabilities to problem representation. J Exp Psychol Hum Percept Perform. 1978;4:330-44.
5. Tversky A, Koehler DJ. Support theory: a nonextensional representation of subjective probability. Psychol Rev. 1994;101:547-67.
6. Goldberg RJ, Gore JM, Alpert JS, et al. Cardiogenic shock after acute myocardial infarction: incidence and mortality from a community-wide perspective, 1975 to 1988. N Engl J Med. 1991;325:1117-22.
7. Hourihane J. CT scans and the common cold. N Engl J Med. 1994;330:1826-7.