The Forensic Confirmation Bias: Problems, Perspectives, and Proposed Solutions

Abstract

As illustrated by the mistaken, high-profile fingerprint identification of Brandon Mayfield in the Madrid Bomber case, and consistent with a recent critique by the National Academy of Sciences (2009), it is clear that the forensic sciences are subject to contextual bias and fraught with error. In this article, we describe classic psychological research on primacy, expectancy effects, and observer effects, all of which indicate that context can taint people's perceptions, judgments, and behaviors. Then we describe recent studies indicating that confessions and other types of information can set into motion forensic confirmation biases that corrupt lay witness perceptions and memories as well as the judgments of experts in various domains of forensic science. Finally, we propose best practices that would reduce bias in the forensic laboratory as well as its influence in the courts.
Journal of Applied Research in Memory and Cognition 2 (2013) 42–52
Saul M. Kassin (a,*), Itiel E. Dror (b), Jeff Kukucka (a)

(a) John Jay College of Criminal Justice, United States
(b) University College London (UCL), United Kingdom
(*) Corresponding author. E-mail address: Skassin@jjay.cuny.edu (S.M. Kassin).

Article history: Received 28 October 2012; Received in revised form 29 December 2012; Accepted 3 January 2013.

Keywords: Context effects; Expectancy effects; Confirmation bias
© 2013 Society for Applied Research in Memory and Cognition. Published by Elsevier Inc. All rights reserved. http://dx.doi.org/10.1016/j.jarmac.2013.01.001
1. The problem
On March 11, 2004, a coordinated series of bombs exploded in four commuter trains in Madrid. The explosions killed 191 people, wounded 1800 others, and set into motion a full-scale international investigation. On the basis of a latent fingerprint lifted from a bag containing detonating devices, the U.S. Federal Bureau of Investigation (FBI) positively identified Brandon Mayfield, an American Muslim from the state of Oregon. Subsequent to 9/11, Mayfield had been on an FBI watch list. Following standard protocol, a number of FBI fingerprint examiners independently concluded that the fingerprint was definitely that of Mayfield. After being arrested and appearing in court, Mayfield requested to have a fingerprint examiner on the defense team examine the prints. That fingerprint examiner concurred with the judgment that the print was Mayfield's. Soon thereafter, however, the Spanish authorities matched the prints to the real Madrid bomber, an Algerian national by the name of Ouhnane Daoud. Following an internal investigation at the FBI and a report by the Office of the Inspector General (OIG, 2006), "confirmation bias" was listed as a contributing factor to the erroneous identification. At that point, the U.S. government issued a formal apology and paid two million dollars in compensation.
The FBI has rigorous standards of training and practice and highly competent forensic examiners. It is considered one of the best, if not the best, forensic laboratories in the U.S., if not in the entire world. Thus, it was not easy to dismiss the error and claim it to be the product of mere "bad apples." The Mayfield case (preceded by a decade in which the U.S. Supreme Court had sought to curb the introduction at trial of experts in junk science; see Daubert v. Merrell Dow Pharmaceuticals, 1993; Kumho Tire Co. v. Carmichael, 1999), along with the improprieties discovered in various state laboratories, has drawn attention to forensic science and to the fact that it is not infallible.
Forensic science errors have also surfaced with alarming frequency in DNA exoneration cases and other wrongful convictions (Garrett, 2011; http://www.innocenceproject.org/fix/Crime-Lab-Oversight.php). In "The genetics of innocence," Hampikian, West, and Akselrod (2011) found that several types of forensic science testimony had been used to wrongfully convict innocent individuals. In cases where trial transcripts or reliable forensic science data were available for review, 38% contained incorrect serology testimony, a discipline that is otherwise highly regarded. In addition, 22% involved hair comparisons; 3% involved bite mark comparisons; and 2% involved fingerprint comparisons.
The National Academy of Sciences (NAS, 2009) published a scathing assessment of a broad range of forensic disciplines. Included in this critique were toolmarks and firearms; hair and fiber analysis; impression evidence; blood spatter; fibers; handwriting; and even fingerprints, until recently considered infallible. NAS concluded that there are problems with standardization, reliability, accuracy and error, and the potential for contextual bias. Specifically, the NAS report went on to advise that: "These disciplines need to develop rigorous protocols to guide these subjective interpretations and pursue equally rigorous research and evaluation programs. The development of such research programs can benefit significantly from other areas, notably from the large body of research on the evaluation of observer performance in diagnostic medicine and from the findings of cognitive psychology on the potential for bias and error in human observers" (p. 8).
The criticisms of the forensic sciences are twofold. First is the realization that too often the stimulus does not compel a perceptual judgment that is objective and, hence, there is a concern both for inter-rater reliability across experts and for intra-test reliability over time within experts. In many forensic disciplines, the human examiner is the main instrument of analysis. It is the forensic expert who compares visual patterns and determines if they are "sufficiently similar" to conclude that they originate from the same source (e.g., whether two fingerprints were made by the same finger, whether two bullets were fired from the same gun, or whether two signatures were made by the same person). However, determinations of "sufficiently similar" are made without objective criteria or instruments of quantification; these judgments are subjective. Indeed, a recent study has shown that when the same fingerprint evidence is given to the same examiners, they reach different conclusions approximately 10% of the time (Ulery, Hicklin, Buscaglia, & Roberts, 2012). Dror et al. (2011) have shown not only that the decisions are inconsistent but that even the initial perception of the stimulus, prior to comparison, lacks inter- and intra-expert consistency.
Following from this realization about the lack of reliability is a corollary concern that forensic experts' judgments are "biasable," that is, significantly influenced by psychological factors (Dror & Cole, 2010; Dror & Rosenthal, 2008). The biasability of forensic science is a particular concern because forensic experts work within a variety of contextual influences: knowing the nature and details of the crime; being pressured by detectives; working within, and as part of, the police; the use of computer-generated lists that feature some suspects ahead of others; appearing in court within an adversarial criminal justice system. Describing the various sources of bias, Saks, Risinger, Rosenthal, and Thompson (2003) note that examiners often receive direct communications from police (e.g., in transmittal letters that accompany submitted evidence, in person, and by phone), that there is often cross-communication among different examiners involved in a case (e.g., via informal channels or as mandated in "peer review" processes designed to ensure the reasonableness of conclusions), and that police and prosecutors sometimes respond to non-supportive test results by requesting a re-examination. In short, the contextual influences that impinge on forensic examiners are numerous and they come in many forms, some of which are subtle.
The erroneous identification in the Madrid bomber case illustrated a number of psychological factors at work (e.g., the latent fingerprint was examined against a pre-existing "target," without first being properly analyzed in isolation; the examiners were pre-armed with contextual information, leading them to be suspicious of their target; and the case was high-profile and time-urgent, increasing the need for closure).
In this article, we overview prior critiques of the forensic sciences and specific cases in which experts have rendered judgments that were fraught with bias and error. Then we consider classic psychological research on primacy, expectancy effects, and observer effects, and the various confirmation biases that can taint people's perceptions, judgments, and behaviors. Next, we examine recent empirical work on confirmation biases in various domains of forensic science. Finally, we use psychology to propose best practices that would minimize such effects, both in the crime laboratory and in the courtroom.
2. The forensic sciences: accuracy and error
For over 100 years, forensic science disciplines have produced evidence used both to prosecute and convict criminals and to exonerate and release those who are innocent. The domains of forensic science are varied and include judgments of fingerprints, firearms examinations, toolmarks, bite marks, tire and shoe impressions, bloodstain pattern analysis, handwriting, hair, coatings such as paint, chemicals (including drugs), materials such as fibers and fluids, fire and explosive analysis, digital evidence, and serological analysis. Since the 1990s, advances in DNA technology have proved particularly useful in these regards. Many previously unsolved crimes have been solved because of DNA samples left in hair, semen, blood, skin, and saliva. Often, however, these DNA cases have revealed that faulty forensic sciences have contributed to the wrongful convictions of innocent people.
As exposed by more than 300 DNA exonerations identified by the Innocence Project, two sets of problems have come to light: (1) forensic science judgments are often derived from inadequate testing and analysis, if not outright fabrication; and (2) experts often give imprecise or exaggerated testimony, drawing conclusions not supported by the data, in some cases drawing charges of misconduct. Indeed, some form of invalid or improper forensic science was a contributing factor in the original convictions of more than half of all DNA exonerees (Garrett, 2011; http://www.innocenceproject.org/understand/Unreliable-Limited-Science.php).
In cases that are not subject to bias, certain forensic sciences, such as latent fingerprint identification, offer a potentially powerful tool in administering justice (e.g., Tangen, Thompson, & McCarthy, 2011; Ulery, Hicklin, Buscaglia, & Roberts, 2011). In most domains, however, there are no quantitatively precise objective measures and no instruments of measurement, just partial samples from a crime scene to be compared against a particular suspect. No two patterns are identical, so an examiner invariably must determine whether they are "sufficiently similar" (a term that has yet to be defined or quantified) to conclude that they originate from the same source. The absence of objective standards is reflected in the lack of consistency not only between examiners but within examiners over time. Hence, not only do inter-examiner variations exist, but intra-examiner variations show that the same examiner inspecting the same data on multiple occasions may reach different conclusions (Ulery et al., 2012). The lack of reliability indicates that the identification process can be subjective and that judgments are susceptible to bias from other sources. This is especially problematic in cases that contain complex forms of forensic evidence, as is often true of evidence gathered at crime scenes.
Popular TV programs, such as CSI, communicate a false belief in the powers of forensic science, a problem that can be exacerbated when forensic experts overstate the strength of the evidence. Such occurrences are unsurprising when one considers the following: (1) across many domains, experts are often overconfident in their abilities (e.g., Baumann, Deber, & Thompson, 1991); (2) the courts, for the most part, have blindly accepted forensic science evidence without much scrutiny (Mnookin et al., 2011); (3) errors are often not apparent in the forensic sciences because ground truth is often not known as a matter of certainty; (4) many forensic examiners work for police and appear in court as advocates for the prosecution; and (5) many forensic examiners consider themselves objective and immune to bias. As stated by the Chair of the Fingerprint Society: "Any fingerprint examiner who comes to a decision on identification and is swayed either way in that decision making process under the influence of stories and gory images is either totally incapable of performing the noble tasks expected of him/her or is so immature he/she should seek employment at Disneyland" (Leadbetter, 2007).
3. Classic confirmation biases: a psychological perspective
Over the years, research has identified a number of confirmation biases by which people tend to seek, perceive, interpret, and create new evidence in ways that verify their preexisting beliefs. Confirmation biases are a pervasive psychological phenomenon. Classic studies showed that prior exposure to images of a face or a body, an animal or a human, or letters or numbers can bias what people see in an ambiguous figure. More recent research shows that our impressions of other people can similarly be tainted.
Recognition of confirmation bias as a human phenomenon is not new. Julius Caesar is credited with the observation that "Men freely believe that which they desire" (e.g., Hochschild, 2008). References can also be found in the writings of William Shakespeare and Francis Bacon (Risinger, Saks, Thompson, & Rosenthal, 2002). Indeed, Nickerson (1998) notes that confirmation biases may be implicated in "a significant fraction of the disputes, altercations, and misunderstandings that occur among individuals, groups, and nations," including, among others, the witch trials of Western Europe and New England, the continuation of ineffective medical treatments, inaccurate medical diagnoses, and adherence to erroneous scientific theories (p. 175).
3.1. Perceptual and cognitive effects
Contemporary work on confirmation biases began with classic research suggesting that the perception of a stimulus is not solely a function of the stimulus itself (i.e., "bottom-up" processing), but is also shaped by the qualities of the observer (i.e., "top-down" processing). For example, Bruner and Goodman (1947) asked children to estimate the size of coins from memory and found that children of low SES overestimated the size of the coins to a greater degree than did children of high SES. Bruner and Potter (1964) demonstrated that one's expectations can also interfere with visual recognition. Participants were shown photographs of common objects (e.g., a dog, a fire hydrant) that had been blurred to various degrees, and then watched as the pictures were gradually brought into focus. The blurrier the photographs were at the start, the less able participants were to correctly recognize the objects later. Bruner and Potter explained these results by noting that participants readily generated hypotheses about the blurry images and then maintained these beliefs even as the pictures came into focus. Using simple ambiguous ("reversible") figures, other research likewise showed that expectations shape perception (Boring, 1930; Leeper, 1935; for a compendium of such figures, see Fisher, 1968).
Recent studies have demonstrated similar effects using more complex stimuli. For example, Bressan and Dal Martello (2002) showed participants photographs of adult-child pairs and asked them to rate their facial resemblance. When led to believe that the adult and child were genetically related (e.g., parent and offspring), participants rated their facial similarity as higher, even when the two were not truly related. Other studies have similarly shown that people perceive more similarity between a suspect and a facial composite when led to believe the suspect is guilty (Charman, Gregory, & Carlucci, 2009), and that people hear more incrimination in degraded speech recordings when the interviewee is thought to be a crime suspect (Lange, Thomas, Dana, & Dawes, 2011).
To sum up: a wealth of evidence indicates that an observer's expectations can impact visual and auditory perception. Although similar effects can be driven by motivation (Balcetis & Dunning, 2006, 2010; Radel & Clement-Guillotin, 2012), confirmation biases are a natural and automatic feature of human cognition that can occur in the absence of self-interest (Nickerson, 1998) and operate without conscious awareness (Findley & Scott, 2006; Kunda, 1990).
3.2. Social perception effects
Strong expectancy effects can also contaminate the processes of social perception. This research literature can be traced to Asch's (1946) initial finding of primacy effects in impression formation, by which information about a person presented early in a sequence is weighed more heavily than information presented later, which is ignored, discounted, or assimilated into the early-formed impression. Illustrating the process of assimilation, or the "change of meaning" hypothesis, later research revealed that depending on one's first impression of a person, the word "proud" can mean self-respecting or conceited; "critical" can mean astute or picky; and "impulsive" can mean spontaneous or reckless (Hamilton & Zanna, 1974; Watkins & Peynircioglu, 1984).
As a result of these processes, additional research has shown that beliefs, once they take root, can persist even after the evidence on which they were based has been discredited (Anderson, Lepper, & Ross, 1980). In fact, the presence of objective evidence that can be selectively interpreted may exacerbate the biasing effects of pre-existing beliefs (Darley & Gross, 1983).
Research on confirmatory hypothesis testing also explains the power and resistance to change of first impressions. In a classic experiment, Wason (1960) gave participants a three-number sequence, challenged them to discern the rule used to generate the set, and found that very few discovered the correct rule because once they seized upon a hypothesis they would search only for confirming evidence (see also Klayman & Ha, 1987). In a social-interactional context, Snyder and Swann (1978) brought together pairs of participants for a getting-acquainted interview. In each pair, interviewers were led to believe that their partner was either introverted or extroverted. Expecting a certain kind of person, participants unwittingly sought evidence that would confirm their expectations: those in the introverted condition chose to ask mostly introvert-oriented questions ("Have you ever felt left out of some social group?"); those in the extroverted condition asked extrovert-oriented questions ("How do you liven up a party?"). In doing so, interviewers procured support for their beliefs, causing neutral observers who later listened to the tapes to perceive the interviewees as introverted or extroverted on the basis of their randomly assigned condition.
The fact that people can be jaded by existing beliefs is a phenomenon of potential consequence in forensic settings. In one study, participants reviewed a mock police file of a crime investigation that contained weak circumstantial evidence pointing to a possible suspect. Some participants but not others were asked to form and state an initial hypothesis as to the likely offender. Those who did so proceeded to search for additional evidence and interpret that evidence in ways that confirmed their hypothesis. Hence, a weak suspect became the prime suspect (O'Brien, 2009).
In another study, Kassin, Goldstein, and Savitsky (2003) had some participants but not others commit a mock crime, after which all were questioned by interrogators who, by random assignment, were led to presume guilt or innocence. Interrogators who presumed guilt asked more incriminating questions, conducted more coercive interrogations, and tried harder to get the suspect to confess. In turn, this more aggressive style made the suspects sound defensive and led observers who later listened to the tapes to judge them as guilty, even when they were innocent. Follow-up research has confirmed variants of this latter chain of events in the context of suspect interviews (Hill, Memon, & McGeorge, 2008; Narchet, Meissner, & Russano, 2011).
An individual's prior beliefs can produce dramatic behavioral consequences as well, often setting into motion a three-step behavioral confirmation process by which a perceiver forms an impression of a target person, interacts in a manner that is consistent with that impression, and causes the target person unwittingly to adjust his or her behavior. The net result: a process that transforms expectations into reality (Darley & Fazio, 1980; Rosenthal & Jacobson, 1966; Snyder & Swann, 1978).
In an early demonstration of this phenomenon, Rosenthal and Fode (1963) reported on an experimenter expectancy effect, whereby an experimenter who is aware of the hypothesis of a study and the condition to which a participant is assigned can unwittingly produce results consistent with the expected outcome. Thus, when students were led to believe that the rats they would be training at maze learning were bright or dull, those rats believed to be bright learned more quickly (for an overview of this research, see Rosenthal, 2002). In subsequent research on teacher expectancy effects, Rosenthal and Jacobson (1966) extended these findings to human participants and found that when elementary school teachers were led to believe that certain of their students, randomly assigned, were on the verge of an intellectual growth spurt, those selected students exhibited greater improvement on academic tests eight months later. Whether training rats or teaching students, it appears that people unwittingly act upon their beliefs in ways that produce the expected outcomes. Although the interpretation of the teacher expectancy effect is a source of some controversy (Jussim, 2012), self-fulfilling prophecies have amply been demonstrated not only in the laboratory but in schools and other types of organizations as well (for reviews, see Kierein & Gold, 2000; McNatt, 2000).
3.3. Cognitive and motivational sources of bias
It is clear that belief-confirming thought processes are an inherent feature of human cognition. In their classic studies, Tversky and Kahneman (1974) demonstrated that people naturally rely on various cognitive heuristics and that heuristic thinking, while generally beneficial, can also produce systematic errors in judgment, especially where strong prior expectations exist. Over time, and across a range of domains, basic psychological research has shown that strong expectations provide a sufficient and unwitting trigger of our tendency to seek, perceive, interpret, and create new evidence in ways that verify preexisting beliefs.
At times, confirmation biases can be fueled by motivational goals. Kunda (1990) argued that motivation influences reasoning indirectly as a result of two types of goals: accuracy goals, where individuals strive to form an accurate belief or judgment, and directional goals, where individuals seek a particular desired conclusion. In the latter case, people maintain an "illusion of objectivity" that prevents them from recognizing that their cognition has been tainted by preference or desire (Kunda, 1990, p. 483). Motivated reasoning is pervasive. Hence, people exhibit a ubiquitous self-serving positivity bias in the attributions they make for their own successes and failures (Mezulis, Abramson, Hyde, & Hankin, 2004). Likewise, people's attributions for external events are influenced by their political ideologies (Skitka, Mullen, Griffin, Hutchinson, & Chamberlin, 2002).
Recent empirical research supports the notion that directional goals can unconsciously guide perception. In a series of studies, Balcetis and Dunning (2006) showed participants an ambiguous figure that could be readily perceived as either of two different stimuli (e.g., the letter "B" or the number "13"). Depending on which stimulus they perceived, participants were assigned either to drink orange juice or a foul-smelling beverage. For those told that a letter would assign them to the orange juice condition, 72% saw the letter B; for those told that a number would assign them to the orange juice, 61% saw the number 13. Using an array of methods, follow-up studies showed that these results were not due to selective reporting but rather that motivation had a genuine unconscious effect on perception. In additional research on "wishful seeing," Balcetis and Dunning (2010) found that people judged objects that they want as physically closer than more neutral objects (e.g., participants who were thirsty, compared to those who were quenched, estimated that a bottle of water across a table was closer to them).
Perceptions of form and distance are not limitlessly malleable, even among people who are highly motivated. As Kunda (1990) noted, "people do not seem to be at liberty to conclude whatever they want to conclude merely because they want to" (p. 482). To some extent, reality constrains perception. Evidence in favor of one's biased judgment must be sufficient to allow for the construction of that judgment; a desired outcome cannot be rationalized in the face of irrefutable evidence to the contrary. This is precisely why ambiguous stimuli prove particularly susceptible to confirmation biases. It is also why many forensic judgments are subject to bias.
4. The forensic confirmation bias
Nearly 40 years ago, Tversky and Kahneman (1974) reasoned that confirmation bias effects could extend to the legal system insofar as "beliefs concerning the likelihood of . . . the guilt of a defendant" could impact judicial decision-making (p. 1124). They further speculated that the operation of such biases would affect not only the layperson but also experienced professionals. These statements proved quite prescient. Empirical and anecdotal evidence now suggests that pre-judgment expectations can indeed influence interrogators (Hill et al., 2008; Kassin, Goldstein, & Savitsky, 2003; Narchet et al., 2011), jurors (Charman et al., 2009; Lange et al., 2011), judges (Halverson, Hallahan, Hart, & Rosenthal, 1997), eyewitnesses (Hasel & Kassin, 2009), and experts in a range of forensic domains (e.g., see Dror & Cole, 2010; Dror & Hampikian, 2011).
Thus, we use the term forensic confirmation bias to summarize the class of effects through which an individual's preexisting beliefs, expectations, motives, and situational context influence the collection, perception, and interpretation of evidence during the course of a criminal case. As Findley and Scott (2006) have noted, the pernicious result is a form of "tunnel vision": a rigid focus on one suspect that leads investigators to seek out and favor inculpatory evidence, while overlooking or discounting any exculpatory evidence that might exist. A growing body of literature has begun to identify the ways in which such biases can pervade the investigative and judicial processes.
4.1. Context effects on forensic judgments
In an 1894 treatise on distinguishing genuine from forged signatures, William Hagan wrote: "There must be no hypothesis at the commencement, and the examiner must depend wholly on what is seen, leaving out of consideration all suggestions or hints from interested parties. . . . Where the expert has no knowledge of the moral evidence or aspects of the case . . . there is nothing to mislead him" (p. 82). With this statement, Hagan was among the first scholars to acknowledge the potential biasing effect of expectation and context on perceptual judgments made by forensic examiners. It was not until recently, however, that empirical data emerged to support Hagan's admonition.
A growing body of work now suggests that confessions, a highly potent form of incrimination (Kassin, 1997; Kassin et al., 2010), and other strong contextual cues may bias forensic judgments in the criminal justice system, producing an effect that Kassin (2012) has called "corroboration inflation." Saks et al. (2003) note that the resulting non-independence among items of evidence can create an "investigative echo chamber" in which certain items reverberate and seem stronger and more numerous than they really are. Simon (2011) notes that coherence-based reasoning promotes false corroboration among different witnesses, resulting in trials that are limited in their diagnostic value. Dror (2012) notes that the overall effect on judgments can increase as a result, creating a "bias snowball effect."
To our knowledge, the first study to examine this effect was by Miller (1984), who explored the impact of contextual information on the judgments of 12 college students trained to identify forged signatures. Miller found that participants who were exposed to additional inculpatory evidence formed a belief in the suspect's guilt, which skewed their perceptions.
More recent work builds upon this finding. Kukucka and Kassin (2012) found that knowledge of a recanted confession can taint evaluations of handwriting evidence. In this study, lay participants read a bank robbery case in which the perpetrator gave a handwritten note to a bank teller. Soon afterward, they were told that a suspect was apprehended and interrogated, at which point he gave a handwritten Miranda waiver. Participants were asked to compare the handwriting samples taken from the perpetrator (bank note) and the defendant (Miranda waiver). When told that the defendant had confessed, even though he later retracted his confession, claiming it was coerced, participants perceived the handwriting samples as more similar and were more likely to conclude, erroneously, that they were authored by the same individual.
Other research indicates that interpretations of polygraph tests may also be shaped by preexisting beliefs. Elaad, Ginton, and Ben-Shakhar (1994) noted two ways in which expectations can impact the outcome of a polygraph test: by influencing the way examiners conduct their interviews and the questions they ask, and by influencing the conclusions they draw from the test results. To test the latter hypothesis, these investigators asked ten polygraph examiners from the Israeli Police to analyze 14 records from polygraph examinations of criminal suspects, all of which had been judged inconclusive by independent raters. Each chart was accompanied by biasing information: for half of the charts, examiners were told that the interviewee had later confessed; for the remaining half, they were told that someone else had later confessed. Although most charts were judged inconclusive in the absence of biasing information, the charts were more likely to be scored as deceptive in the suspect-confession condition and as truthful in the other-confession condition. This effect was obtained with both experienced and inexperienced examiners, but not when the charts were conclusive. Thus, the conclusions drawn from ambiguous polygraph results were influenced by prior expectations.
Additional studies suggest that even fingerprint judgments may be subject to bias. In one study, Dror, Charlton, and Peron (2006) asked five experienced fingerprint experts to assess pairs of fingerprints that, unbeknownst to them, they had examined years earlier and declared to be a match. Before the stimuli were re-presented, these examiners were told that the fingerprints were taken from a high-profile case of erroneous identification, implying that they were not a match. Given this biasing information, only one of the five experts judged the fingerprints to be a match, indicating that context undermined reliability. This study is particularly troubling because the change as a function of context was obtained among experienced examiners, in a highly trusted forensic science, and in a within-subject experimental design.
In a follow-up study, Dror and Charlton (2006) presented six latent fingerprint experts with eight pairs of prints from a crime scene and suspect in an actual case in which they had previously made a match or exclusion judgment. The participants did not know they were taking part in a study, believing instead that they were conducting routine casework. The prints were accompanied by no extraneous information; by information that the suspect had confessed, suggesting a match; or by information that the suspect was in custody at the time, suggesting exclusion. The results showed that contextual information in the custody condition produced an overall change in 17% of the originally correct match decisions.
Based on a meta-analysis of these two studies, Dror and Rosenthal (2008) estimated that the reliability of fingerprint experts' judgments over time likely falls in the range of 0.33–0.80, implying a considerable degree of subjectivity. Similarly, effect size estimates of biasability were 0.45 and 0.41, respectively, for the two studies. These findings are likely to extend to other forensic science domains that are based on visual similarity judgments, such as firearms; microscopic hair and fiber analysis; impression evidence involving shoeprints, bite marks, tire tracks, and handwriting; and bloodstain pattern analysis (Dror & Cole, 2010).
Additional research suggests that confessions can also influence the testimony of lay witnesses. Looking at the possible effects of confession on eyewitnesses themselves, Hasel and Kassin (2009) staged a theft and took photographic identification decisions from eyewitnesses who viewed a culprit-absent lineup. Two days later, individual witnesses were told that the person they had identified denied guilt during a subsequent interrogation, or that he confessed, or that a specific other lineup member confessed. Among those who had made a selection but were told that another lineup member confessed, 61% changed their identifications—and did so with confidence. Among those who had correctly not made an initial identification, 50% went on to select the confessor.
The biasing effect of confessions can have grave consequences. The criminal justice system presupposes that suspects, eyewitnesses, forensic experts, and others offer information that is independent—not subject to taint from outside influences. But does this presupposition describe the reality of criminal investigation? Both basic psychology and forensic psychology research suggest otherwise—and, in particular, suggest the possibility that confessions can corrupt other evidence. To determine if this phenomenon might occur in actual cases, Kassin, Bogart, and Kerner (2012) conducted an archival analysis of DNA exonerations from the Innocence Project case files. Testing the hypothesis that confessions may prompt additional evidentiary errors, they examined whether other contributing factors were present in DNA exoneration cases containing a false confession. They found that additional errors were present in 78% of these cases. In order of frequency, false confessions were accompanied by invalid or improper forensic science (63%), mistaken eyewitness identifications (29%), and snitches or informants (19%). Consistent with the causal hypothesis that the false confessions had influenced the subsequent errors, the confession was obtained first rather than later in the investigation in 65% of these cases.
Given the improprieties in U.S. laboratories, the frequency with which forensic science errors have surfaced in wrongful convictions, and the scathing critique from the National Academy of Sciences (2009)—which concluded that there are problems with standardization, reliability, accuracy and error, and the potential for contextual bias—it is not surprising that the most common means of corroboration for false confessions comes from bad forensic science (http://www.innocenceproject.org/). When coupled with recent laboratory studies, this presence of numerous forensic errors in Innocence Project confession cases suggests that confession evidence constitutes the kind of contextual bias that can skew expert judgments in many domains.
Con