Contextual bias and cross-contamination in the forensic sciences: the corrosive
implications for investigations, plea bargains, trials and appeals
Professor, Australian Research Council (ARC) Future Fellow; and Director, Expertise, Evidence & Law Program, School of Law, The University of New South Wales, Sydney 2052, Australia; and Professor (fractional), School of Law, Northumbria University, Newcastle City Campus, Ellison Pl, Newcastle upon Tyne, Tyne and Wear NE1 8ST, United Kingdom
School of Psychology, University of Queensland, Brisbane, St Lucia QLD 4072, Australia
Centre for the Forensic Sciences, University College London (UCL), Gower St, London
WC1E 6BT, United Kingdom
[Received on 20 February 2014; accepted on 31 August 2014]
Most forensic science evidence is produced in conditions that do not protect the analyst from contextual
information about the case that could sway their decision-making. This article explores how these
largely unrecognized threats raise real problems for the criminal justice system; from the collection
and interpretation of traces to the presentation and evaluation of evidence at trial and on appeal. It
explains how forensic analysts are routinely exposed to information (e.g. about the investigation or the
main suspect) that is not related to their analysis, and not documented in their reports, but has been
demonstrated to affect the interpretation of forensic science evidence. It also explains that not only are
forensic analysts gratuitously exposed to such ‘domain-irrelevant’ information, but their own cognitively contaminated interpretations and opinions are then often unnecessarily revealed to other witnesses—both lay and expert. This back and forth can create a ‘biasing snowball effect’ where evidence
is (increasingly) cross-contaminated, though represented, at trial and on appeal, as separate lines of
evidence independently corroborating one another. The article explains that lawyers and courts have not
recognized how contextual bias and cognitive processes may distort and undermine the probative value
of expert evidence. It suggests that courts should attend to the possibility of contextual bias and
cross-contamination when admitting and evaluating incriminating expert evidence.
Keywords: expert evidence; context effects; confirmation bias; cognitive science; human factors;
expectancy effects; corroboration; suggestion; priming; proof.
1. Contextual bias, cross-contamination and criminal justice
This article explores the pernicious, though largely unrecognized, influence that contextual factors and
cognitive processes may exert on the production of incriminating expert evidence and its presentation and evaluation in criminal proceedings.
Law, Probability and Risk (2014) 0, 1–25. doi:10.1093/lpr/mgu018
© The Author [2014]. Published by Oxford University Press. All rights reserved.
Law, Probability and Risk Advance Access published October 16, 2014
Drawing on decades of research from the cognitive sciences,
we explain how contemporary legal practice has been insensitive to processes that threaten to subvert
expert evidence and proof. Specifically, the article explains how the manner in which much forensic
science evidence is produced and presented needlessly introduces real risks of error.
Many forensic scientists are routinely exposed to information that is not relevant to their processing
and interpretation of evidence. Exposure to this domain-irrelevant information (e.g. about the suspect,
police suspicions and other aspects of the case) threatens the interpretation and value of their opinion
evidence. The manner in which forensic scientists are regularly and unnecessarily exposed to domain-
irrelevant information is rarely disclosed, raised or considered in plea negotiations, admissibility
decision-making or when different types of evidence are combined and assessed during investigations,
trials and appeals. The fact that relatively few forensic scientists are actively shielded from information
with the potential to mislead is hardly ever raised by prosecutors and judges or considered by jurors. In
consequence, even though incriminating expert evidence is routinely developed in conditions that are
known to produce errors, it is nevertheless portrayed as independent, objective and sometimes even
Whereas forensic scientists, lawyers and judges have been slowly sensitized to the dangers of
physical contamination, the dangers posed by cognitive contamination (where interpretations and
judgments are swayed, often without awareness or conscious control, by contextual cues, irrelevant
details of the case, prior experiences, expectations and institutional pressures) affecting the interpretation and evaluation of evidence, have not received serious consideration in trials and appeals. The cognitive processes that stand at the centre of many forensic science techniques—such as comparing
and interpreting traces and data—are not protected from the risks posed by exposure to extraneous
contextual information and other factors that may contaminate an analyst’s performance. Lack of
attention to cognitive processes, in conjunction with the continuing exposure of many forensic analysts
to domain-irrelevant information (often in the guise of explicit suggestion), threatens the value of
expert evidence and legal decision-making.
Adding to the complexity and dangers, forensic science evidence is routinely represented—in
investigations, plea negotiations, trials and appeals—as independent corroboration for other strands
of incriminating evidence. Claims about independence and corroboration persist even where the expert
evidence may have been influenced by the other strands of evidence (and vice versa).
The problem is not only that forensic science evidence can be biased (by what the
detective tells the examiners, the context of the case, and so on), but that it can bias
other lines of evidence. For example, if one piece of forensic evidence (biased or not)
is known to other forensic examiners who are analyzing other forensic evidence, then their
examination may be affected and biased by their knowledge of the results of the other
Because these issues infect all jurisdictions where humans perceive information, make judgments and interpret, we have
intentionally kept the article general in nature.
We identified no sustained discussion or responses to ‘contextual bias’ or ‘cognitive bias’ in reported appellate judgments in England, Australia and Canada, though there are several passing references in Laing & Anor v. R [2013] EWCA Crim 1836, [49]; Resolution Chemicals Ltd v. H. Lundbeck A/S [2013] EWHC 3160, [54], [66]; Webber & Hatton [2013] FamCA 15; Fonteyn v. Candetti Constructions Pty Ltd [2010] SAIRC 43; R v. Wiens 2013 BCSC 1539.
For example, a fingerprint analyst conducting a comparison might know that the suspect, whose prints are being examined, made a confession; and an eyewitness might be told that the person who they thought looked like the offender was ‘confirmed’ by fingerprint evidence. This kind of feedback tends to strengthen the confidence of the witnesses (especially eyewitnesses), but has no obvious correlation with accuracy.
2of25 G. EDMOND ET AL.
piece of evidence (for example, a forensic examiner looking at bite marks may be influ-
enced and biased in their examination if they know that fingerprint evidence shows the
suspect is guilty).
Forensic evidence can also bias other lines of evidence. For example, eyewitnesses can
be affected. ...
When they affect and influence one another, then their value and reliability is dimin-
ished. Furthermore, because one piece of evidence influences another, then greater dis-
tortive power is gathered as more evidence is affected (and affecting) other lines of
evidence, causing an increasing snowball of bias.
The dangers posed by cross-contamination, our snowball effect, go in all directions—non-scientific
evidence influencing forensic science, the results of forensic science analyses influencing
the evidence of non-expert witnesses, as well as the results of one forensic science analysis influencing
the results of another. Endeavouring to capture the seriousness of the threat to proof, Simon
has described non-independent ‘corroboration’ as ‘pseudo-corroboration’.
The failure to shield
forensic scientists from information that is not required for their analyses threatens the
objectivity, independence, impartiality and the value of their evidence as well as the standard of
criminal proof.
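The compounding character of this cross-contamination can be sketched with a toy simulation. This is an illustrative sketch of our own, not a model drawn from forensic data: the false-positive rates and the bias increment per revealed result are invented assumptions. Three ‘lines of evidence’ each have a small chance of falsely incriminating an innocent suspect; when later examiners learn of earlier positive results, their own chance of a false positive is inflated.

```python
import random

def incriminates(base_fp, prior_positives, bias_per_cue):
    """One examiner's chance of a false positive against an innocent suspect,
    inflated by each earlier 'positive' result the examiner learns of."""
    p = min(1.0, base_fp + bias_per_cue * prior_positives)
    return random.random() < p

def trial(n_lines=3, base_fp=0.05, bias_per_cue=0.0):
    """True if every line of evidence incriminates an innocent suspect."""
    positives = 0
    for _ in range(n_lines):
        if incriminates(base_fp, positives, bias_per_cue):
            positives += 1
    return positives == n_lines

def estimate(bias_per_cue, n=100_000):
    """Monte Carlo estimate of the chance that all lines falsely 'corroborate'."""
    random.seed(0)
    return sum(trial(bias_per_cue=bias_per_cue) for _ in range(n)) / n

independent = estimate(bias_per_cue=0.0)   # examiners shielded from each other
snowball = estimate(bias_per_cue=0.15)     # each examiner sees earlier results
```

With these assumed numbers, shielded examiners jointly err at roughly the product of their individual error rates, whereas each revealed result lifts the next examiner's error rate, so the chance that all three lines appear to ‘independently corroborate’ a false conclusion rises by more than an order of magnitude.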
2. Contextual bias and cognitive contamination: the scientific research
This section begins with a brief overview of scientific research on contextual influences, biases and the
(pliable) nature of human interpretation and decision-making, then shifts to focus on the forensic sciences.
2.1 The human mind and cognitive architecture: contextual influences and biases
Human perception and memory do not operate like a video camera that reliably captures and stores
every detail of our experience. They do not provide direct (or unmediated) access to our world nor
allow us to re-‘view’ perceptions at some later stage. Instead, the way we perceive the world
and remember events is shaped by our experiences and beliefs as well as the context and stimuli.
We use our prior knowledge and contextual cues to help sift, sort and weigh the vast amount of
information delivered through our senses. Our perception is moulded with each new experience and
the lens through which we see the world, adjusted. We evolved, and continue to live, in a highly
complex and variable environment. In order to navigate the complexity, we chronically make
inferences on the basis of simplified models and heuristics. In coping with the normal stimuli of our everyday lives, we are crucially dependent on processes and knowledge of which we are often unaware.
DROR, I. E. (2012). Cognitive Bias in Forensic Science. Yearbook of Science and Technology (McGraw-Hill), pp. 43–45. See also KASSIN, S. M., DROR, I. E. & KUKUCKA, J. (2013). The Forensic Confirmation Bias: Problems, Perspectives, and Proposed Solutions. Journal of Applied Research in Memory and Cognition, 2, 42–52; and DROR, I. E. & STOEL, R. (2014). Cognitive Forensics: Human Cognition, Contextual Information and Bias. Encyclopedia of Criminology and Criminal Justice, Springer, New York, 353–363. It is not our intention to suggest that evidence, particularly opinion evidence, has a proper value, but rather to recognize an appropriate (or indicative) range of values based on formal evaluation.
See SIMON, D. (2012). In Doubt: The Psychology of the Criminal Justice Process, Harvard University Press, Cambridge, MA.
The fact that environmental factors (or attributes of the stimulus or situation) can influence our
perception and interpretation of an object or event is often referred to as a context effect.
Such effects
are notorious and widespread.
One of the most basic exemplifications is a simultaneous contrast effect
where context changes the appearance of an object (e.g., lightness, length, area, orientation, colour,
etc.), but the physical properties of the object remain unchanged. For example, a grey patch will appear
whiter against a dark background than against a white background. An example of a context effect is
depicted in Figure 1. The squares marked A and B are the same shade of grey, yet B appears to be
lighter than A. Why? Because, in a three-dimensional world, objects in shadow (e.g. B) reflect less
light than objects in full illumination (e.g. A). We treat the two-dimensional image on the page ‘as if’ it comprises three-dimensional objects in the world where objects cast shadows and reflect light.
Similar contrast effects can be observed by, for example, staring at a green patch. It will make a
subsequent grey patch appear pinkish.
Staring at a face that is fat, happy, contracted, male, etc., causes
successive neutral faces to appear thin, sad, expanded, female, etc.
The properties of one perceptual
FIG. 1. The checker shadow illusion by Edward H. Adelson
The term context effect is a general description that encompasses all aspects of the stimulus itself, the situation, and the previous experience or expectations of the subject. Others have grouped together a variety of terms to refer to this general phenomenon (e.g. observer effects, expectancy effects, cueing, priming, top–down processing, constructive perception, perceptual set, etc.). For a general review and discussion of context effects in forensic science, see SAKS, M. J., RISINGER, D. M., ROSENTHAL, R. & THOMPSON, W. C. (2003). Context Effects in Forensic Science: A Review and Application of the Science of Science to Crime Laboratory Practice in the United States. Science & Justice, 43, 77–90; RISINGER, D. M., SAKS, M. J., THOMPSON, W. C. & ROSENTHAL, R. (2002). The Daubert/Kumho Implications of Observer Effects in Forensic Science: Hidden Problems of Expectation and Suggestion. California Law Review, 90, 1–56.
NICKERSON, R. S. (1998). Confirmation Bias: A Ubiquitous Phenomenon in Many Guises. Review of General Psychology,
2, 175.
ADELSON, E. H. (1995). Checkershadow Illusion, [Accessed 1 October 2014].
HEINEMANN, E. G. (1955). Simultaneous Brightness Induction as a Function of Inducing- and Test-Field Luminances. Journal of Experimental Psychology, 50, 89.
For a review of the face adaptation effect, see HILLS, P. J., HOLLAND, A. M. & LEWIS, M. B. (2010). Aftereffects for Face Attributes with Different Natural Variability: Children are More Adaptable than Adolescents. Cognitive Development, 25,
stimulus change the way that we perceive or interpret subsequent stimuli. These phenomena are not limited to perceptual contexts; they extend to expectations as well.
Another type of context effect is priming (also known as cueing or suggestion), where exposure to
some stimulus (such as a set of words or images or information) can influence subsequent judgments,
decisions and choices. For example, if you read the word ‘eat’, and are then asked to complete the word
S_ _P, you are much more likely to fill in the blanks to read ‘soup’ than ‘soap’. On the other hand, if you are primed by the idea or the word ‘washing’, you are more likely to fill in the blanks to read ‘soap’. People who are primed with the concept of ‘money’ (i.e., by completing a word descrambling task related to money) tend to (temporarily) behave and respond to questions in a more selfish, self-reliant and individualistic manner.
These priming effects are commonplace across experimental
psychology, where activating a particular concept, attitude or goal can change the context of the
situation and thereby alter how an individual responds.
Context effects are not limited to cues in the environment. Our body and actions shape the way that
we perceive and interpret the world.
Most of the time, contextual cues help us to make appropriate
judgments and decisions. Our perceptual and cognitive systems go to a lot of trouble to ensure that what we see, hear and remember corresponds to what is actually out there.
Research in cognitive science over the last several decades has revealed many regularities in
judgment and decision-making. These are often described as heuristics, biases or effects, to reflect
the systematic recurrence of these phenomena across individuals and circumstances, and as fallacies or
errors when they lead to error.
Research demonstrates that these regularities in judgment and decision-making are usually quite useful. They can, however, be detrimental, encouraging short cuts and mistakes. Indeed, even though the term ‘cognitive bias’ is neutral, simply capturing a systematic deviation—true for virtually every judgment we make—the term is typically used pejoratively, to refer to the errors that occasionally result.
278–289; and for a related interesting face contrast effect, see TANGEN, J. M., MURPHY, S. C. & THOMPSON, M. B. (2011). Flashed Face Distortion Effect: Grotesque Faces from Relative Spaces. Perception, 40, 628–630.
BRESSAN, P. & DAL MARTELLO, M. F. (2002). Talis Pater, Talis Filius: Perceived Resemblance and the Belief in Genetic Relatedness. Psychological Science, 13, 213–218.
See KAHNEMAN, D. (2011). Thinking, Fast and Slow. Farrar, Straus and Giroux, New York, for a more detailed description of priming and related experiments.
VOHS, K. D., MEAD, N. L. & GOODE, M. R. (2006). The Psychological Consequences of Money. Science, 314, 1154–1156.
DIJKSTERHUIS, A., SMITH, P. K., VAN BAAREN, R. B. & WIGBOLDUS, D. H. (2005). The Unconscious Consumer: Effects of Environment on Consumer Behavior. Journal of Consumer Psychology, 15, 193–202.
KILLEEN, P. R. & GLENBERG, A. M. (2010). Resituating Cognition. Comparative Cognition & Behavior Reviews, 5, 59–77; STRACK, F., MARTIN, L. L. & STEPPER, S. (1988). Inhibiting and Facilitating Conditions of the Human Smile: A Nonobtrusive Test of the Facial Feedback Hypothesis. Journal of Personality and Social Psychology, 54, 768; see GLENBERG, A. M., HAVAS, D., BECKER, R. & RINCK, M. (2005). Grounding Language in Bodily States: The Case for Emotion. In: The Grounding of Cognition: The Role of Perception and Action in Memory, Language, and Thinking (R. Zwaan & D. Pecher eds.) for discussion. See also BHALLA, M. & PROFFITT, D. R. (1999). Visual–Motor Recalibration in Geographical Slant Perception. Journal of Experimental Psychology: Human Perception and Performance, 25, 1076; WITT, J. K. & SUGOVIC, M. (2010). Performance and Ease Influence Perceived Speed. Perception, 39, 1341–1353; JOSTMANN, N. B., LAKENS, D. & SCHUBERT, T. W. (2009). Weight as an Embodiment of Importance. Psychological Science, 20, 1169–1174.
PINKER, S. (2003). The Blank Slate: The Modern Denial of Human Nature. Penguin, London. See also TALEB, N. N. (2007). The Black Swan: The Impact of the Highly Improbable. Random House, New York and Penguin, London.
See TVERSKY, A. & KAHNEMAN, D. (1974). Judgment under Uncertainty: Heuristics and Biases. Science, 185, 1124–1131 for the original manuscripts and KAHNEMAN, Thinking Fast and Slow. Farrar, Straus and Giroux, New York for a comprehensive review.
at University of New South Wales on October 19, 2014 from
Some of the cognitive biases identified through research include the following:
Confirmation bias refers to our tendency to search for and interpret information that confirms our
prior beliefs
(e.g., those who believe that arthritis pain is influenced by the weather will notice their
pain more during extreme weather events, but may pay less attention when the weather is fine).
When we know the outcome of an event, but try to act as if we do not, we tend to be influenced by knowledge of the outcome. This tendency is known as hindsight bias or the knew-it-all-along effect.
The anchoring effect describes our tendency to rely on the first piece of information offered (the
‘anchor’) when making decisions. Subsequent judgments are influenced by initial information or
beliefs (including when that information is unreliable or even arbitrary—e.g., a number on a
roulette wheel or drawn from a hat).
A preference to remain in the same state rather than taking a risk by moving to another state is called the status-quo bias (e.g., committing to the usual brand at the supermarket rather than risking an alternative).
Our perception of order in random sequences of coin tosses and stock market prices provides examples of the gambler’s fallacy.
We tend to underestimate how much time we need to complete a task, even when our experience
with very similar tasks suggests otherwise (e.g., in 1957, the initial plans for the Sydney Opera House proposed opening in 1963 at a cost of $7 million; a scaled-down version was opened in 1973 at a cost of $102 million). This phenomenon is aptly named the planning fallacy.
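The gambler’s fallacy lends itself to a quick demonstration. The following is an illustrative sketch of our own (the flip counts and repetition numbers are arbitrary choices, not drawn from the cited studies): genuinely random sequences routinely contain longer streaks than intuition expects, which is one reason people read order into coin tosses and stock prices.

```python
import random

def longest_run(flips):
    """Length of the longest streak of identical consecutive outcomes."""
    best = cur = 1
    for prev, nxt in zip(flips, flips[1:]):
        cur = cur + 1 if nxt == prev else 1
        best = max(best, cur)
    return best

random.seed(1)
# Longest streak in each of 10,000 sequences of 100 fair coin flips.
trials = [longest_run([random.random() < 0.5 for _ in range(100)])
          for _ in range(10_000)]
mean_longest = sum(trials) / len(trials)
```

People asked to fake a sequence of coin flips tend to avoid streaks longer than three or four, whereas a genuinely fair sequence of 100 flips typically contains a run of six or seven identical outcomes, so the real data look ‘suspiciously’ patterned.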
NICKERSON, R. S. (1998). Confirmation Bias: A Ubiquitous Phenomenon in Many Guises. Review of General Psychology, 2, 175.
REDELMEIER, D. A. & TVERSKY, A. (1996). On the Belief that Arthritis Pain is Related to the Weather. Proceedings of the National Academy of Sciences, 93, 2895–2896. See also: ABELL, G. O. & GREENSPAN, B. (1979). Human Births and the Phase of the Moon. The New England Journal of Medicine, 300, 96, for an example where people take more notice of hospital admissions during a full moon compared to other nights of the month. More generally, see BARKER BAUSELL, R. (2007). Snake Oil Science: The Truth about Complementary and Alternative Medicine. Oxford University Press, New York.
FISCHHOFF, B. & BEYTH, R. (1975). I Knew It Would Happen: Remembered Probabilities of Once–Future Things. Organizational Behavior and Human Performance, 13, 1–16; see ROESE, N. J. & VOHS, K. D. (2012). Hindsight Bias. Perspectives on Psychological Science, 7, 411–426, for a review.
TVERSKY, A. & KAHNEMAN, D. (1974). Judgment under Uncertainty: Heuristics and Biases. Science, 185, 1124–1131; CERVONE, D. & PEAKE, P. K. (1986). Anchoring, Efficacy, and Action: The Influence of Judgmental Heuristics on Self-Efficacy Judgments and Behavior. Journal of Personality and Social Psychology, 50, 492.
SAMUELSON, W. & ZECKHAUSER, R. (1988). Status Quo Bias in Decision Making. Journal of Risk and Uncertainty, 1, 7–59.
DHAR, R. (1997). Consumer Preference for a No-Choice Option. Journal of Consumer Research, 24, 215–231.
BAR-HILLEL, M. & WAGENAAR, W. A. (1991). The Perception of Randomness. Advances in Applied Mathematics, 12, 428–454; HUBER, J., KIRCHLER, M. & STÖCKL, T. (2010). The Hot Hand Belief and the Gambler’s Fallacy in Investment Decisions under Risk. Theory & Decision, 68, 445–462.
TVERSKY, A. & KAHNEMAN, D. (1971). Belief in the Law of Small Numbers. Psychological Bulletin, 76, 105.
KAHNEMAN, D. & TVERSKY, A. (1979). Intuitive Predictions: Biases and Corrective Procedures. TIMS Studies in Management Sciences, 12, 313–327. For a review, see BUEHLER, R., GRIFFIN, D. & ROSS, M. (2002). Inside the Planning Fallacy: The Causes and Consequences of Optimistic Time Predictions. In: Heuristics and Biases: The Psychology of Intuitive Judgment (T. D. Gilovich, D. W. Griffin & D. Kahneman eds.). Cambridge University Press, New York.
BUEHLER, R., GRIFFIN, D. & ROSS, M. (1994). Exploring the Planning Fallacy: Why People Underestimate their Task Completion Times. Journal of Personality and Social Psychology, 67, 366.
See e.g. FLYVBJERG, B. (2003). Megaprojects and Risk: An Anatomy of Ambition. Cambridge University Press,
Cambridge, UK.
Sometimes our basic pattern recognition abilities are shaped by very specific knowledge and ex-
pectations. For example, a ‘backward message’ attributed to rock music is easy to hear when you know
(through direct suggestion) what phrase to listen for. The effects of expectancy can make these ‘hidden
messages’ (or features) in ambiguous auditory or visual signals seem intelligible even when there is no
actual message or feature.
By knowing specifically what to look or listen for, or what to remember,
some characteristics that are consistent with a specific piece of information are ‘sharpened’, exaggerated and emphasized, whereas other characteristics that are inconsistent are ‘levelled’, toned down and weakened.
This general process of sharpening and levelling information that is consistent
with one’s expectations can be referred to generically as an expectancy effect.
Our ability to recognize patterns is also influenced by general knowledge and expectations. Under
most circumstances, using language as an example, we are unaware of our reliance on syntax, word
frequency, our background knowledge of specific individuals or the topic of conversation. Under
‘noisy’ conditions (e.g., reading untidy handwriting, listening to a telephone adjacent to a busy road,
tracking an unfamiliar accent or watching a badly distorted video), the pace seems fast, but our general
knowledge allows us to fill in the gaps and resolve ambiguity. As the ‘noise’ increases, our ability to fill gaps tends to deteriorate, making us more prone to mistakes.
The reliance on specific and general knowledge is what makes it difficult for machines to perceive
stimuli and complete some tasks that humans find trivial.
Computer scientists and researchers in
machine learning (e.g. A.I.) can attest to the incredible amount of stored information and processing
required for a computer to interpret seemingly simple information such as a handwritten post code on
an envelope.
Subjectively, however, we just open our eyes and apprehend it because we are all
experts in coping with the normal stimuli of our everyday lives. We are not even aware of having made
an interpretation, and we take for granted the many cognitive processes that lurk beneath the surface of
our external behaviour when we are the experts.
To non-experts, the cognitive feats that specialists are capable of performing often seem impressive,
even extraordinary. For example, chess masters can play the game blindfolded with high levels of accuracy, and can remember the exact configuration of pieces on a chessboard after only a few seconds of exposure (about 93% correct for configurations of about 25 pieces); radiologists can detect and name 70% of abnormalities in chest X-rays after seeing them for only 200 ms; and experienced waiters can memorize orders for up to 16 people without taking notes, while engaging in unrelated conversation.
VOKEY, J. D. & READ, J. D. (1985). Subliminal Messages: Between the Devil and the Media. American Psychologist, 40, 1231–1239. The New Zealand case of Bain raised a particularly striking example of this problem: Bain v. The Queen [2009] NZSC 16.
GIBSON, J. J. (1929). The Reproduction of Visually Perceived Forms. Journal of Experimental Psychology, 12, 1–39.
GILOVICH, T. (1991). How We Know What Isn’t So: The Fallibility of Human Reason in Everyday Life. Free Press, New York.
See COLLINS, H. (1990). Artificial Experts: Social Knowledge and Intelligent Machines. MIT Press, Cambridge, MA; DREYFUS, H. (1972). What Computers Can’t Do. MIT Press, Cambridge, MA.
LE CUN, Y., JACKEL, L. D., BOSER, B., DENKER, J. S., GRAF, H. P., GUYON, I. & HUBBARD, W. (1989). Handwritten Digit Recognition: Applications of Neural Network Chips and Automatic Learning. IEEE Communications Magazine, 27, 41–46.
HOLDING, D. H. (1985). The Psychology of Chess Skill. L. Erlbaum Assoc.
DE GROOT, A. D. (1978). Thought and Choice in Chess, 2nd edn. Mouton, The Hague (originally published 1946).
KUNDEL, H. L. & NODINE, C. F. (1975). Interpreting Chest Radiographs without Visual Search. Radiology, 116, 527–532.
ERICSSON, K. A. & POLSON, P. G. (1988). Memory for Restaurant Orders. In: The Nature of Expertise (M. Chi, R. Glaser & M. Farr eds.). Erlbaum, Hillsdale, NJ; ERICSSON, K. A. & POLSON, P. G. (1988). An Experimental Analysis of the Mechanisms of a Memory Skill. Journal of Experimental Psychology: Learning, Memory, and Cognition, 14, 305; TANGEN, J. M.
According to the well-established exemplar theory of categorization,
the identification of category
members in everyday classification (e.g., a bird, a table, or a car) or expert classification (e.g., an
abnormal chest X-ray, a patient with myocardial ischaemia, or a poor chess move) is effortless because
experts have acquired a large number of exemplars. They respond to new items by reference to their
similarity to those previously encountered. Often this sensitivity develops effortlessly and without any
intention to learn structures or categories.
Yet common experience and laboratory research alike demonstrate that this tacit sensitivity influences performance and expectations in virtually every task we undertake.
Our perception and cognition—the way we see, hear and remember the world—
is, in a very real sense, shaped by the sum of our experiences.
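Exemplar-based categorization can be sketched in a few lines of code. This is a toy nearest-neighbour illustration of our own; the two-feature vectors, the ‘bird’ and ‘car’ labels, and the function names are invented for the example. The point is that no explicit rule is ever written down: a new item is simply assigned the category of the stored exemplar it most resembles.

```python
def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def classify(item, exemplars):
    """Label a new item with the category of its most similar stored exemplar."""
    nearest = min(exemplars, key=lambda ex: distance(item, ex[0]))
    return nearest[1]

# Hypothetical stored exemplars: (feature vector, category).
exemplars = [((1.0, 1.2), "bird"), ((0.9, 1.0), "bird"),
             ((4.0, 3.8), "car"), ((4.2, 4.1), "car")]

label = classify((1.1, 0.9), exemplars)  # nearest stored exemplars are "bird"
```

Sensitivity to category structure falls out of the accumulated instances themselves, which is also why an expert’s similarity judgments can be nudged by whichever exemplars the surrounding context makes salient.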
Of course, the acquisition of expertise requires more than just an accumulation of experiences; it
requires good quality and timely feedback.
A ballet dancer practicing a plie or pirouette in front of a
mirror can see immediately which aspects of her posture need correcting and, with sufficient practice
and instruction, is able to refine even the most difficult and unnatural of movements. There are
situations within policing where feedback is similarly informative and immediate. In acquiring firearm
and marksmanship skills, for example, police recruits learn through training how slight changes in
position and breathing can affect accuracy and consistency between shots. Recruits see immediately
where they have hit the target.
Our sensitivity to feedback and cues in our environment does not always lead to desirable outcomes.
In some environments, we can use or ‘learn’ from the wrong cues. Robin Hogarth describes these as ‘wicked environments’ and provides the example of a physician in the early 20th century who reportedly developed a technique to diagnose patients with typhoid by palpating their tongues with his unwashed hands.
When each of his patients fell ill with typhoid, he mistakenly took this as positive
feedback that his intuitions and method were correct, when in fact he was simply transferring typhoid
from one patient to the next. Environments that are less structured and regular than ballet or the firing
range may create an illusion of skill where we judge our abilities to be more pronounced than they
actually are.
The stock market is a good example of an irregular environment, where it has not proved
possible to make accurate predictions about stock prices consistently. Nevertheless, professional in-
vestors and fund managers routinely make predictions about the future of stock prices, and despite
their confidence in these predictions (on average), regularly fail to outperform the market—demon-
strating a level of performance more akin to chance (and for many individual investors, often worse
& SEARSTON, R. A. (2014). Understanding Expertise and Non-Analytic Cognition in Fingerprint Discriminations Made by Humans. Frontiers in Psychology, 5, doi: 10.3389/fpsyg.2014.00737.
BROOKS, L. R. (1978). Nonanalytic Concept Formation and Memory for Instances. In: Cognition and Categorisation (E. Rosch & B. Lloyd eds.). Erlbaum, Hillsdale, NJ.
HOGARTH, R. (2010). Educating Intuition. University of Chicago Press, Chicago.
NORMAN, G., YOUNG, M. & BROOKS, L. (2007). Non-Analytical Models of Clinical Reasoning: The Role of Experience. Medical Education, 41, 1140–1145.
KAHNEMAN, Thinking, Fast and Slow.
HOGARTH, R. M. (2008). On the Learning of Intuition. In: Intuition in Judgment and Decision Making (H. Plessner, C. Betsch & T. Betsch eds.). Lawrence Erlbaum Associates, Inc, Mahwah, NJ.; THOMAS, L. (1983). The Youngest Science: Notes of a Medicine Watcher. Viking, New York, p. 22.
KAHNEMAN, Thinking, Fast and Slow. This might help explain the persistence of 'fields' and individuals purporting to be experts in forensic domains where there is no (evidence of) actual expertise.
8 of 25 G. EDMOND ET AL.
at University of New South Wales on October 19, 2014 from
than chance).
Without regularity or structure in the learning environment, no amount of practice and
feedback will generate genuine expertise.
As a result of the automaticity of context effects, we tend to believe that the information we receive
through our senses is an accurate reflection of the world, uncontaminated by our preferences, precon-
ceptions, beliefs and interpretations.
This naïve realist view of perception and cognition is mistaken and potentially misleading.
The inability to recognize the extent to which prior experience shapes
our judgments and decisions has been labelled the ‘curse of knowledge’.
When you know something,
it is difficult to put yourself in the shoes of someone who does not know it. Moreover, we tend to
believe that other people are influenced by prior experience, but that we are not—sometimes described
as 'bias blindness', a 'bias blind spot' or, more informally, the 'not me' fallacy.
Our beliefs and
experiences automatically influence our judgments, with little or no effort, and with no sense of
voluntary control.
These effects are not a sign of weakness and cannot be willed away; just as we
cannot use willpower to overcome the impressions created by most visual illusions—recall Figure 1.
2.2 Contextual bias in the forensic sciences
Forensic science evidence presented in court is often neatly packaged as an independent source of
evidence in the form of a detailed report and/or testimony from an impartial and experienced expert.
Confronted with the end-product, it is difficult to appreciate the many steps involved in producing this
evidence—from the time trace evidence is left at a crime scene, through collection, processing, ana-
lysis, interpretation (and verification) and preparation for presentation in court.
The process by which trace evidence is recovered involves many people with diverse experience and
backgrounds and varying levels and types of expertise, preconceptions and knowledge about the case.
Upon arrival at a crime scene, first responders and investigators are often faced with limited and
unconfirmed information about the (alleged) crime. They are required to reconstruct events and fill in
gaps with preliminary notifier reports, witness statements and impressions, usually under considerable
time and resource pressures. Quite often, chance plays a major role in the detection of evidence. The
types of specimens collected are also dependent on the tools and training available to investigators, the
kinds of analytical facilities and resources available, and the type of crime. Ultimately, the chance that
BOGLE, J.C. (2000). Common Sense on Mutual Funds: New Imperatives for the Intelligent Investor. Wiley, New York;
GRINBLATT, M. & TITMAN, S. (1992). The Persistence of Mutual Fund Performance. Journal of Finance, 47, 1977–1984; ELTON, E. J., et al. (1997). The Persistence of Risk-Adjusted Mutual Fund Performance. Journal of Business, 52, 1–33; and KAHNEMAN, Thinking, Fast and Slow, pp. 212–216 for a more expansive review and further examples of the illusion of skill in stock picking and in other professional fields. See also BARBER, B. M. & ODEAN, T. (2000). Trading is Hazardous to Your Wealth: The Common Stock Investment Performance of Individual Investors. Journal of Finance, 55, 773–806.
SEGALL, M., CAMPBELL, D. & HERSKOVITS, M. (1966). The Influence of Culture on Visual Perception. Bobbs-Merrill, New York.
ROSS, L. & WARD, A. (1995). Psychological Barriers to Dispute Resolution. Advances in Experimental Social Psychology, 27, 255–303.
CAMERER, C., LOEWENSTEIN, G. & WEBER, M. (1989). The Curse of Knowledge in Economic Settings: An Experimental Analysis. The Journal of Political Economy, 97, 1232–1254.
PRONIN, E., GILOVICH, T. & ROSS, L. (2004). Objectivity in the Eye of the Beholder: Divergent Perceptions of Bias in Self versus Others. Psychological Review, 111, 781–799.
LILIENFELD, S. O., AMMIRATI, R. & LANDFIELD, K. (2009). Giving Debiasing Away: Can Psychological Research on Correcting Cognitive Errors Promote Human Welfare? Perspectives on Psychological Science, 4, 390–398.
NISBETT, R. E. & WILSON, T. D. (1977). Telling More than We Can Know: Verbal Reports on Mental Processes. Psychological Review, 84, 231–259.
one piece of evidence is collected, as opposed to others, is driven by the perspectives and preconcep-
tions of investigators, organizational expectations and capacities, and the information available.
One of the dangers in any investigation is the failure to consider (or pursue) alternative possibilities.
There is a real risk that investigators and forensic analysts will engage in tunnel vision.
This is where,
usually unwittingly, investigators interpret evidence in a manner that tends to confirm the basic case
theory. Tunnel vision—which is related to sharpening and levelling, expectancy effects and confirm-
ation bias—illustrates the human tendency to hold onto previously formed theories, hypotheses or
beliefs even in the face of new or disconfirming evidence. Interpretive flexibility, in conjunction with
the potential for other information (e.g. beliefs, suspicions and other evidence) to cue expectations, may result in
traces, data and readouts being interpreted in ways that are consistent with expectations (and in ways
that are different to how the same data might have been interpreted if the investigator or forensic
analyst was shielded from gratuitous information, had different assumptions, or was exposed to the
evidence in a different sequence).
Although some failures and risks might be avoided, some cannot
be. For example, the assumptions, expectations and prior experiences of the investigator may bias the
interpretation of a crime scene and the collection of evidence.
Beyond the crime scene, it is a human analyst who is required to sort traces and samples, select and
run tests or undertake analyses in order to report on the findings. Almost all forensic science and
medicine techniques rely upon input and interpretation by analysts, often in order to link a trace or
sample to a particular person or source.
Some types of comparison or analysis rely on tools and/or
technologies to assist with the interpretation (e.g., a fingerprint database or image enhancement), but
the final decision almost always rests with the analyst. Ideally, the distinction between traces that
originate from the same source and those that originate from different sources would be obvious. In
reality, traces are regularly degraded (e.g., smudged or partial, mixed or poorly resolved) or quite
similar to the reference (e.g., the suspect’s profile), resulting in ambiguity and increasing scope for
erroneous interpretations.
There are a number of obvious ways in which contextual cues, observer and expectancy effects, as
well as anchoring and priming might influence the process of evaluating and interpreting evidence. If
someone is affected by domain-irrelevant information, they respond differently than they would if they
were not exposed to this extraneous information. This information might relate to the trace, the
suspect, or the case.
One study by Dror et al. demonstrated how exposure to domain-irrelevant
information led a small sample of latent fingerprint examiners to contradict their own opinions as to
whether the same two prints matched.
Dror et al. followed up this study by showing that other forms
of contextual information (e.g., being told that the suspect confessed, or emotion-evoking case information)
could also influence analyses by both novice and experienced fingerprint examiners.
FINDLEY, K. A. & SCOTT, M. S. (2006). Multiple Dimensions of Tunnel Vision in Criminal Cases. Wisconsin Law Review, 2.
KRANE, D., et al. (2008). Sequential Unmasking: A Means of Minimizing Observer Effects in Forensic DNA Interpretation. Journal of Forensic Science, 53, 1006.
TANGEN, J. M. (2013). Identification Personified. Australian Journal of Forensic Sciences, 45, 315.
THOMPSON, W. C. (2011). What Role should Investigative Facts play in the Evaluation of Scientific Evidence? Australian
Journal of Forensic Sciences, 43, 123–134.
DROR, I. E., CHARLTON, D. & PÉRON, A. E. (2005). Contextual Information Renders Experts Vulnerable to Making Erroneous Identifications. Forensic Science International, 156, 74–78.
DROR, I. E. & CHARLTON, D. (2006). Why Experts Make Errors. Journal of Forensic Identification, 56, 600–616; DROR, I. E., PÉRON, A. E., HIND, S. L. & CHARLTON, D. (2005). When Emotions Get the Better of Us: The Effect of Contextual Top-Down Processing on Matching Fingerprints. Applied Cognitive Psychology, 19, 799–809.
Subsequent studies also found that the interpretation of complex DNA profiles (e.g. mixtures), as well as forensic anthropological assessments of gender, age and race, were vulnerable to similar extraneous influences.
Our concern is with the information and conditions that adversely affect the analyst’s perception and
interpretation. Scientific research on confirmation bias suggests that ambiguous information that is
consistent with what is expected tends to be sharpened, whereas ambiguous information that is in-
consistent with what is expected tends to be levelled.
The extent to which information is regarded as
ambiguous presumably varies between individuals, and the same individual's performance may vary
over time. Hence, ambiguous information may not seem ambiguous as a result of confirmation
bias; ambiguity is very much in the 'eye of the beholder'. The extent to which these sharpening and levelling
processes will affect the examiner’s interpretation also depends on a number of other factors, including
the amount of actual ambiguity—which can be molded to fit one’s expectations. If the information in
the sample is unambiguous (e.g., the blood type test reads O negative), then it is almost impossible for
sharpening or levelling to change the perception and interpretation (to make it AB positive). If,
however, the information is ambiguous (e.g. a small amount of DNA, a degraded or mixed sample,
allelic drop-ins and drop-outs, stutters, peak-height imbalances), then certain bits of this ambiguous
information may seem more relevant than others in light of extraneous information (e.g. a suspect's profile).
Evidence found at a crime scene is often of degraded quality and quantity, and includes noise, distortions and other factors that often make it ambiguous.
Many forensic analysts are exposed to potentially corrosive domain-irrelevant information in their
day-to-day work.
Not infrequently, forensic analysts are told about other strands of (potential)
evidence, the investigator’s background knowledge of the case and the suspect (e.g. criminal history
or admissions), as well as results from other forensic analyses. Given the inevitability of errors and
biases in human interpretation, it seems reasonable to assume that forensic analysts, like the rest of us,
are susceptible to expectancy effects and cognitive biases. Where there is vulnerability, the only way to
avoid being influenced inappropriately is to restrict access to domain-irrelevant information (or, more
precisely, information with the potential to mislead). Thompson and Dror propose the separation of a
forensic scientist’s role into either a ‘case manager’—who communicates with investigators, may help
to decide what specimens to collect from the crime scene, and manages the workflow and tasks
assigned in the laboratory (e.g., what to sample and which assays to run)—or an ‘analyst’—who
performs analyses (e.g., comparisons of trace evidence) according to instructions.
This separation of
DROR, I. E. & HAMPIKIAN, G. (2011). Subjectivity and Bias in Forensic DNA Mixture Interpretation. Science & Justice, 51, 204–208; THOMPSON, W. (2009). Painting the Target around the Matching Profile: The Texas Sharpshooter Fallacy in Forensic DNA Interpretation. Law, Probability & Risk, 8, 257; NAKHAEIZADEH, S., DROR, I. E. & MORGAN, R. (2014). Cognitive Bias in Forensic Anthropology: Visual Assessment of Skeletal Remains is Susceptible to Confirmation Bias. Science & Justice, 54.
GILOVICH,T.(1991).How We know What isn’t so: The Fallibility of Human Reason in Everyday Life. Free Press, New York.
See DROR, I. E., et al. (2011). Cognitive Issues in Fingerprint Analysis: Inter- and Intra-Expert Consistency and the Effect of a 'Target' Comparison. Forensic Science International, 208, 10–17 for evidence of performance changing from one time to the next.
THOMPSON, W. C. (2013). Forensic DNA Evidence: The Myth of Infallibility. In: Genetic Explanations: Sense and Nonsense
(S. Krimsky & J. Gruber eds.). Harvard University Press, Cambridge, MA.
See BUTT, L. (2013). The Forensic Confirmation Bias: Problems, Perspectives, and Proposed Solutions: Commentary by a
Forensic Examiner. Journal of Applied Research in Memory & Cognition, 2, 59–60, for an analyst’s perspective on the case
information available to the analyst.
THOMPSON, W. C. (2011). What Role should Investigative Facts Play in the Evaluation of Scientific Evidence? Australian Journal of Forensic Sciences, 43, 123–134; DROR, I. E. (2014). Practical Solutions to Cognitive and Human Factor Challenges in Forensic Science. Forensic Science Policy & Management, 4, 105–113.
roles facilitates blind analysis while allowing analysts to have access to appropriate information, thereby ensuring that the case manager (and the institution) is informed about the overall case.
Another proposal involves 'sequential unmasking', where information is gradually revealed to the
analyst. For example, an analyst might conduct an initial examination of the trace evidence and limit
their interpretation to the legible or salient parts of the sample before comparing it to the suspect (e.g.,
recording the possible genotypes of all possible contributors to a mixed DNA sample before learning
about the profile of the victim and any suspects). Of course, the process will depend on the type of trace
evidence, but the basic idea is to keep the analyst blind to potentially biasing information for as long as
possible, and for the analyst to document retrospective adjustments to the interpretation in the report.
The effectiveness of these approaches and the circumstances in which blinding is considered to be
necessary can really only be refined through further research pinpointing the specific types of case
information that may be harmful to performance (e.g., information that reduces the overall accuracy
of analysts' decisions) and the kinds of traces and samples that might, in contrast to the blood typing
example, represent genuine threats to interpretation. While threats to cognition are ubiquitous, not every
technique and not all samples will require blinding. Nevertheless, as Section 2.3 affirms, at this point in
time, there is insufficient evidence for us to ignore or discount the threat to most types of interpretation.
A range of additional, sometimes subtle, ideological factors may influence, consciously or otherwise,
the conduct and performance of forensic analysts. For example, a recent study showed that
forensic examiners are biased by knowledge of the side that retained their services, such that
identical evidence is interpreted differently depending on whether they think they are working for the
defense or for the prosecution.
Analysts might also be outraged by the crime, such
that they want to put the 'guilty' suspect behind bars. Analysts may be more or less interested in crimes
against persons of particular ethnic groups, genders, social classes or employment groups (e.g. prostitutes).
Analysts might draw upon institutional philosophies (such as the need to 'think dirty') or
pervasive beliefs—such as the belief that multiple infant deaths in the one family are compelling evidence of child
abuse and even homicide (e.g. Meadow's law).
There may be public, media and political, as well as
institutional pressure to identify an offender. In serious cases, analysts might be subjected to work
environments (e.g. long hours) that adversely influence performance or make it difficult to devote the
necessary time and resources to other types of investigations, particularly high volume crimes.
Many forensic science laboratories are faced with organizational constraints that, by design, create
conditions that are likely to increase the chances of analysts encountering domain-irrelevant
Some information, although relevant, may have biasing effects. Such cases are more complex. If information is not relevant,
then examiners need not have it, which is relatively straightforward. A case in which the information is relevant but also biasing,
however, is tricky. In such cases, the relative importance of the information ought to be weighed against the potential biasing
effects. See DROR, I. E. (2012). Combating Bias: The Next Step in Fighting Cognitive and Psychological Contamination. Journal
of Forensic Sciences, 57, 276–277.
See KRANE, D., et al. (2008). Sequential Unmasking: A Means of Minimizing Observer Effects in Forensic DNA Interpretation. Journal of Forensic Science, 53, 1006.
ROSENTHAL, R. & RUBIN, D. B. (1978). Interpersonal Expectancy Effects: The First 345 Studies. The Behavioural and Brain Sciences, 3, 377–386.
MURRIE, D. C., BOCCACCINI, M. T., GUARNERA, L. A. & RUFINO, K. A. (2013). Are Forensic Experts Biased by the Side that Retained Them? Psychological Science, 24, 1889–1897.
GOUDGE, S. (2008). Inquiry into Pediatric Forensic Pathology. Queen's Printer, Toronto; CUNLIFFE, E. (2011). Murder, Medicine and Motherhood. Hart Publishing, Oxford.
See Expert Working Group on Human Factors in Latent Print Analysis. (2012). Latent Print Examination and Human
Factors: Improving the Practice through a Systems Approach. National Institute of Standards and Technology, Washington, DC
(hereafter EWG, Latent Print Examination and Human Factors). Dror was a member of this working group.
information. For example, in some jurisdictions, forensic analysts across several disciplines (e.g.,
image analysts and fingerprint examiners) share the same management hierarchy, belong to the
same team or workgroup, and in some instances share office facilities and workspace. Such environ-
ments can make it difficult for analysts to avoid even inadvertent exposure to domain-irrelevant
information. Some analysts are involved in the end-to-end collection and processing of evidence
(e.g. many fire investigators)—thereby exposing them to the crime scene, other investigators and
additional aspects of the case. Such institutional arrangements threaten the independence and, poten-
tially, the value of resulting interpretations.
There have been endogenous attempts to reduce errors. Some forensic analysts have modified their
procedures in an attempt to catch problems at the ‘back end’, before results are formally reported. One
such attempt is ‘verification’. Such internal safeguards are not always well-suited to enhancing per-
formance (e.g. by reducing the impact of context) or identifying subtle influences and errors. With
verification, for example, the second analyst may not be blinded to (i.e., they are aware of) the outcome
of the first analyst’s assessment, because they are (in)formally told or because a particular finding is
suggested—e.g. where 'no match' cases (i.e. the first analyst has not declared an identification) are
subject to verification. Knowing the outcome of an earlier assessment, or the beliefs of other inves-
tigators, is likely to influence an analyst (recall confirmation bias and anchoring effects) regardless of
how hard she tries to resist.
Yet, such verification (or peer review, as it is sometimes styled) is
commonly presented, and accepted, as a guarantee of the reliability of interpretations and opinions.
The organization of many forensic science institutions and their workflows unnecessarily exposes
the humans working in them, particularly those involved in interpretation and analysis, to real threats
from contextual influences and contamination.
While the impact of these effects on the vast majority
of techniques is unknown, preliminary studies (e.g. research by Dror) suggest that threats from
contextual influences, with the potential to erode the probative value of expert evidence, including
evidence derived using techniques that are (otherwise) demonstrably reliable, are real and warrant
institutional responses.
2.3 Authoritative reviews of the forensic sciences
Several recent reviews reinforce the orthodox nature of our concerns. In the remainder of this intro-
duction, we succinctly advert to recent recommendations by peak scientific bodies and independent
judicial inquiries.
In 2006, following congressional appropriation, the U.S. National Academy of Sciences (NAS)
formed a multidisciplinary National Research Council (NRC) committee to review the condition of the
In the Mayfield misattribution, two senior analysts ‘verified’ the mis-identification.
On peer review and its limitations, see EDMOND, G. (2008). Judging the Scientific and Medical Literature: Some Legal Implications of Changes to Biomedical Research and Publication. Oxford Journal of Legal Studies, 28, 523.
For a review of the scientific literature regarding contextual influences in forensic science, see KASSIN, S., DROR, I. E. & KUKUCKA, J. (2013). The Forensic Confirmation Bias: Problems, Perspectives, and Proposed Solutions. Journal of Applied Research in Memory and Cognition, 2, 42–52.
The existence of bias does not mean that the conclusion is erroneous. There may be a need to distinguish between the
decision process and the decision outcome (forensic conclusion). Bias affects the decision-making process. Whether it alters the
final conclusion depends on the level and direction of the bias, as well as the difficulty of the forensic comparisons (as they
become more difficult—closer to the decision threshold—the more likely bias will affect the final conclusion). However, even in
cases in which bias does not alter the decision outcome, it still affects examiners' confidence in the conclusion and how it is
presented in court. See DROR, I. E. (2009). On Proper Research and Understanding of the Interplay between Bias and Decision
Outcomes. Forensic Science International, 191, 17–18.
forensic sciences. The result of that inquiry, 'Strengthening Forensic Science in the United States:
A Path Forward' (2009), was remarkably critical in tone.
To its surprise, the Committee encountered
serious problems across the forensic sciences and expressed doubts about the evidentiary value of
some forensic science techniques in regular use, particularly the non-DNA identification sciences.
Specifically addressing the issue of bias and the need for research, the NRC committee insisted that:
a body of research is required to establish the limits and measures of performance and to
address the impact of sources of variability and potential bias. Such research is sorely
needed, but it seems to be lacking in most of the forensic disciplines that rely on subjective
assessments of matching characteristics. These disciplines need to develop rigorous proto-
cols to guide these subjective interpretations and pursue equally rigorous research and
evaluation programs. The development of such research programs can benefit signifi-
cantly from other areas, notably from the large body of research on the evaluation of
observer performance in diagnostic medicine and from the findings of cognitive psych-
ology on the potential for bias and error in human observers.
The NAS report further recommended establishing a National Institute of Forensic Science (NIFS)
that, in addition to sponsoring and supervising validation studies, determining error rates, developing
empirically driven standards and probabilistic forms of reporting results, would address contextual
bias and threats to interpretations of evidence through attention to psychological research and revised
procedures. The Committee recommended research on bias and the reform of institutional procedures
and workflows.
Recommendation 5, for example, states:
The National Institute of Forensic Science (NIFS) should encourage research programs on
human observer bias and sources of human error in forensic examinations. Such