Open Access Journal of Forensic Psychology
http://www.forensicpsychologyunbound.ws/ – 2009. 1: 1-4
A Call for Evidence-Based Security Tools
Ewout H. Meijer, Faculty of Psychology and Neuroscience, Maastricht University,
Maastricht, The Netherlands. Email: eh.meijer@maastrichtuniversity.nl. Bruno
Verschuere, Department of Psychology, Ghent University, Ghent, Belgium. Aldert
Vrij, Psychology Department, University of Portsmouth, Portsmouth, UK. Harald
Merckelbach, Faculty of Psychology and Neuroscience, Maastricht University,
Maastricht, The Netherlands. Fren Smulders, Faculty of Psychology and
Neuroscience, Maastricht University, Maastricht, The Netherlands. Sharon Leal,
Psychology Department, University of Portsmouth, Portsmouth, UK. Gershon Ben-
Shakhar, Department of Psychology, The Hebrew University, Jerusalem, Israel.
Pär Anders Granhag, Department of Psychology, Göteborg University, Göteborg,
Sweden. Matthias Gamer, Department of Systems Neuroscience, University
Medical Center Hamburg-Eppendorf, Hamburg, Germany. Nurit Gronau,
Department of Education and Psychology, The Open University of Israel, Israel.
Gerhard Vossel, Department of Psychology, Johannes Gutenberg University
Mainz, Mainz, Germany. Geert Crombez, Department of Psychology, Ghent
University, Ghent, Belgium. Sean Spence, School of Medicine and Biomedical
Sciences, The University of Sheffield, Sheffield, UK.
Abstract: Since the 2001 attacks on the Twin Towers, security policies have changed drastically, bringing about an increased need for tools that allow for the detection of deception. Many of the solutions offered today, however, lack scientific underpinning. We recommend two important changes to improve the (cost-)effectiveness of security policy. First, the emphasis of deception research should shift from the technological to the behavioural sciences. Second, the burden of proof should lie with the manufacturers of the security tools. Governments should not rely on security tools that have not passed scientific scrutiny, and should only employ those methods that have been proven effective. After all, the use of tools that do not work will only get us further from the truth.
Keywords: security policy, lie detection, deception detection, behavioral science,
terrorism
¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯
OAJFP – ISSN 1948-5115 – Volume 1, 2009.
Recently, the peer-reviewed journal The International Journal of Speech, Language and the Law retracted an article that unfavourably reviewed voice-stress-analysis software. This software analyzes a speaker's voice, and its manufacturer claims that it can be used for truth verification (see the Nemesysco website at www.lva650.com and http://security.nemesysco.com/gk1.html). Examples of its use include airport screening (Moscow Domodedovo Airport, 2006) and the evaluation of benefit claims by social services ("Lie detector to target claimants," 2007). The editorial board's decision was prompted by the manufacturer's threat to sue for defamation (Cho, 2009). Such intimidation and censoring of academic discussion is alarming. The real problem, however, lies in governments actually using these technologies.
Since the 2001 attacks on the Twin Towers, security policies have changed drastically, bringing about an increased need for tools that allow for the detection of deception. Potential solutions are primarily sought in new methods and technologies. The US Department of Homeland Security funded the development of the Future Attribute Screening Technology (FAST; Barrie, 2008), a set of sensors that can remotely measure multiple physiological signals. The US Transportation Security Administration introduced the Cogito, another device measuring physiological signals, as well as the Screening of Passengers by Observation Techniques (SPOT) program, in which specially trained teams watch travellers for behavioural signs thought to be indicative of deception (Karp & Meckler, 2006). Meanwhile, the US Defense Academy for Credibility Assessment issued the Preliminary Credibility Assessment Screening System (PCASS), yet another device measuring physiological signals, to its soldiers in Afghanistan (Dedman, 2008). Non-US examples of widely used deception-detection techniques include the use of voice-stress-analysis software by British authorities (Cho, 2009) and Scientific Content Analysis (SCAN), one of the world's most widely used methods for detecting deception from written statements (Vrij, 2008). Besides well-chosen acronyms, these methods have one thing in common: They all lack scientific underpinning. None of them is supported by research published in peer-reviewed journals.
In the absence of systematic research, users will base their evaluation on data generated by field use. Because people tend to follow heuristics rather than the rules of probability theory, perceived effectiveness can differ substantially from true effectiveness (Tversky & Kahneman, 1973). For example, one well-known problem associated with field studies is that of selective feedback. Investigative authorities are unlikely to receive feedback from liars who are erroneously considered truthful, yet they will occasionally receive feedback when correctly detecting deception, for example through confessions (Patrick & Iacono, 1991; Vrij, 2008). The perceived effectiveness that follows from this can be further reinforced through confirmation bias: Evidence confirming one's preconception is weighted more heavily than evidence contradicting it (Lord, Ross, & Lepper, 1979). As a result, even techniques that perform at chance level may be perceived as highly effective (Iacono, 1991). This unwarranted confidence can have profound effects on
citizens' safety and civil liberties: Criminals may escape detection, while innocent people may be falsely accused. The Innocence Project (Unvalidated or improper science, no date) demonstrates that unvalidated or improper forensic science can indeed lead to wrongful convictions (see also Saks & Koehler, 2005).
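The distorting effect of selective feedback can be made concrete with a small simulation (an illustrative sketch of ours; the parameter values are hypothetical and not taken from the cited studies). A detector that performs exactly at chance level appears highly accurate when feedback arrives almost exclusively through confessions of correctly flagged liars:

```python
import random

random.seed(1)

def simulate(n_cases=100_000, p_confess=0.3, p_other_evidence=0.02):
    """Perceived accuracy of a chance-level detector when feedback
    arrives mainly through confessions of correctly flagged liars."""
    correct = wrong = 0
    for _ in range(n_cases):
        is_liar = random.random() < 0.5      # half the examinees lie
        flagged = random.random() < 0.5      # detector judges at chance
        # Feedback channel 1: a correctly flagged liar sometimes confesses.
        confessed = is_liar and flagged and random.random() < p_confess
        # Feedback channel 2: independent evidence rarely surfaces.
        revealed = confessed or random.random() < p_other_evidence
        if not revealed:
            continue                          # no feedback, no learning
        if flagged == is_liar:
            correct += 1
        else:
            wrong += 1
    return correct / (correct + wrong)

print(f"perceived accuracy: {simulate():.0%}")  # well above the true 50%
```

Setting `p_other_evidence` to 1.0, so that every case is eventually verified, brings the perceived accuracy back to the true chance level of roughly 50%; it is the selectivity of the feedback, not the detector, that creates the impression of validity.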
We recommend two important changes to improve the (cost-)effectiveness of security policy. First, the emphasis of deception research should shift from the technological to the behavioural sciences. It is the behavioural sciences that can provide insight into the psychological factors underlying deception. For example, many of the methods described above rely on the assumption that deception is accompanied by some kind of heightened emotional arousal. The robustness of this link between deception and emotional arousal, however, has been criticized in the scientific literature for decades. Consequently, it is not the reliable registration of stress that is problematic; it is the assumed relationship between stress and deception that is a weak starting point (Lykken, 1998; National Research Council, 2003; Vrij, Fisher, Mann, & Leal, 2006). This key problem is addressed by the behavioural sciences, not by technology.
Second, the burden of proof should lie with the manufacturers of the security tools. Currently, the evidence for many of these tools relies almost exclusively upon testimonials or undisclosed research performed by the manufacturers themselves. This stands in sharp contrast to scientific practice and the recommendation of the US National Research Council. This council, a committee of distinguished scholars, concluded that research directed at methods for detecting and deterring major security threats should be "conducted and reviewed openly in the manner of other scientific research. Classified and restricted research should be limited only to matters of identifiable national security" (National Research Council, 2003, p. 230; see also Bhattacharjee, 2006).
The government's task of protecting its citizens comes with responsibilities. One of these responsibilities entails that decisions about matters with significant potential social or personal implications are based on informed quantitative reasoning (Smith, 1996). Governments should not rely on security tools that have not passed scientific scrutiny, and should only employ those methods that have been proven effective. After all, the use of tools that do not work will only get us further from the truth.
Acknowledgements: The authors form The European Consortium of
Psychological Research On Deception Detection (EPRODD; www.eprodd.net).
References
Barrie, A. (2008, September 23). Homeland Security detects terrorist threats by
reading your mind. Retrieved 7/7/09 from
http://www.foxnews.com/story/0,2933,426485,00.html.
Bhattacharjee, Y. (2006). Scientific openness: Should academics self-censor their findings on terrorism? Science, 312, 993-994.
Cho, A. (2009). Forensic science: Journal flinches as article on voice analyzer
sparks lawsuit threat. Science, 323, 863.
Dedman, B. (2008, April 9). New anti-terror weapon: Hand-held lie detector.
Retrieved 7/7/09 from http://www.msnbc.msn.com/id/23926278/.
Domodedovo International Airport’s clarifications on GK-1 voice profiling
technology application. (2006, April 14). Press release retrieved 7/7/09 from
http://www.domodedovo.ru/en/main/news/press_rel/?ID=1308.
Iacono, W. G. (1991). Can we determine the accuracy of polygraph tests? In J. R.
Jennings, P. K. Ackles & M. G. H. Coles (Eds.), Advances in
Psychophysiology (Vol. 4, pp. 201-207). London: Jessica Kingsley
Publishers.
Karp, J., & Meckler, L. (2006). Which travelers have 'hostile intent'? Biometric device may have the answer. The Wall Street Journal, p. B1.
Lie detector to target claimants. (2007, November 20). BBC News. Retrieved
7/7/09 from http://news.bbc.co.uk/2/hi/uk_news/7102920.stm.
Lord, C. G., Ross, L., & Lepper, M. R. (1979). Biased assimilation and attitude
polarization: The effects of prior theories on subsequently considered
evidence. Journal of Personality and Social Psychology, 37, 2098-2109.
Lykken, D. T. (1998). A tremor in the blood. New York: Plenum Press.
National Research Council. (2003). The polygraph and lie detection. Committee to Review the Scientific Evidence on the Polygraph, Division of Behavioral and Social Sciences and Education. Washington, DC: The National Academies Press. Retrieved 7/7/09 from http://www.nap.edu/openbook.php?record_id=10420&page=212.
Patrick, C. J., & Iacono, W. G. (1991). Validity of the control question polygraph
test: The problem of sampling bias. Journal of Applied Psychology, 76, 229-
238.
Saks, M. J., & Koehler, J. J. (2005). The coming paradigm shift in forensic
identification science. Science, 309, 892-895.
Smith, A. F. M. (1996). Mad cows and ecstasy: Chance and choice in an evidence-
based society. Journal of the Royal Statistical Society A, 159, 367-383.
Tversky, A., & Kahneman, D. (1973). Judgment under uncertainty: Heuristics and biases. Oregon Research Institute Research Bulletin, 13(1).
Unvalidated or improper science (no date). Retrieved 7/7/09 from
http://www.innocenceproject.org/understand/Unreliable-Limited-
Science.php.
Vrij, A. (2008). Detecting lies and deceit: Pitfalls and opportunities. Chichester: Wiley.
Vrij, A., Fisher, R., Mann, S., & Leal, S. (2006). Detecting deception by
manipulating cognitive load. Trends in Cognitive Sciences, 10, 141-142.