Intelligent Deception Detection through Machine
Based Interviewing
James O’Shea1, Keeley Crockett1, Wasiq Khan1, Philippos Kindynis2, Athos Antoniades2 , Georgios Boultadakis3
1School of Computing, Mathematics and Digital Technology,
Manchester Metropolitan University, Manchester, M1 5GD, UK, K.Crockett@mmu.ac.uk
2Stremble Ventures LTD, 59 Christaki Kranou, 4042 Germasogeia, Limassol, Cyprus
3European Dynamics, Brussels
Abstract- In this paper an automatic deception detection system, which analyses participant deception risk scores from non-verbal behaviour captured during an interview conducted by an Avatar, is demonstrated. The system is built on a configuration of artificial neural networks, which are used to detect facial objects and extract non-verbal behaviour in the form of micro gestures over short periods of time. A set of empirical experiments was conducted based on a typical airport security scenario of packing a suitcase. Data was collected from 30 participants taking part in either a truthful or a deceptive scenario while being interviewed by a machine based border guard Avatar. Promising results were achieved using raw unprocessed data on un-optimized classifier neural networks. These indicate that a machine based interviewing technique can elicit non-verbal interviewee behaviour, which allows an automatic system to detect deception.
Keywords- neural networks, avatar, deception detection
I. INTRODUCTION
Border control officers’ tasks rely on bilateral human
interaction such as interviewing an individual traveller using
verbal and non-verbal communication to both provoke
response and interpret the traveler’s responses. Automated
pre-arrival screening could greatly reduce the amount of time
a participant spends at the border crossing point and may
improve security control. Such a system would complement
existing border control technology such as Advanced
Passenger Information systems and future systems such as
the new Entry/Exit System centralized border management system, which will facilitate the automation of the border control process (due for implementation in 2020) [1].
This paper presents initial work on an Automated Deception
Detection system known as ADDS which is powered by a
conversational agent avatar and is capable of quantifying the
degree of deception on the part of the interviewee. ADDS
forms part of the iBorderCtrl (Intelligent Portable Control
System) [2] whose aim is to enable faster and more thorough
border control for third country nationals crossing the land
borders of EU Member States (MS) [2,3]. The final version
of ADDS will utilize an advanced border control agent avatar
which conducts an interview with a traveller. The avatar's attitude will be personalized to communicate with the traveler, including utilizing subtle non-verbal communication cues to stimulate richer responses. A strong focus will be on identifying the impact of non-verbal communication expressed by the avatar on the performance of ADDS.
Nonverbal behaviour is used by humans to communicate
messages, which are transmitted through visual and auditory
features such as facial expressions, gaze, posture, gestures,
touch and non-linguistic vocal sounds [4]. A human being
continually transmits nonverbal behavior, which can be
produced subconsciously, in contrast to spoken language.
The majority of work on the use of non-verbal behaviour
(NVB) to determine a specific cognitive state has been
undertaken by human observers, who are often prone to
fatigue and produce differing subjective opinions. Hence, an
automated solution is preferable. Related, but limited work
has been done in the automated extraction of NVB from a
learning system [5] to detect comprehension levels and also
in detection of guilt and deception [6, 7]. Both of these
examples have used artificial neural networks to first detect
micro gesture patterns and then perform classification
successfully.
Time is also a factor, as interviewers need to interact longer
with travelers to reach a conclusion on their deception intent.
Such time comes at a premium in border control, resulting in short interviews and potentially false-positive results in the field. An
automated system, which utilizes a few minutes of traveler
time at the pre-crossing stage without increasing the amount
of time they spend with a border control agent, could thus
potentially increase efficacy while reducing cost. In this work
deception detection in ADDS is performed by an
implementation of the patented Silent Talker artificial
intelligence based deception detector [6, 7].
The aim of the research presented in this paper was, firstly, to produce a prototype trained artificial neural network (ANN) classifier to be used within the automatic deception detection system; secondly, to investigate whether an avatar-driven, machine based interviewing technique could be developed for a border security application which requires large volumes of interviews. Thus, the research question addressed in this
paper can be stated as:
Can a machine based interviewing technique elicit non-
verbal behavior, which allows an automatic system to detect
deception?
This paper is organized as follows: Section II provides a
description of prior work in the field of deception detection
systems with emphasis on automation. The use of
conversational agents is also examined in the border control
context in terms of being used as avatar interviewers. Section
III describes the ADDS system. Section IV presents the
overall methodology of the data collection process and
describes a series of border control scenarios, which are used
to simulate truthful and deceptive behaviour of participants.
Results and findings of a series of experiments are
highlighted in Section V. Section VI presents the conclusions
and future directions.
II. PRIOR WORK
A) Deception Detection Systems
Human interest in detecting deception has a long history.
The earliest records date back to the Hindu Dharmasastra of Gautama (900-600 BC) and the Greek philosopher Diogenes (412-323 BC), according to Trovillo (1939).
Today, the best-known method is the Polygraph [8], which was invented by John Augustus Larson in 1921 to detect lies by measuring physiological changes related to stress.
Polygraph is a recording instrument, which displays
physiological changes such as pulse rate, blood pressure, and
respiration, in a form where they can be interpreted by a
trained examiner as indicating truthful or deceptive
behaviour. A polygraph test takes a minimum of 1.5 hours
but can take up to four hours depending on the issue being
tested for [8]. Individual scientific studies can be found which
support [9] or deny [10] the validity of the Polygraph. A
meta-study [11] conducted in 1985 found that 10 studies from a pool of 250 were sufficiently rigorous to be included. From
these they concluded that under very narrow conditions, the
Controlled Question Test (CQT - the standard Polygraph test
that could be used at border crossings) could perform
significantly better than chance, but these results would still
contain substantial numbers of false positive, false negative
and inconclusive classifications. They also stated that many
conditions needed to achieve this might be beyond the control
of the examiner. Constructing a good set of control questions
for this test requires substantial information about the
interviewee's background, occupation, work record and
criminal record to be collected before the exam. The
polygraph requires physiological sensors on the traveler that
would make both the set-up time and cost of an interview
prohibitively expensive to apply to all travelers, thus typically
if it is used, it is at a secondary stage for high-risk travelers.
Functional Magnetic Resonance Imaging (fMRI) is a
technique that measures changes in activity of areas of the
brain indirectly by measuring blood flow (which changes to
supply more oxygen to active areas of the brain). It has been
proposed that there are reliable relationships between patterns
of brain activation and deception that can be measured by
fMRI. It has also been reported that although fMRI is seen as
overcoming some weaknesses of the Polygraph, for example
by having an explanatory model based on cognitive load [12]
it is highly vulnerable to countermeasures (in common with
EEG-based approaches).
Voice Stress Analysis (VSA) is a technique that analyses
physical properties of a speech signal as opposed to the
semantic content. The technique is fundamentally based on
the idea that a deceiver is under stress when telling a lie and
that the pitch of the voice is affected by stress. More
specifically, it claims that micro tremors, small frequency
modulations in the human voice, are produced by the autonomic (involuntary) nervous system when an interviewee is lying. There have also been claims that the
increased cognitive load of deception creates micro tremors
[13]. The weight of scientific analysis is that, whatever the
assumed underlying model, VSA performs no better than
chance and has been described as “charlatanry” [14].
The most recent work in this area is contained in the
INTERSPEECH 2016 Computational Paralinguistics
Challenge: Deception, Sincerity & Native Language.
Inspection of a sample of responses to the 2016 challenge
shows them to be either paralinguistic, phonemic or a
combination of the two, e.g. the Low Level Descriptors such
as psychoacoustic spectral sharpness or phonetic features
such as phonemes [15]. These techniques achieved approximately 67% under the "Unweighted Average Recall" measure, intended to take account of the fact that the Deceptive Speech Database (DSD) from the University of Arizona was unbalanced (the test set contained 24% deceptive / 76% truthful classes). We have not found
evidence of a significant degree of paralinguistic research
outside English.
Facial Microexpressions are short-lived, unexpected
expressions. There is said to be a small “universal” set of
expressions of extreme emotion: disgust, anger, fear, sadness,
happiness, surprise, and contempt, meaning they are common
across cultures. A formalized method of encoding micro
expressions was defined by Paul Ekman, who developed
commercial tools for training interviewers to recognize them
[16]. One of the resources is a manual on a Facial Action
Coding System for training in expression recognition. This
has generated a large body of research in automating FACS for applications such as lie detection. Virtually all of the findings from micro expression studies are closer to a Concealed Knowledge Test (CKT) than genuine lie detection, so they do not constitute persuasive evidence for using the technique at border crossings.
B) Automated Deception Detection
Silent Talker (ST) was designed to answer the criticisms
of the psychology community that there are no meaningful
single non-verbal indicators of deception (such as averted
gaze), by combining information from many (typically 40)
fine-grained nonverbal channels simultaneously and learning
(through Artificial Neural Networks) to generalize about
deceptive NVB from examples [6, 7]. In this respect, it does
not depend on an underlying explanatory model in the same
way as other lie detectors. However, it does have a conceptual
model of NVB. This model assumes that certain mental states
associated with deceptive behaviour will drive an
interviewee’s NVB when deceiving. These include Stress or
Anxiety (factors in psychological Arousal), Cognitive Load,
Behavioral Control and Duping Delight. Stress and Anxiety
are highly related, if not identical states. The key feature of
ST, as a machine learning system, is that it takes a set of candidate features as input and determines for itself which interactions between them, over time, indicate lying. Thus it is not susceptible to errors caused by whether particular psychologists are correct about particular NVB gestures.
Evidence to date is that no individual feature can be
identified as a good indicator, only ensembles of features
over a time interval provide effective classification. Early
experiments with ST showed classification rates of between
74% and 87% (p<0.001) depending on the experimental
condition [6]. There are no single, simple indicators of
deception; ST uses complex interactions between multiple
channels of microgestures over time to determine whether the
behaviour is truthful or deceptive. A microgesture is a very fine-grained non-verbal gesture, such as the movement of one eye from fully-open to half-open. This gesture could be combined with the same eye moving from half-open to closed, indicating a wink or blink. Over a time interval, e.g. 3
seconds, complex combinations of microgestures can be
mined from the interviewee’s behaviour. Microgestures are
significantly different from micro-expressions (proposed in other systems), because they are much more fine-grained and require no functional psychological model of why the behaviour has taken place [6].
C) Conversational Agents in the Border Control context.
A Conversational Agent (CA) is an AI system that engages a
human user in conversation to achieve some practical goal,
usually a task perceived as challenging by the user. Embodied
CAs offer the opportunity of more sophisticated
communication through gesture and supplementing the
dialogue with non-verbal communication [17]. The persona
of an embodied CA is referred to as an Avatar and there is
(limited) evidence supporting the use of an Avatar
interviewer for automated border crossing control.
Nunamaker [18] reported a group of experiments,
culminating in an attempt to smuggle a concealed bomb past
an avatar interviewer. These, collectively, suggest that an
avatar can simulate affective signals during dialogue, can
have a definable persona (gender, appearance) and can elicit
cues to deception. In practice, such systems tend to rely on
vocal features [18] or electrodermal activity and measure
arousal as a proxy for deception. Hooi & Cho [19] have
reported that perceived similarity of appearance between the
avatar and interviewee reduces deceptive behaviour.
Furthermore, Ströfer et al. [20] observed that when interviewees believe that the avatar they are interacting with is controlled by a human, they produce more pronounced physiological (electrodermal) responses, believed to indicate deception.
In a cognitive neuroscience review, de Borst and de Gelder [21] reported that human-like avatars that move realistically are more likeable and perceived as similar to real humans.
This prior work suggests a strong potential for the use of
avatars in border control interviews and the need for
substantial research into the influential factors. The state of the art of this combination of technologies suggests that Avatars will be suitable for detecting deception in border crossing interviews: firstly, they are effective extractors of information from humans [22] and therefore can be applied to deception detection tasks. Secondly, they can provide
dynamic responses to user inputs and can simulate affective
signals [23].
III. AUTOMATIC DECEPTION DETECTION SYSTEM
Figure 1 presents an abstract view of the ADDS architecture as seen from within the final iBorderCtrl system. Each traveler
who engages with the function of pre-traveller registration
will be required (subject to providing informed consent) to
undertake an interview with an avatar. In the final system the
Avatar will adapt its attitude based upon the level of
deception detected by ADDS on a question by question basis.
For the purpose of training the neural network classifiers within ADDS in this paper, a still image was used for the Avatar.
Fig.1. Automated Deception Detection System Architecture
Also, for the purpose of the research conducted in this paper, the traveler information for the simulated border crossing interview was captured through a series of scenarios (for deceptive participants) and a post-experiment questionnaire. This information was then used to populate a local database. In practice, the ADDS API will receive encrypted
information about a specific traveller from the iBorderCtrl
control system database and populate an instance of a
trip/traveller in the ADDS back end database server.
Classification was performed by the Silent Talker component of ADDS using an empirically determined risk level. The Silent Talker component outputs the score for each of the questions and its associated classification, the whole interview (score and classification) and the confirmation radio button responses. These outputs updated the ADDS back end database server.
In the final system, the ADDS Control module will use the
risk scores to change the avatar attitude when the next
question is asked to the traveller. In this work, the risk scores
and classifications were simply stored in the ADDS local
database for training, testing and validating the neural
network classifiers.
A) Silent Talker
This work is specifically focused on the application specific
development of the Silent Talker (ST) system (Fig.2.). ADDS
utilizes 38 input channels to the deception network. They fall
into 4 categories: eye data, face data, face angle data and
'other'.
Fig.2. Silent Talker component in ADDS
ST uses features extracted from the non-verbal behaviour
(NVB) of interviewees to determine whether they are lying
or telling the truth. In this study, the application specific ST first receives a video stream (from the mobile app or web client) for classification. The video arrives as a sequence of frames; each frame is processed in turn, and the information from the frames is compiled and accumulated for the purpose of classification. The deception classifiers used in this paper
are multi-layer perceptrons producing a continuous output in
the range -1 to +1. Empirically determined thresholding of
this output was used for the truthful and deceptive
classifications. The consequence of this is that some frame
sequences will be labelled as “unclassifiable.” If a single
decision boundary were used, these would have outputs too
close to the decision boundary to justify confidence in them.
A simplified description of the components in Figure 2 now
follows:
Object Locators: Each object locator finds the position of a
particular object (e.g. the head, the left eye, the right eye etc.) in the current video frame. A typical object locator would consist of a back propagation Artificial Neural Network
(ANN) trained with samples extracted from video collected
during a training experiment.
Pattern Detectors: Pattern detectors detect particular states
of objects located by the object locators. For example, for the
left eye: left eye closed is true when the left eye is closed (1),
otherwise false (0), left eye half closed is true when the left
eye is half closed (1), otherwise false (0), the left eye may be
considered open if neither of these pattern detectors is true.
Channel Coder: The variations in the state of an object determined by a specific pattern detector are referred to as a
“channel”. Channel coding is the process of collecting these
variations over a specific time period (i.e. over a specific
number of video frames).
Group Channel Coders: Group channel coding refers to the process of amalgamating and statistically summarizing the
information from the individual channel coders to form a
summary vector, which can be input to the final deception
classifier.
Deception Classifiers: Typically, the deception classifier is a
single ANN trained to classify input vectors from the group
channel coders as either truthful or deceptive. It is also
possible to add other classifiers (for example to detect feelings of guilt) and combine these to obtain higher deception
detection accuracy.
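To make the data flow between these components concrete, the following Python sketch mirrors the five stages. It is an illustrative reconstruction, not the authors' implementation: all function names, the stub channel names and the mean-based summary are assumptions (the real system uses a trained back-propagation ANN at each stage).

import numpy as np

def locate_objects(frame):
    # Object locators: one trained ANN per object in the real system;
    # here a stub returning hypothetical bounding boxes.
    return {"head": (0, 0, 64, 64), "left_eye": (10, 12, 8, 4)}

def detect_patterns(frame, boxes):
    # Pattern detectors: binary states of the located objects, e.g. the
    # left-eye detectors above (if neither fires, the eye counts as open).
    return {"left_eye_closed": 0, "left_eye_half_closed": 1}

def code_channels(per_frame_states):
    # Channel coder: collect each channel's states over the time period.
    channels = {}
    for states in per_frame_states:
        for name, value in states.items():
            channels.setdefault(name, []).append(value)
    return channels

def group_code(channels):
    # Group channel coder: statistically summarize the channels into one
    # fixed-length vector for the classifier (mean used as a stand-in).
    return np.array([np.mean(v) for v in channels.values()])

def score_frames(frames, deception_ann):
    # Deception classifier: a single ANN maps the summary vector to a
    # score in [-1, +1] (-1 truthful, +1 deceptive).
    per_frame = [detect_patterns(f, locate_objects(f)) for f in frames]
    return float(deception_ann(group_code(code_channels(per_frame))))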
B) Avatar
The final ADDS system will use animation to pose each question, which will be personalized for each border guard avatar in accordance with the traveler's non-verbal state. A sample of a border guard avatar posing a question can be found here: http://stremble.com/iBorderCtrl/1/1/1/1.mp4.
However, as the development of ADDS as a system was happening in parallel with the training and validation of the deception detection element, a still image of the male avatar developed by Stremble [24] was used within this work (Figure 3).
air of authority. In this experiment, the avatar is shown as a
still image and the speech is synthesized. One reason for this
was to see if any emotional states were conferred on the
(actually neutral) avatar by participants.
Fig.3. Male Avatar
IV. METHODOLOGY
This section describes the methodology to conduct a
quantitative empirical study of non-verbal behaviour with
samples of volunteer participants concerning truthful and
deceptive conditions. The hypothesis tested was:
H0: A machine based interviewing technique cannot be used
to detect deception from non-verbal behaviour
H1: A machine based interviewing technique can be used to
detect deception from non-verbal behaviour at levels
statistically greater than chance.
A) Overview of data collection process
Following an approved ethical process, data was collected by having participants first perform a simulated deception task, with different levels of arousal induced by varying the task. This
was followed by a recorded interview by the Avatar Border
Guard to collect video data for analysis of non-verbal
behaviour. On completion of the interview, participants
completed a questionnaire containing simple subjective self-
ratings of internal mental states. The experiment utilised a
number of scenarios in order to test the hypothesis. The two
major categories are Scenario Truthful and Scenario
Deceptive.
S1: Scenario Truthful
• All participants will use their true identities as recorded in their identification documents.
• All participants will answer questions about a real relative or friend who is an EU / UK citizen (equivalent of a Sponsor in border questions asked by EU border guards).
• All participants will pack a suitcase with harmless items typical of going on a holiday.
• Participants will answer questions about identity, sponsor and suitcase contents.
• All answers to questions can be answered truthfully.
S2-S5: Scenarios Deceptive
• All participants are given fake identities (male / female) and a short life history.
• All participants are given a short description of a fake relative from the EU.
• Participants are stratified into 1 of 4 blocks. These blocks are intended to have different degrees of emotional intensity / deceptive stakes (in decreasing order of intensity):
o S2: Simulated biohazard infectious disease in test tube with informational video about weaponization
o S3: Simulated biohazard infectious disease in test tube without informational video
o S4: Simulated drug package (soap powder in clear packet)
o S5: Simulated forbidden agriculture / food product, i.e. seeds.
• Participants will answer questions about identity, sponsor and suitcase contents.
All participants engaged in truthful or deceptive scenarios
were required to pack a suitcase. The standard items relevant
to both deceptive and truthful scenarios that are packed are: a
pair of unisex trousers, a T-shirt, shower gel, a tube of
toothpaste, a hand towel and a bottle of perfume in a box. If
a participant took part in a deceptive scenario, then an intervention by a confederate took place, in which the confederate modified the contents of the perfume box with a prohibited item depending on the scenario (S2-S5). The
participant was also shown typical posters of prohibited items
from airport baggage handling areas.
The experimental methodology comprised a pre-interview task which set up the scenario for truth-telling or deception, the interview itself, and a debriefing stage which included certain ethics aspects (confirmation of consent, permissions to use materials etc.) and some subjective ratings of feelings during the interview (e.g. subjective guilt). Each
participant was taken into the debriefing room and asked to read the participant information sheet, invited to ask any questions and then to sign the first part of the participant informed consent document. In the debriefing session, participants completed the second part of the informed consent form to confirm they still consented to their data being used in the study.
B) Questions for scenario
Table I shows the questions that all participants answered
during the experiment. Some of the questions come from the
set of questions actually asked by border guards at the border
crossing point. However, many of these questions are not
practical to ask in the experimental scenario. Therefore, a
methodology was devised to substitute a minimum-sized set
of proxy questions, which cover the same psychological /
cognitive properties. This was found by analyzing a set of
questions that were provided by experts from the Hungarian
National Police Polish Border Guards, State Border Guard of
the Republic of Latvia and TRAINOSE (Greece).
Table I: Experiment Interview Questions

No. | Question
 1  | What is your family name?
 2  | What is in your case?
 3  | Have you seen any posters of prohibited items?
 4  | Are there any items from the lists of prohibited items in your case?
 5  | How many items are in the case?
 6  | If you open the case and show me what is inside, will it confirm that your answers were true?
 7  | What is your first name?
 8  | When were you born?
 9  | Where were you born?
10  | What is your current citizenship?
11  | Please tell me the name of a friend or family member who can confirm your identity?
12  | What is the relationship of this person to you?
13  | Where does this person live?
C) Interview conducted using Wizard of Oz Methodology
Collection of data to train the deception detection component
of ADDS used the established "Wizard of Oz" methodology. In this method (Figure 4), a human, called the "Wizard", manually controls a simulated Avatar so that the experiment is experienced (as closely as possible) by the participants as if they were being interviewed by a real Avatar. In this experiment, the Wizard operated a web app
via Wi-Fi, which controlled the display on the participant’s
screen. The Wizard had access to a GUI allowing the selection of questions, which were played in a window on the participant's screen. During the experiment:
• The participant aligned their face with the camera using on-screen instructions.
• The simulated Avatar maintained a neutral expression.
• The questions were delivered verbally to the participant by the static avatar through text-to-speech recordings.
• Video of the participant was captured on a question-by-question basis and stored for the purposes of training and testing.
The start time of a question was recorded as the moment the Avatar started speaking. Once the participant had finished answering the question, the Wizard clicked to progress to the next question; the time of this click was recorded as the end of the question. The Wizard also had the option of repeating a question if necessary.
Fig. 4. The Wizard-of-Oz (WoZ) experiment
D) Group design and stratification
The video data of participants was recorded using the questions presented in Table I and automatically cropped/segmented into question-by-question video files. Table II shows the dataset for truthful and deceptive participants. The data was captured using the built-in web-cam at the default video resolution of 640×480 and 30 frames per second (fps). The channel data is extracted from each question using a fixed sliding window (slot) of 1 second (i.e. 30 frames), which holds sufficient information about the channel states. Each slot is considered a single vector encoding the information/states for the 38 channels.
Furthermore, a vector is considered only if it is extracted from a valid slot. A valid slot always contains the channel information for the face and both eyes. A detailed explanation of slot validity is given in previous work [6].
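As an illustration of this windowing step, the sketch below segments per-frame channel states into 1-second slots and keeps only the valid ones. It is a simplification under stated assumptions: the validity rule shown ("face and both eyes located in every frame of the slot") stands in for the full criteria of [6], and all names are hypothetical.

import numpy as np

FPS = 30          # frames per second; slot length = 1 second = 30 frames
N_CHANNELS = 38   # channel states extracted per frame

def slot_vectors(channel_frames, face_ok, eyes_ok):
    # channel_frames: (n_frames, N_CHANNELS) array of per-frame channel states.
    # face_ok / eyes_ok: per-frame booleans from the object locators.
    vectors = []
    for start in range(0, len(channel_frames) - FPS + 1, FPS):
        window = slice(start, start + FPS)
        # Simplified validity rule: face and both eyes present in every frame.
        if face_ok[window].all() and eyes_ok[window].all():
            # Encode the 30 frames as a single 38-element slot vector
            # (mean state per channel; the real coding is more elaborate).
            vectors.append(channel_frames[window].mean(axis=0))
    return np.array(vectors)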
Table II: Experimental Dataset

No. of Questions per Interview            | 14
Total Participants                        | 32 (17 Deceptive, 15 Truthful)
Total number of video files               | 448
Deceptive participants                    | Male (10), Female (7); Asian/Arabic (4), EU White (13)
Truthful participants                     | Male (12), Female (3); Asian/Arabic (6), EU White (9)
No. of Channels Analyzed                  | 38
Total truthful vectors in dataset         | 43,051
Total deceptive vectors in dataset        | 43,535
Two strategies were employed for training, testing and validation: bootstrapping and leave-one-out, which are described in Section V. These were used to develop
suitable Artificial Neural Networks for the channels and
stages shown in figure 2. The final classifying ANN (i.e.
deception classifiers) produces an activation, which is
thresholded to determine truthfulness or deceptiveness. This
was not dynamic (i.e. continuously adaptive) but was
optimized manually, based on the suitcase experiment data.
Further manual optimization will be carried out after initial
runs of the pilot. Deception risk scores obtained from the
deception classifiers are bipolar in the range [-1, 1], where
deceptive is +1 and truthful is -1. The deception score for a
question is calculated as:
D_q = \frac{1}{n} \sum_{s=1}^{n} d_s        (1)
where D_q is the deception risk score of the current question, d_s is the deception score of slot s, and n is the total number of valid slots for the current question. Thresholding is then applied as stated below:
IF Question_risk (Dq) <= x THEN
Indicates truthful
ELSE IF Question_risk (Dq) >= y THEN
Indicates deception
ELSE
Indicates unknown
END IF
Here x and y are determined empirically; initial values were x = -0.05 and y = +0.05. The final decision for the question classification (i.e. truthful/deceptive) is made regardless of the quantity of valid slots. In other words, a question is classified as unknown when no valid slot exists (i.e. n = 0) or when the value lies between x and y.
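A minimal sketch of equation (1) and this thresholding rule, assuming the initial threshold values quoted above:

def question_risk(slot_scores, x=-0.05, y=0.05):
    # Classify one question from its valid slot scores (each in [-1, +1]).
    n = len(slot_scores)
    if n == 0:
        return None, "unknown"      # no valid slot exists
    d_q = sum(slot_scores) / n      # equation (1): mean of the slot scores
    if d_q <= x:
        return d_q, "truthful"
    if d_q >= y:
        return d_q, "deceptive"
    return d_q, "unknown"           # too close to the decision boundary

print(question_risk([0.4, -0.1, 0.3]))   # -> (~0.2, 'deceptive')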
V. RESULTS AND FINDINGS
A) Initial Results
The dataset obtained from the group channel coders was fed into the deception classifier, which first used a 10-fold cross-validation strategy to train/validate/test the networks. The percentage split of the entire input data for training, validation and testing was 70:20:10 respectively. There are 38 inputs to the network, with one hidden layer and a single output. Networks were trained with a varying number of neurons in the hidden layer (11-20 in these experiments) to observe the impact on performance. A bipolar sigmoid transfer function was used while training the networks, and the maximum number of epochs was set to 10,000. The aim of this initial work was to establish whether a machine based interviewing technique can be used to detect deception from non-verbal behaviour, so no tuning of the classifiers was attempted. With the exception of removing redundant duplicated vectors, the initial results presented in Table III and Table IV are derived from raw unprocessed data.
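A network matching this description can be sketched as follows. This is an illustrative reconstruction in PyTorch, not the authors' code; the optimizer, learning rate and loss function are assumptions, since the paper specifies only the topology, the bipolar sigmoid activation and the epoch limit.

import torch
import torch.nn as nn

def make_classifier(hidden=20):
    # 38 channel inputs -> one hidden layer (11-20 neurons) -> one output.
    # tanh realizes a "bipolar sigmoid": outputs lie in [-1, +1].
    return nn.Sequential(
        nn.Linear(38, hidden), nn.Tanh(),
        nn.Linear(hidden, 1), nn.Tanh(),
    )

def train(model, vectors, labels, epochs=10_000, lr=1e-3):
    # vectors: (n, 38) float tensor; labels: (n, 1) tensor of -1 (truthful)
    # or +1 (deceptive). Optimizer and learning rate are assumed values.
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(vectors), labels)
        loss.backward()
        opt.step()
    return model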
Table III shows the percentage accuracy of the deception classifiers obtained using a varying number of neurons in the hidden layer and the aforementioned parameter settings. It is observed that network performance gradually increases with the number of neurons. The highest test accuracies were 96.55% for truthful and 96.78% for deceptive classification, obtained with 20 neurons in the hidden layer. The trained networks with the optimum classification accuracy were then used for testing on unseen data.
Table III: Results using 10-Fold Cross Validation (Accuracy %; T = Truthful, D = Deceptive)

Hidden  |   Training    |  Validation   |     Test
Neurons |   T      D    |   T      D    |   T      D
  11    | 94.13  95.04  | 93.68  94.41  | 94.30  93.69
  12    | 94.45  95.63  | 93.75  94.74  | 93.62  94.96
  13    | 94.92  95.77  | 94.41  95.14  | 94.31  95.15
  14    | 94.85  96.26  | 94.29  95.67  | 94.23  95.46
  15    | 96.19  96.19  | 95.40  95.50  | 95.50  95.41
  16    | 96.16  96.40  | 95.58  95.80  | 95.45  95.91
  17    | 96.56  96.98  | 95.90  96.32  | 95.75  96.22
  18    | 96.81  97.17  | 96.14  96.52  | 95.91  96.28
  19    | 97.23  97.11  | 96.48  96.48  | 96.67  96.45
  20    | 97.28  97.50  | 96.53  96.81  | 96.55  96.78
B) Testing classifiers
The strategy used for testing the classifiers is based on
leaving one pair out (i.e. one truthful and one deceptive
participant) for testing while training and validating the
networks on the rest of the participants’ data (30
participants). Then the trained networks performance was
tested using the unseen data of two participants. To examine
the effect of totally unseen participants, 9 experimental runs
were conducted, each involving the random selection of a
pair of test participants (one truthful, one deceptive). Table
IV shows the average test accuracy was measured to be
73.66% for deceptive tests and 75.55% for the truthful tests.
These outcomes indicate a substantial decrease in the
classification accuracy when compared with the
classification outcomes presented in Table III. When using
10-fold cross-validation, the classifiers have seen some of the
material (i.e. image vectors) from a test participant’s
interview in the training set (but training and test sets of
vectors were mutually exclusive). Consequently, the cross-validation approach builds a model containing some of the psychological properties of the people whom it classifies. In the second case (the leave-one-pair-out strategy), it sees no material from the test participants and relies on the commonality between their behavior and the behaviors of the participants used for
training. We postulate that a large number of participants will
build a larger general model, which will improve
classification accuracy on previously unseen cases.
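The participant-level split described here can be illustrated as follows. This is a sketch only: the participant IDs, data layout and random selection routine are hypothetical; its point is that vectors are grouped by participant so nothing from the held-out pair leaks into the training set.

import random

def leave_one_pair_out(truthful_ids, deceptive_ids, vectors_by_id, seed=0):
    # Hold out one truthful and one deceptive participant; train and
    # validate on the remaining participants' vectors.
    rng = random.Random(seed)
    test_ids = {rng.choice(truthful_ids), rng.choice(deceptive_ids)}
    train = [v for pid, vs in vectors_by_id.items()
             if pid not in test_ids for v in vs]
    test = [v for pid in test_ids for v in vectors_by_id[pid]]
    return train, test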
Table IV: Classification Outcomes using Unseen Participants

Test | Truthful Participant | Deceptive Participant | Accuracy (%)
No   | Gender   Ethnicity   | Gender   Ethnicity    | Truthful   Deceptive
  1  |   M        EU        |   M        A/A        |   100         57
  2  |   M        A/A       |   F        EU         |    50         36
  3  |   M        A/A       |   F        EU         |    50        100
  4  |   M        EU        |   F        EU         |    90        100
  5  |   M        A/A       |   M        EU         |   100         10
  6  |   M        EU        |   M        EU         |    72        100
  7  |   M        A/A       |   F        EU         |   100        100
  8  |   F        EU        |   F        A/A        |    38        100
  9  |   M        EU        |   M        EU         |    80         60
Overall Accuracy (%)                                |  75.55      73.66
It is also noted that for these initial experiments there was an insufficient amount of training data. Given the diversity of the participants (e.g. ethnicity, age, gender), a larger dataset would help to further generalize the classification networks. Despite the fair distribution of the overall truthful and deceptive datasets (i.e. approximately 43,000 vectors each), the imbalance of the dataset in terms of ethnicity and gender might influence the deception classification network performance. For instance, the deceptive dataset consists of 4 Asian/Arabic participants compared to 13 white EU. Likewise, in the truthful scenario, there are 12 male compared to only 3 female participants. In addition, the data used in this study was raw (apart from the removal of redundant duplicated vectors); it had not been preprocessed and no tuning of the ANN deception classifiers had taken place.
VI. CONCLUSIONS AND FURTHER WORK
This paper has described the first stage in development of
an automated deception detection system (ADDS) which will
be developed further to be utilized within the iBorderCtrl
(Intelligent Portable Control System). An experiment was
designed and conducted using a number of truthful and
deceptive scenarios to test the hypothesis that a machine
based interviewing technique could be used to detect
deception from non-verbal behaviour during an interview
conducted by a static avatar. The dataset collected for this
experiment contained image vectors from 30 participants and
contained diversity in terms of gender and ethnicity. Raw
experimental participant data was used to train artificial
neural network deception classifiers using two train-test
strategies. The un-optimized networks gave (as expected)
high results when utilizing a cross validation train-test
strategy, whilst obtaining an average classification of 75% on
both truthful and deceptive interviews when using a leave a
pair out strategy. It was noted that given the diversity of the
dataset, it might not have been large enough to train a
classifier more effectively. Future work will involve
capturing more data for diverse population representation and
optimization of the neural network classifiers
ACKNOWLEDGEMENTS
This project has received funding from the European Union’s
Horizon 2020 research and innovation programme under
grant agreement No 700626. The authors would like to thank
the iBorderCtrl consortium members for their feedback in
developing ADDS in this project.
REFERENCES
[1] European Parliament. (2016). Smart Borders: EU Entry/Exit System.
Brussels: European Parliament.
[2] iBorderCtrl Intelligent Portable Control System [online], Available at
http://www.iborderctrl.eu/ [Accessed 12/1/2018],
[3] Crockett, K.A., O'Shea, J., Szekely, Z., Malamou, A., Boultadakis, G. and Zoltan, S., 2017. Do Europe's borders need multi-faceted biometric protection? Biometric Technology Today, 2017(7), pp. 5-8. ISSN 0969-4765.
[4] Hall, J. A. (2007) ‘Nonverbal cues and communication.’ In
Baumeister, R. F. and Vohs,K. D. (eds.) Encyclopedia of Social
Psychology, California: SAGE Publications Inc., pp. 626-627.
[5] Holmes, M., Latham, A., Crockett, K. and O'Shea, J., 2017. Near real-time comprehension classification with artificial neural networks: decoding e-Learner non-verbal behaviour. IEEE Transactions on Learning Technologies, PP(99). DOI: 10.1109/TLT.2017.2754497.
[6] Rothwell, J., Bandar, Z., O'Shea, J. and McLean, D., 2006. Silent talker: a new computer-based system for the analysis of facial cues to deception. Applied Cognitive Psychology, 20(6), 757-777.
[7] Silent Talker Ltd [online], Available at: https://www.silent-
talker.com/ [Accessed 5 Jan. 2018]
[8] International League of Polygraph Examiners (2016), Polygraph/Lie
Detector FAQs. [online]. Available at:
http://www.theilpe.com/faq_eng.html. [Accessed 16/01/2018].
[9] Mangan, D.J., Armitage, T.E. and Adams, G.C., (2008). A field study
on the validity of the Quadri-Track Zone Comparison Technique.
Physiology & behavior, 95(1), 17-23.
[10] Honts, C.R. and Reavy, R., (2015). The comparison question
polygraph test: A contrast of methods and scoring. Physiology &
behavior, 143, 15-26.
[11] Saxe, L., Dougherty, D. and Cross, T., (1985). The validity of
polygraph testing: Scientific analysis and public controversy.
American Psychologist, 40(3), 355.
[12] Meijer, E.H., Verschuere, B., Gamer, M., Merckelbach, H. and Ben‐
Shakhar, G., (2016). Deception detection with behavioral, autonomic,
and neural measures: Conceptual and methodological considerations
that warrant modesty. Psychophysiology.
[13] Walczyk JJ, Igou FP, Dixon AP, Tcholakian T. Advancing lie detection
by inducing cognitive load on liars: a review of relevant theories and
techniques guided by lessons from polygraph-based approaches,
Frontiers in Psychology, 4, 01 February 2013, [online] Available at:
http://dx.doi.org/10.3389/fpsyg.2013.00014 [Accessed 16 Jan. 2018].
[14] Eriksson, A. and Lacerda, F., (2007). Charlatanry in forensic speech
science: A problem to be taken seriously. International Journal of
Speech, Language and the Law, 14(2),169-193.
[15] Herms, R., (2016). Prediction of Deception and Sincerity from Speech
using Automatic Phone Recognition-based Features. Interspeech 2016,
pp.2036-2040.
[16] Ekman, P., (2016). Paul Ekman International Plc. [online] Available
at: http://www.ekmaninternational.com/ [Accessed 18 December
2016].
[17] Cassell, J., 2001. Embodied conversational agents: representation and
intelligence in user interfaces. AI magazine, 22(4), p.67.
[18] Nunamaker, J.F., Derrick, D.C., Elkins, A.C., Burgoon, J.K. and
Patton, M.W., 2011. Embodied conversational agent-based kiosk for
automated interviewing. Journal of Management Information Systems,
28(1), pp.17-48.
[19] Hooi, R. and Cho, H., 2013. Deception in avatar-mediated virtual
environment. Computers in Human Behavior, 29(1), pp.276-284.
[20] Ströfer, S., Ufkes, E.G., Bruijnes, M., Giebels, E. and Noordzij, M.L.,
2016. Interviewing suspects with avatars: Avatars are more effective
when perceived as human. Frontiers in psychology, 7
[21] de Borst, A.W. and de Gelder, B., 2015. Is it the real deal? Perception
of virtual characters versus humans: an affective cognitive
neuroscience perspective. Frontiers in psychology, 6, p.576.
[22] Derrick, D.C., Read, A., Nguyen, C., Callens, A. and De Vreede, G.J.,
2013, January. Automated group facilitation for gathering wide
audience end-user requirements. In System Sciences (HICSS), 2013
46th Hawaii International Conference on (pp. 195-204). IEEE.
[23] Pollina, D.A., Horvath, F., Denver, J.W., Dollins, A.B. and Brown,
T.E., 2009. Development of technologies and test formats for
credibility assessment. Polygraph, p.99.
[24] Stremble Ventures LTD, [online] Available at: http://stremble.com/
[Accessed 5 Jan. 2018]
... 19 Similarly, Collins argues against machine responsibility on the grounds that machines lack "moral self-awareness", which she defines as "a phenomenal belief-like attitude … to the proposition 'I will do wrong'", although she accepts that suitably-organised collectives can nonetheless be responsible on the condition that there are human members of the collective that as a locus for such awareness. 20 , 21 These arguments are essentially ontological in that they proceed with the aim of arguing what machines are not. The basic argument is: ...
... 488. 20 Collins [3]. 21 A similar argument is made by Bernáth [2]. ...
... Such a system, ADDS[20], was a key component of the EUfunded project iBorderCtrl (https:// cordis. europa. ...
Article
Full-text available
Collectives, such as companies, are generally thought to be moral agents and hence capable of being held responsible for what they do. If collectives, being non-human, can be ascribed moral responsibility, then can we do the same for machines? Is it equally the case that machines, particularly intelligent machines, can be held morally responsible for what they choose to do? I consider the conditions required for moral responsibility, and argue that, in terms of the agency condition, artificial, non-human entities in general are excused from being responsible because, although they may choose their actions, the beliefs and desires that form the basis of their choices are predetermined by their designers, placing them in an analogous position to persons suffering covert manipulation. This creates a problem for collective responsibility, but I argue that collectives, through their supervention on human persons, represent an exception. Finally, I consider that the design of future machines may be sufficiently abstract and high-level as to fall below some threshold of influence, allowing machines enough freedom for us to hold them responsible.
... Research performed by [63] presents an automatic system for detecting deception by analyzing non-verbal behavior captured during an interview conducted by an avatar. The system utilizes artificial neural networks to detect facial objects and extract non-verbal behavior, specifically micro-gestures, over short time periods. ...
... Deception Detection Methods and Techniques [15,20,28,31,47] Machine Learning and Artificial Intelligence [16,25,26,29,50,51,[53][54][55][56]58,61,63,65] Psychophysiological Measures and Traditional Methods [32,46,57,[65][66][67][68][69] Behavioral and Linguistic Analysis [21][22][23][24]30,[35][36][37][38][39]70,71] This categorization offers a comprehensive view of the field, enabling researchers to explore the foundations, technological advancements, traditional practices, and linguistic nuances that shape the captivating realm of neural network applications in polygraph scoring for deception detection. ...
Article
Full-text available
Polygraph tests have been used for many years as a means of detecting deception, but their accuracy has been the subject of much debate. In recent years, researchers have explored the use of neural networks in polygraph scoring to improve the accuracy of deception detection. The purpose of this scoping review is to offer a comprehensive overview of the existing research on the subject of neural network applications in scoring polygraph tests. A total of 57 relevant papers were identified and analyzed for this review. The papers were examined for their research focus, methodology, results, and conclusions. The scoping review found that neural networks have shown promise in improving the accuracy of polygraph tests, with some studies reporting significant improvements over traditional methods. However, further research is needed to validate these findings and to determine the most effective ways of integrating neural networks into polygraph testing. The scoping review concludes with a discussion of the current state of the field and suggestions for future research directions.
... While automated deception detection technologies have shown promise in some studies, a signi cant gap remains in the literature -particularly regarding the validation of these technologies within Asian populations (O'Shea et al., 2018;Bittle, 2020). Addressing this gap is crucial for ensuring that these technologies are universally applicable and effective across diverse cultural contexts. ...
Preprint
Full-text available
Despite significant advancements in deception detection, traditional methods often fall short in real-world applications. This study addresses these limitations by evaluating the effectiveness of various physiological measures — pupil response, electrodermal activity (EDA), heart rate (HR), and facial temperature changes — in predicting deception using the Comparison Question Test (CQT). It also fills a critical research gap by validating these methods within an Asian context. Employing a between-subjects design, data was collected from a diverse sample of 118 participants from Singapore, including Chinese, Indian, and Malay individuals. The research aims to identify which physiological indicators, in combination, offer the most robust predictions of deceptive behavior. Key innovations include the adaptation of the CQT with a modified directed lie paradigm and an expanded sample size to assess the relative importance of each physiological measure. The study’s findings reveal that pupil response is the most significant predictor of deception, with EDA enhancing the model’s explanatory power. HR, while relevant, adds limited value when combined with pupil response and EDA, and facial temperature changes were statistically non-significant. The study highlights the need for further research into the interactions among physiological measures and their application in varied contexts. This research contributes valuable insights into improving deception detection methodologies and sets the stage for future investigations that could incorporate additional physiological indicators and explore real-world applications.
... The Silent Talker system and algorithm are patented, and thus many details on its development and the algorithm itself are not publicly available. The few scientific publications on Silent Talker so far have described the development and test of the system in very small, nondiverse samples with fewer than 40 participants, of whom the majority were white European men and women (see [15]). Here, the authors reported high correct classification rates. ...
Article
Whether an interviewee’s honest and deceptive responses can be detected by the signals of facial expressions in videos has been debated and called to be researched. We developed deep learningmodels enabled by computer vision to extract the temporal patterns of job applicants’ facial expressions and head movements to identify self-reported honest and deceptive impression management (IM) tactics from video frames in real asynchronous video interviews. A 12- to 15-min video was recorded for each of the N = 121 job applicants as they answered five structured behavioral interview questions. Each applicant completed a survey to self-evaluate their trustworthiness on four IM measures. Additionally, a field experiment was conducted to compare the concurrent validity associated with self-reported IMs between our modeling and human interviewers. Human interviewers’ performance in predicting these IMmeasures from another subset of 30 videos was obtained by having N = 30 human interviewers evaluate three recordings. Our models explained 91% and 84% of the variance in honest and deceptive IMs, respectively, and showed a stronger correlation with self-reported IMscores compared to human interviewers.
Article
Full-text available
Between 2016 and 2019, the European Union funded the development and testing of a system called “iBorderCtrl”, which aims to help detect illegal migration. Part of iBorderCtrl is an automatic deception detection system (ADDS): Using artificial intelligence, ADDS is designed to calculate the probability of deception by analyzing subtle facial expressions to support the decision-making of border guards. This text explains the operating principle of ADDS and its theoretical foundations. Against this background, possible deficits in the justification of the use of this system are pointed out. Finally, based on empirical findings, potential societal ramifications of an unjustified use of ADDS are discussed.
Article
In order to discover candidates who would work well with the current team and stay around for the long run, employment interviews seek out enough information from applicants to assess their technical talents and skills, personalities, and behavioural patterns. Naturally, candidates will represent themselves in the best possible light, making it difficult to get below the surface and find the real issues. Any type of interview can use the following three elements to help interpret a candidate: personality, performance types, and facial micro-expressions. The interviewer can direct the interview questions and learn about the candidate's personality to determine if the applicant's personality will be a good fit for the job and the team. Determine whether the candidate will be satisfied with the position in the long term by looking at the candidate's performance patterns. Verbal and non-verbal communication takes place during the interview.
Article
Full-text available
This paper will address ethical concerns surrounding the representation of vulnerable groups as well as the methodological challenges inherent in using artificial intelligence and human-like computer-generated characters in human studies that involve representing such groups. Such concerns focus on consequences arising from the technological affordances of new systems for creating narratives, as well as graphical and audio representations that are capable of portraying beings with close resemblance to humans. Enacting such virtual representations of humans inevitably gives rise to important ethical questions: (1) Who has the right to tell certain stories? (2) Is it ethical to change the medium of a narrative and the identity of a protagonist? (3) Do such changes, or technological mediations, affect whether a vulnerable group will be fairly and accurately portrayed? (4) And what are the implications, either way? While the backdrop of the paper involves discussing the potential of virtual representation as a meditative tool for moral and social change, the ethical implications inherent in the use of new cutting-edge technologies, such as OpenAI’s ChatGPT and Unreal Engine’s MetaHuman, to create human-like virtual character narratives call for theoretical scrutiny from a methodological perspective.
Article
Full-text available
La innovación digital en el campo de los controles fronterizos, la migración y el asilo ha devenido progresivamente un tema crucial en los estudios migratorios, debido al aumento exponencial de instrumentos tecnológicos aplicados en estos ámbitos. El texto se centra en una nueva faceta de este fenómeno representado por el creciente interés, advertido dentro y fuera de la UE, en la aplicación de sistemas de inteligencia artificial (IA), como nuevas herramientas experimentadas y utilizadas para apoyar los procedimientos decisionales en los asuntos migratorios. En particular, el texto centra la atención sobre el proyecto iBorderCtrl, a través del cual se ha experimentado un tipo particular de IA, denominada inteligencia artificial emocional, que supone el ingreso de problemáticas inéditas en el terreno de la protección de los derechos fundamentales de los sujetos interesados y en particular de los extranjeros. A estos efectos, tras repasar la rápida difusión de la IA en el ámbito migratorio y perfilar el concepto de IA emocional, como subsistema merecedor de un análisis especifico, se examinará el proyecto iBorderCtrl como caso de estudio, haciendo hincapié en la sentencia del Tribunal General de la Unión Europea, del 15 de diciembre de 2021 (T-158/19), y contextualizándolo a la luz de la Propuesta de Reglamento de la Comisión Europea en materia de IA, valorando las repercusiones que supone la IA emocional en el ámbito de los derechos fundamentales.
Article
Full-text available
It has been consistently demonstrated that deceivers generally can be discriminated from truth tellers by monitoring an increase in their physiological response. But is this still the case when deceivers interact with a virtual avatar? The present research investigated whether the mere “belief” that the virtual avatar is computer or human operated forms a crucial factor for eliciting physiological cues to deception. Participants were interviewed about a transgression they had been seduced to commit, by a human-like virtual avatar. In a between-subject design, participants either deceived or told the truth about this transgression. During the interviews, we measured the physiological responses assessing participants' electrodermal activity (EDA). In line with our hypothesis, EDA differences between deceivers and truth tellers only were significant for participants who believed they interacted with a human operated (compared to a computer operated) avatar. These results have theoretical as well as practical implications which we will discuss.
Article
Full-text available
The detection of deception has attracted increased attention among psychological researchers, legal scholars, and ethicists during the last decade. Much of this has been driven by the possibility of using neuroimaging techniques for lie detection. Yet, neuroimaging studies addressing deception detection are clouded by lack of conceptual clarity and a host of methodological problems that are not unique to neuroimaging. We review the various research paradigms and the dependent measures that have been adopted to study deception and its detection. In doing so, we differentiate between basic research designed to shed light on the neurocognitive mechanisms underlying deceptive behavior and applied research aimed at detecting lies. We also stress the distinction between paradigms attempting to detect deception directly and those attempting to establish involvement by detecting crime-related knowledge, and discuss the methodological difficulties and threats to validity associated with each paradigm. Our conclusion is that the main challenge of future research is to find paradigms that can isolate cognitive factors associated with deception, rather than the discovery of a unique (brain) correlate of lying. We argue that the Comparison Question Test currently applied in many countries has weak scientific validity, which cannot be remedied by using neuroimaging measures. Other paradigms are promising, but the absence of data from ecologically valid studies poses a challenge for legal admissibility of their outcomes.
Article
Full-text available
Recent developments in neuroimaging research support the increased use of naturalistic stimulus material such as film, avatars, or androids. These stimuli allow for a better understanding of how the brain processes information in complex situations while maintaining experimental control. While avatars and androids are well suited to study human cognition, they should not be equated to human stimuli. For example, the uncanny valley hypothesis theorizes that artificial agents with high human-likeness may evoke feelings of eeriness in the human observer. Here we review if, when, and how the perception of human-like avatars and androids differs from the perception of humans and consider how this influences their utilization as stimulus material in social and affective neuroimaging studies. First, we discuss how the appearance of virtual characters affects perception. When stimuli are morphed across categories from non-human to human, the most ambiguous stimuli, rather than the most human-like stimuli, show prolonged classification times and increased eeriness. Human-like to human stimuli show a positive linear relationship with familiarity. Secondly, we show that expressions of emotions in human-like avatars can be perceived similarly to human emotions, with corresponding behavioral, physiological and neuronal activations, with exception of physical dissimilarities. Subsequently, we consider if and when one perceives differences in action representation by artificial agents versus humans. Motor resonance and predictive coding models may account for empirical findings, such as an interference effect on action for observed human-like, natural moving characters. However, the expansion of these models to explain more complex behavior, such as empathy, still needs to be investigated in more detail. Finally, we broaden our outlook to social interaction, where virtual reality stimuli can be utilized to imitate complex social situations.
Conference Paper
System development projects continue to fail at unacceptable rates. Including a wide array of users in the requirements development process for a wide-audience system can help to increase system success. Facilitated group workshops can effectively and efficiently gather requirements from several different users. To decrease cost and increase the number of potential workshop participants, we designed an embodied agent facilitator to guide groups through the facilitation process. We extend previous research, which found human-facilitated prompting to be effective at increasing the completeness of requirements gathered, by replacing the human facilitator with an avatar that administered the same prompts. We hypothesize that the avatar-facilitated group will also show a significant increase in the quality and quantity of requirements gathered, and we find support for this hypothesis.
Article
We have created an automated kiosk that uses embodied intelligent agents to interview individuals and detect changes in arousal, behavior, and cognitive effort by using psychophysiological information systems. In this paper, we describe the system and propose a unique class of intelligent agents, which are described as Special Purpose Embodied Conversational Intelligence with Environmental Sensors (SPECIES). SPECIES agents use heterogeneous sensors to detect human physiology and behavior during interactions, and they affect their environment by influencing human behavior using various embodied states (i.e., gender and demeanor), messages, and recommendations. Based on the SPECIES paradigm, we present three studies that evaluate different portions of the model, and these studies are used as foundational research for the development of the automated kiosk. The first study evaluates human–computer interaction and how SPECIES agents can change perceptions of information systems by varying appearance and demeanor. Instantiations that had the agents embodied as males were perceived as more powerful, while female embodied agents were perceived as more likable. Similarly, smiling agents were perceived as more likable than neutral-demeanor agents. The second study demonstrated that a single sensor measuring vocal pitch provides SPECIES with environmental awareness of human stress and deception. The final study ties the first two studies together and demonstrates an avatar-based kiosk that asks questions and measures the responses using vocalic measurements.
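As a rough illustration of the vocal-pitch sensing the second study relies on, the following sketch estimates per-frame fundamental frequency (F0) from an audio signal by autocorrelation; this is a generic technique, not the SPECIES implementation, and all names and parameter values here are assumptions.

# Sketch: autocorrelation-based fundamental-frequency (pitch) estimate per
# audio frame. Generic illustration only; not the SPECIES code.
import numpy as np

def estimate_f0(frame, sr, fmin=75.0, fmax=300.0):
    """Return an F0 estimate (Hz) for one frame, or None if the frame is too short."""
    frame = frame - frame.mean()
    corr = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lag_lo, lag_hi = int(sr / fmax), int(sr / fmin)  # plausible pitch lags
    if lag_hi >= len(corr):
        return None
    lag = lag_lo + int(np.argmax(corr[lag_lo:lag_hi]))
    return sr / lag

# Usage on a synthetic 180 Hz tone, split into 32 ms frames.
sr = 16000
t = np.arange(sr) / sr
signal = np.sin(2 * np.pi * 180.0 * t)
frames = signal[: (len(signal) // 512) * 512].reshape(-1, 512)
pitches = [estimate_f0(f, sr) for f in frames]
print(f"mean estimated F0: {np.mean(pitches):.1f} Hz")  # ~180 Hz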
Article
Comprehension is an important cognitive state for learning. Human tutors recognise comprehension and non-comprehension states by interpreting learner non-verbal behaviour (NVB). Experienced tutors adapt pedagogy, materials and instruction to provide additional learning scaffolding in the context of perceived learner comprehension. Near real-time assessment of e-learner comprehension of on-screen information could provide a powerful tool both for adaptation within intelligent e-learning platforms and for appraisal of tutorial content for learning analytics. However, the literature suggests that no existing method for automatic classification of learner comprehension by analysis of NVB can provide a practical solution in an e-learning, on-screen context. This paper presents the design, development and evaluation of COMPASS, a novel near real-time comprehension classification system for use in detecting learner comprehension of on-screen information during e-learning activities. COMPASS uses a novel descriptive analysis of learner behaviour, image processing techniques and artificial neural networks to model and classify authentic comprehension-indicative non-verbal behaviour. This paper presents a study in which 44 undergraduate students answered on-screen multiple choice questions relating to computer programming. Using a front-facing USB web camera, the behaviour of each learner was recorded during reading and appraisal of on-screen information. The resultant dataset of non-verbal behaviour and question-answer scores was used to train an artificial neural network (ANN) to classify comprehension and non-comprehension states in near real-time. The trained comprehension classifier achieved a normalised classification accuracy of 75.8%.
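To make the classification step concrete, here is a minimal sketch of training a small feed-forward ANN on per-learner NVB feature vectors to predict comprehension; the feature count, labels, and data are synthetic placeholders, and scikit-learn's MLPClassifier stands in for whatever network topology COMPASS actually uses.

# Sketch: feed-forward ANN classifying comprehension from NVB feature
# vectors. Synthetic data; MLPClassifier is a stand-in, not the COMPASS
# implementation described in the paper.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.random((440, 12))                   # hypothetical: 12 NVB features per episode
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)   # 1 = comprehension, 0 = non-comprehension

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
clf.fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.3f}")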
Article
In today’s world, terrorism has become a dire and global threat. Within Europe, terror attacks and participation in terrorist organisations by EU citizens are on the rise. To deal with this, the European Union has introduced some significant legal changes to the Schengen agreement – the treaty that led to the creation of Europe’s Schengen area, where internal border checks have largely been abolished. The most recent and interesting of these changes has meant that systematic controls are being introduced at border crossings. In effect, from 7 April, a new EU rule (Regulation 2017/458) has reinforced checks against relevant databases at external borders. This makes checking EU citizens and their travel documents against databases compulsory, enhanced with biometric checks where needed. It means that at the border gates, instead of checking EU citizens randomly, they should all be checked. Until now, only third-country citizens came under such a rule, so the new regime will have a very strong impact on the dynamics of Europe’s cross-border traffic. Solutions identified include the pre-arrival registration and biometric identification of people to speed up the process, and wearable intelligent border control equipment.
Article
We conducted a mock crime experiment with 250 paid participants (126 female, median age = 30 years) contrasting the validity of the probable-lie and the directed-lie variants of the Comparison Question Test (CQT) for the detection of deception. Subjects were assigned at random to one of eight conditions in a Guilt (Guilty/Innocent) × Test Type (Probable-lie/Directed-lie) × Stimulation (Between-Repetition Stimulation/No Stimulation) factorial design. The data were scored by an experienced polygraph examiner who was unaware of subject assignment to conditions, and by a computer algorithm known as the Objective Scoring System Version 2 (OSS2). There were substantial main effects of Guilt in both the OSS2 computer scores, F(1, 241) = 143.82, p < .001, ηp² = 0.371, and in the human scoring, F(1, 242) = 98.92, p < .001, ηp² = .29. There were no differences between the test types in the number of spontaneous countermeasure attempts made against them. Although under the controlled conditions of an experiment the probable-lie and the directed-lie variants of the CQT produced equivalent results in terms of detection accuracy, the directed-lie variant has much to recommend it, as it is inherently more standardized in its administration and construction.
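For readers unfamiliar with how such factorial effects are tested, below is a minimal sketch of a two-factor ANOVA (Guilt × Test Type) on synthetic polygraph scores using statsmodels; the data, effect sizes, and column names are invented for illustration and do not reproduce the study's analysis.

# Sketch: two-factor ANOVA (Guilt x Test Type) on synthetic polygraph
# scores. Illustrates the analysis style only; all data are invented.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(0)
n = 60  # participants per cell (hypothetical)
rows = []
for guilt in ("guilty", "innocent"):
    for test in ("probable_lie", "directed_lie"):
        shift = -2.0 if guilt == "guilty" else 2.0  # build in a main effect of Guilt
        rows.append(pd.DataFrame({
            "score": rng.normal(shift, 3.0, n),
            "guilt": guilt,
            "test_type": test,
        }))
df = pd.concat(rows, ignore_index=True)

model = ols("score ~ C(guilt) * C(test_type)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # F and p for each main effect and the interaction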