Lies, Lies and More Lies
Joanne Arciuli (jarciuli@usyd.edu.au)
Faculty of Health Sciences, University of Sydney
Australia
Gina Villar (gina_villar@optusnet.com.au)
Department of Psychology, Charles Sturt University
Australia
David Mallard (dmallard@csu.edu.au)
Department of Psychology, Charles Sturt University
Australia
Abstract
Lying is a deliberate attempt to transmit messages that mislead
others. Here, we examined the frequency of use of the so-
called filler word ‘um’ during lying versus truth-telling in low-
stakes laboratory-elicited lies (Study 1) and also in high-stakes
real-life lies (Study 2). Results from a within-subjects false
opinion paradigm showed that instances of ‘um’ occur less
frequently during lying compared to truth-telling. Converging
evidence was provided upon examining the lies of a convicted
murderer. These results contribute to our understanding of
linguistic markers of deception behaviour. More generally,
they assist in our understanding of the role of utterances such
as ‘um’ in communication. Utterances such as ‘um’ may not
be accurately conceptualised as filled pauses/hesitations or
speech disfluencies/errors whose increased usage coincides
with increased cognitive load or increased arousal. Rather,
they may carry a lexical status similar to interjections and
form an important part of authentic, natural communication, which is somewhat lacking during lying.
Keywords: Deception, Lies.
Linguistic Cues to Deception
Lying has been variously described as threatening the moral
fabric of our society (Bok, 1978) and an important
developmental milestone (deVilliers & deVilliers, 1978)
that may be lacking in some developmental disorders (e.g.,
Autism Spectrum Disorders: Sodian & Frith, 1992).
Certainly, lying is a part of everyday social interactions –
with some studies suggesting that people lie on average
once or twice a day (DePaulo, Kashy, Kirkendol, Wyer, &
Epstein, 1996) – and may be prosocial in certain situations
(Spence et al., 2004). Despite the frequency with which we
are exposed to lies, people’s ability to discriminate lies from truth is no better than chance (Bond & DePaulo, 2006).
This inaccuracy appears to stem from a number of factors
including undue reliance on nonverbal cues such as body
movements (Mann, Vrij & Bull, 2004). A recent study showed that even trained school teachers, social workers and police are poor deception detectors, regardless of whether the liars are 5-6 years of age, adolescents or adults (Vrij, Akehurst, Brown & Mann,
2006). Indeed, assessment of behavioural cues to deception
is fraught with difficulty as these cues may be “subtle,
dynamic and transitory [and therefore] often elude humans’
conscious awareness” (Meservy, Jensen, Kruse, Burgoon &
Nunamaker, 2005). The need for accurate deception detection, given the poor performance of human lie detectors and of other currently available methods (such as the polygraph, which some suggest is more a guilt detector than a lie detector), has led to considerable research attention being focused on improving detection methods using formal, objective procedures. The current study
provides an analysis of a particular aspect of language usage
during lying vs. truth-telling – the prevalence of so-called
filler words such as ‘um’.
To date, researchers have investigated a wide range of
language behaviours in both spoken and written output
including measures of quantity, complexity, uncertainty,
nonimmediacy, expressivity, diversity, redundancy,
informality, specificity, causation and affect (e.g., see Bond
& Lee, 2005; DePaulo, Lindsay, Malone, Muhlenbruck,
Charlton & Cooper, 2003; Newman, Pennebaker, Berry &
Richards, 2003; Rassin & Van Der Heijden, 2005; Sporer &
Schwandt, 2006; Zhou, Burgoon, Nunamaker & Twitchell,
2004; Vrij, Edward, Roberts & Bull, 2000; Vrij & Mann,
2004). Studies that have examined multiple linguistic cues have produced impressive results, with some demonstrating deception detection rates of 67%, significantly better than the chance-level accuracy of human lie detectors (e.g., Newman et al., 2003). A meta-analysis of 120 deception studies conducted by DePaulo et
al. (2003) found that, in general, liars provide fewer details,
make more negative statements, sound more uncertain,
impersonal, evasive and unclear, and produce more words
that distance themselves from their statements and the
person or people to whom they are lying when compared
with truth-tellers. An important challenge for researchers
working in this area is to focus on refining the definition
and assessment of particular linguistic cues and to provide a
more thorough explanation of why they are related to
deceptive behaviour.
It has been suggested that utterances such as ‘um’
constitute filled pauses/hesitations (e.g., Maclay & Osgood,
1959) or production errors that render speech disfluent in a
similar way to repetitions, repairs and false starts (Chomsky,
1965; Clark & Wasow, 1998; Goldman-Eisler, 1968).
Recent research has challenged such notions by suggesting
that these utterances have lexical status like other English
words. Clark and Fox Tree (2002) claimed that utterances
such as ‘um’ have lexical status (a status that is perhaps
similar to the open-class of words termed interjections
which includes items such as ‘alrighty’ and ‘woops’) and
that they have “conventional phonological shapes and
meanings and are governed by the rules of syntax and
prosody” (p. 3). Unrelated to research on deception, Clark
and Fox Tree’s analysis of 170,000 words from 50 face-to-
face conversations demonstrated that speakers exhibit use of
‘um’ when marking delays in speaking (for example, in an
attempt to keep the floor or cede the floor) and that they
plan for, formulate and produce such utterances just as they
would any other English word. A study of speech
recognition in Spanish demonstrated that incorporation of
such utterances as lexical items (rather than noise) in models
of automatic speech recognition improves the recogniser’s
performance (Rodriguez & Torres, 2006).
Researchers in the area of deception have tended to
theorise that ‘um’ may occur more often during lying than
truth-telling. It has been argued that this increased
prevalence may reflect a lack of language planning that
accompanies the increased cognitive load (e.g., related to
effortful monitoring of responses) and/or increased arousal
(e.g., related to heightened feelings of guilt, fear or
excitement) that often occurs during lying (e.g., Hosman &
Wright, 1987; Vrij & Winkel, 1991). In the current study,
we examined the possibility that ‘um’ may, in fact, appear
less often during lying compared to truth-telling. We
speculate that there are two reasons why this might be the
case. The first relies on an assumption that lying is, at least
to some degree, reflective of inauthentic and somewhat less
natural processes compared to truth-telling. If ‘um’ forms a
part of natural, effortless language use then we might expect
to see less of it when language is inauthentic (i.e., during
lying). In this sense, decreased use of ‘um’ during lying
compared to truth-telling may not be under the direct control
of the speaker. The second reason relies on the assumption
that people may monitor their language use very carefully
during lying and try to strategically remove or mask cues to
deception. Thus, liars may deliberately reduce their use of
‘um’ in line with an understanding of ‘um’ being a
hesitation or disfluency reflective of uncertainty (e.g.,
Akehurst, Köhnken, Vrij & Bull, 1996; Vrij & Semin, 1996).
In this sense, decreased use of ‘um’ during lying may be
under the direct control of the speaker. In either case the
result is the same – we would expect to see decreased use of
‘um’ during lying. In a first for deception research, we
examined both low-stakes, laboratory-elicited lies (Study 1)
and high-stakes, real-life lies (Study 2) to determine the role
of ‘um’ as a useful linguistic marker.
Study 1: Low-stakes Laboratory-elicited Lies
We elicited language in the context of an interactive
‘interview’ setting (rather than a monologue) for two
reasons. First, we wanted to ensure a listener was present
because it has been suggested that items such as ‘um’ may
be used, consciously or otherwise, for the listener’s benefit
(as opposed to being reflective of the speaker’s speech-
planning processes). Second, the presence of a conversant
may assist in encouraging speakers to lie convincingly.
Method
Participants A total of 32 participants (22 females and 10
males) with an average age of 20.2 years (SD = 4.8) took
part in exchange for course credit.
Procedure We employed a false opinion paradigm based on
the procedure described by Frank and Ekman (2004) and
participants took part in individual sessions lasting
approximately 30 minutes. At the beginning of the session
each participant was given a social issues questionnaire (on
topics of general interest such as “Should smoking be
banned in all enclosed public places?”). We asked each
participant to provide their opinion on each topic (1 =
strongly disagree, 7 = strongly agree) and to rate how
strongly they personally felt about each issue (1 = no
feelings, 7 = very strong feelings). Based on these responses
we selected two topics for each participant for which
participants held both a strong opinion (of either agreement
or disagreement) and had strong feelings. Wherever
possible, we chose issues for which the participant had
reported an opinion rating of either one or seven and had
also provided a value of seven for personal feelings about
the issue. For one topic participants were asked to give a
truthful account of their views and for the other topic
participants were asked to provide an untruthful account of
their views (i.e., to lie). The two selected topics were
randomly assigned to be either the truthful or the untruthful
account. The experimenter then told the participant that, during a video-taped interview conducted by a different experimenter, they would be asked to lie or to tell the truth about their opinions on some of the social issues that had been presented to them in the social issues questionnaire.
Data Preparation Interviews were transcribed by a blinded
research assistant and checked by a second blinded research
assistant. An excerpt from an interview in which the participant was discussing the topic of same-sex marriage is as follows: “…Um, well I think they’re just like any other
person so um they should just have the same chance as any
other Australian to get married um and it’s sort of up to
them whether or not they want to…”. Tagging was
undertaken by a sound engineer who was blind to the
experimental conditions. In the tagging of ‘um’ instances,
examples of ‘uh’ were not tagged as ‘um’ unless they were
characterised by vowel nasalization (anticipatory
nasalization occurs when speakers intend to close with a
nasal consonant such as /m/).
Results
On average, participants produced 157.61 words when
telling the truth and 174.35 words when lying. A 2
(condition: lying vs. truth-telling) x 2 (sex: female vs. male)
ANOVA revealed no significant effects in terms of total
number of words produced during lying vs. truth-telling (all
Fs < 1). For each participant, we calculated the number of
instances of ‘um’ as a percentage of the total number of
words. Descriptive statistics regarding frequency of ‘um’ (as
a percentage of total output) are provided in Table 1.
Table 1: Mean percentage of ‘um’ (and standard deviations) by sex and condition

              Truth            Lies
Female        2.28 (1.42)      1.61 (1.34)
Male          2.44 (2.54)      1.51 (1.42)
The analysis of the percentage of ‘um’ utterances revealed a
significant main effect of deception (F(1,30) = 10.12, p =
.003) with significantly more instances in the truth-telling
condition. In contrast, there was no main effect of sex and no interaction between sex and deception (both Fs < 1).
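To make the dependent measure and the analysis concrete, the following is a minimal sketch (not the authors’ analysis script): the helper function, the hypothetical per-participant rates and the paired t-test shown are illustrative assumptions, with the paper’s own 2 x 2 ANOVA result quoted for comparison.

```python
from scipy import stats

def um_rate(transcript: str) -> float:
    """Percentage of tokens in a whitespace-tokenised transcript that are 'um'."""
    tokens = [tok.strip(".,?!").lower() for tok in transcript.split()]
    return 100.0 * sum(tok == "um" for tok in tokens) / len(tokens)

# The excerpt quoted in the Data Preparation section contains 3 'um's in 40 tokens.
excerpt = ("Um, well I think they're just like any other person so um they "
           "should just have the same chance as any other Australian to get "
           "married um and it's sort of up to them whether or not they want to")
print(round(um_rate(excerpt), 2))  # 7.5

# With one truthful and one deceptive rate per participant (hypothetical values
# below), the within-subjects contrast can be checked with a paired t-test; the
# paper reports the equivalent main effect from a 2 (condition) x 2 (sex) ANOVA,
# F(1,30) = 10.12, p = .003.
truth_rates = [2.4, 1.9, 3.1, 0.8, 2.7]
lie_rates = [1.5, 1.2, 2.0, 0.9, 1.6]
t_stat, p_value = stats.ttest_rel(truth_rates, lie_rates)
print(round(t_stat, 2), round(p_value, 3))
```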
Study 2: High-stakes Real-life Lies
Four weeks following the disappearance of his pregnant
wife and prior to his subsequent arrest for her murder, key
suspect Scott Peterson gave a series of media interviews
prompting intense and often heated public speculation as to
whether or not he was telling the truth when he protested his
innocence. For several weeks prior to these interviews,
police recorded hours of telephone conversations between
Peterson and his mistress, Amber Frey – a person who
Peterson initially believed was unaware he was even
married, let alone the murderer of his wife and unborn child.
The media interviews and taped telephone conversations all
contained examples of lying and truthful speech. Here we
present an analysis of the telephone conversations.
Method
Participant Scott Lee Peterson, a North American
Caucasian male, was arrested in April, 2003, for the murder
of his wife, Laci Peterson, who disappeared on Christmas
Eve, 2002. Peterson was subsequently charged and, in 2004, convicted under the California Penal Code of the double murder of his pregnant wife and their unborn son.
Peterson had no prior convictions. Peterson was sentenced
to death and, at the time of writing, is on death row in San
Quentin State Prison. He was born in San Diego, California,
on October 24, 1972 and English is his first language.
Peterson’s highest level of academic achievement is a
university degree in agricultural business and prior to his
arrest he was employed as a fertilizer salesman.
Case Details Scott Peterson reported his wife, Laci Peterson, missing from their Modesto, California, home on December 24, 2002. The 27-year-old was due to deliver her first child, to be named Conner, six weeks later. Peterson was interviewed by the police on several occasions and was under police surveillance from early January 2003; search warrants had been issued on his home, vehicles and place of business, and he was clearly a person of interest in the case.
In the first police interview conducted on the day of
Laci’s disappearance, Peterson was asked if he was involved
with another woman, to which he answered no. However,
six days after Laci was reported missing, a Fresno woman
by the name of Amber Frey contacted police to say she had
been having a romantic relationship with Peterson for
several weeks since November 19, 2002. She claimed that during that time Peterson had lied to her about his real circumstances (that he was a widower whose wife had recently died, that he lived in Sacramento, and that he was flying to Paris for business over Christmas) and that she had only learned his real identity, on the day she contacted police, from a friend who recognised Peterson from news reports. Frey agreed to
co-operate with police by secretly taping her telephone
conversations with Peterson from December 31. He
continued to call her throughout the time of the search for
his pregnant wife during December and January, all the
while maintaining the charade of a jet-setting widower.
The same day Frey first came forward (December 30,
2002), police asked Peterson if he had been having a
relationship with another woman and once again he denied
it. A week later, police confronted him with a photograph of
Frey and once again he denied any involvement with her.
Shortly after that (January 6, 2003), Peterson told Frey he
had lied to her about his circumstances and confessed to her
about the search for his missing pregnant wife. At the
urging of police, Frey made a media statement on January
24, 2003, and so their affair became public knowledge. The
telephone calls between Frey and Peterson continued after
this time, and these too were taped by Modesto police. In
response to the public outcry about Peterson’s relationship
with Frey, Peterson agreed to conduct four televised media
interviews from January 27 to 29, 2003. Peterson was later
found to have lied on at least one occasion during these
interviews.
The bodies of Laci and Conner were discovered on the
shores of San Francisco Bay on March 12, 2003. On April
18, 2003, Scott Peterson was arrested by police for the
murders of his wife and unborn child and charged with
double homicide. The case went to trial in June, 2004, with
Peterson pleading not guilty to the charges. Transcripts of
the four media interviews referred to above, in addition to
audio presentations of the taped telephone conversations
between Frey and Peterson, formed part of the prosecution’s
case against Peterson and were admitted as evidence at trial.
Five months later the jury found him guilty of first-degree murder of his wife and second-degree murder of his unborn son.
Data Analysis Transcriptions of The Frey Tapes and
corresponding audio recordings were admitted as evidence
at trial and were accessed through electronic material
available on the public record at http://pwc-sii.com/CourtDocs/Pexhibits.htm. Prior to analysis of the
speech data, each of the transcripts was carefully compared
to the original audio to ensure they were a complete and
verbatim record of the interviews.
The next step was to identify the portions of telephone
conversation that could be verified as being truth or lie, a
methodology that is congruent with the design employed by
Vrij and Mann (2001), Mann, Vrij and Bull (2002) and
Davis, Markus, Walters, Vorus and Connors (2005). This
required strong familiarisation with the Trial Record and case information available on the public record. Each transcript was read line by line to isolate any utterances that could be strongly supported, by evidence presented at trial or by another reputable source (such as a police media release), as either truthful or deceptive. Deception may be defined as a deliberate attempt
to manufacture, hide or manipulate information, in order to
create a belief in others that the communicator knows to be
false (Masip, Garrido & Herrero, 2004). In keeping with this
definition, deceptive utterances were identified as those
samples of speech where information was manufactured,
hidden or manipulated. Fragments of speech that could not
be verified were discarded from further analysis (e.g., all of
Scott Peterson’s personal opinions were eliminated from the
data set).
An example of some speech from the deception
condition: “Okay if you can hear me I’ll be in Paris
tomorrow. I’m taking a flight from here in the country in
Normandy right now so I’ll call you tomorrow.” An
example of some speech from the truth condition: “Um well
I’ll just I’ll just tell you. Ah you haven’t been watching the
news obviously. Um I have not been traveling during the
last couple weeks. I have I have lied to you that I’ve been
traveling.”
Of the remaining data, the number of words in the Lie
and in the Truth conditions was counted as a measure of
sample size. Data were analysed using the log likelihood
ratio (LR) test (see Rayson & Garside, 2000). LR is less
likely to overestimate significance than traditional statistical
tests such as z-ratios that rely upon assumptions of a normal
distribution. Similarly, where rare words are observed in
frequency profiles, LR is less likely to overestimate the
significance of such an event. Of particular relevance here,
it has the added benefit of being suitable for comparison of
relatively small texts and texts of differing lengths
(Dunning, 1993; Rayson, Berridge & Francis, 2004). LR
refers to the logarithm of the ratio between the likelihood
that the truthful and deceptive speech inputs from the
participant have the same linguistic profile and the
likelihood that the linguistic profiles differ from each other.
The sign preceding the log likelihood ratio (LR) shows the
direction of the relationship, with ‘+’ indicating a higher
frequency in the truthful condition and ‘-’ indicating a
higher frequency in the deceptive condition.
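For concreteness, the following is a minimal sketch of the log-likelihood calculation described by Rayson and Garside (2000) for a single word in two corpora of differing sizes. The function name is ours, and the example counts are reconstructed only approximately from the percentages reported in Table 2; the sketch is not the analysis software used by the authors.

```python
import math

def log_likelihood(freq_a, total_a, freq_b, total_b):
    """Log-likelihood (G2) statistic for one word's frequency in two corpora
    (Rayson & Garside, 2000), signed '+' when the word is relatively more
    frequent in the first (truthful) corpus."""
    # Expected counts under the null hypothesis that both corpora share the
    # same underlying rate for the word.
    expected_a = total_a * (freq_a + freq_b) / (total_a + total_b)
    expected_b = total_b * (freq_a + freq_b) / (total_a + total_b)
    ll = 0.0
    if freq_a > 0:
        ll += freq_a * math.log(freq_a / expected_a)
    if freq_b > 0:
        ll += freq_b * math.log(freq_b / expected_b)
    ll *= 2
    sign = 1 if freq_a / total_a >= freq_b / total_b else -1
    return sign * ll

# Approximate counts implied by Table 2: ~40 'um's in 1,077 truthful words
# versus ~1 'um' in 883 deceptive words.
print(round(log_likelihood(40, 1077, 1, 883), 2))  # ~ +40.1, close to the reported +40.09
```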
Results
There were 883 words in the deception condition and 1,077
words in the truth condition. The frequency of ‘um’, as a percentage of the total number of words in each condition, is provided in Table 2.
Table 2: Linguistic behaviour as a function of veracity

              Truth     Lies      LR
‘um’ (%)      3.71      0.12      +40.09
The LR was statistically significant, p < .0001.
General Discussion
In a first for research on deception, we investigated the use
of ‘um’ in both low-stakes laboratory-elicited lies and high-
stakes real-life lies. The combination of these methods
provides powerful converging evidence. Results from Study 1 (low-stakes laboratory-elicited lies) indicated that instances of ‘um’ were significantly more frequent during truth-telling; their usage appeared to be restricted during lying. Results from Study 2 confirmed this pattern in high-stakes real-life lies.
We put forward two possible explanations for these
findings. It may be that utterances such as ‘um’ are more
accurately conceptualised as conventional English words
rather than filled pauses/hesitations or speech
disfluencies/errors (see Clark & Fox Tree, 2002; Fox Tree,
2006). Indeed, research unrelated to deception behaviours
provides converging evidence for the special status of
utterances such as ‘um’ which have been found to have
different distribution patterns to other types of disfluencies
such as repetitions and false starts. Bortfeld et al. (2001)
found that these utterances “may be a resource for or a
consequence of interpersonal coordination” (p. 123). As
such, these utterances are an important part of authentic,
natural speech (that is presumably somewhat lacking during
lying). Accordingly, even if the use of utterances such as ‘um’ is not under strategic control, we would expect usage to be lessened during lying (compared to truth-telling). The second possibility is that the use of utterances
such as ‘um’ is under direct control and that participants
reduce their usage during lying in an effort to mask
deception. In line with this view, speakers may remove what they see as markers of uncertainty (utterances such as ‘um’) when they lie (e.g., Akehurst et al., 1996; Vrij & Semin, 1996).
The outcome of each of these scenarios is the same –
fewer utterances such as ‘um’ during lying. Importantly,
while it seems possible that the number of instances of ‘um’
(i.e., frequency of use) may be under strategic control it
seems unlikely that other acoustic characteristics of these
utterances, such as duration and amplitude, could be controlled as easily. However, this
remains an open empirical question to be investigated in
future studies.
A question that is often raised in research on linguistic
cues to deception is whether rehearsal affects lying. So-
called ‘fillers’ are thought to be used less often in rehearsed
speech. It might be speculated that Peterson was able to rehearse his lies; but did participants’ familiarity with arguments concerning current social issues likewise result in ‘rehearsed speech’ in Study 1? Over time, people might
become increasingly aware of both sides of the argument
concerning particular social issues; however, we imagine
that if there is any significant rehearsal involved, this would
relate to one side of an argument more than the other (most
likely, the side that the participant believes in, their ‘truth’).
Thus, we might have expected to see fewer so-called fillers in the truthful condition of Study 1, as this condition is more likely to reflect speech that participants have personally rehearsed a number of times. Our results showed the opposite pattern (fewer fillers during lying).
Of all the potential linguistic cues to investigate in deceptive speech, frequency of ‘um’ may offer two advantages in English-speaking forensic contexts. First, when ‘um’s are viewed as legitimate lexical items, they lend themselves to automation (just like any other word, they can be identified and counted using basic part-of-speech tagging systems) and, second, they may be somewhat independent of the content of the communication. For
example, Newman et al. (2003) found that a number of
linguistic markers of deception identified in accounts about
abortion were more predictive within the topic than across
topics (e.g., first person pronouns, exclusive words, motion
verbs and negative emotion words) – suggesting a
relationship between subject matter and language behaviour.
By contrast, ‘um’s are more likely to be individual stylistic
markers (Shriberg, 2001) that are attached to the person
rather than the context and hence it is their relative use in
truth-telling versus deception that may provide clues to
veracity. Such context-independence is valuable in real-world settings where the speech of the speaker cannot always be constrained. Of course, the accompanying downside of speaker-dependent cues to deception, particularly in automated systems, is the need to establish baseline measures of the target variable before any departures from that baseline can be noted and interpreted.
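As an illustration of the kind of automation and baselining discussed above (a hypothetical helper, not an existing forensic system; the um_rate function is repeated from the Study 1 sketch for completeness), a speaker’s ‘um’ use in a sample of interest could be expressed relative to a known-truthful baseline:

```python
def um_rate(transcript: str) -> float:
    """Percentage of tokens that are 'um' (as in the Study 1 sketch)."""
    tokens = [tok.strip(".,?!").lower() for tok in transcript.split()]
    return 100.0 * sum(tok == "um" for tok in tokens) / len(tokens)

def relative_um_use(baseline_transcripts, sample_transcript):
    """Ratio of the 'um' rate in a sample of interest to the speaker's baseline
    rate estimated from known-truthful speech; values well below 1.0 would be
    consistent with the suppression of 'um' observed in these studies."""
    baseline = sum(um_rate(t) for t in baseline_transcripts) / len(baseline_transcripts)
    if baseline == 0:
        return None  # the speaker never says 'um'; the cue is uninformative for them
    return um_rate(sample_transcript) / baseline
```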
Avenues for future research include investigation of
utterances such as ‘um’ in participants who are ‘practiced
liars’ (e.g., one might compare poker players and non-poker
players using the laboratory-elicited methods described
here). It would also be interesting to experimentally
manipulate cognitive load using laboratory-elicited
methods. As suggested by Vrij, Fisher, Mann and Leal (2006), participants could be asked to engage in a secondary
(unrelated) cognitive task while being interviewed (i.e.,
while they are telling the truth and, also, while they are
lying) to more precisely examine the effects of cognitive
load on lying.
References
Akehurst, L., Köhnken, G., Vrij, A., & Bull, R. (1996). Lay
persons' and police officers' beliefs regarding deceptive
behavior. Applied Cognitive Psychology, 10, 461-473.
Bok, S. (1978). Lying: Moral choices in public and private
life. New York: Vintage Books.
Bond, C. F., Jr., & DePaulo, B. M. (2006). Accuracy of
deception judgments. Personality & Social Psychology
Review, 10, 214-234.
Bond, G. D., & Lee, A. Y. (2005). Language of lies in
prison: Linguistic classification of prisoners' truthful and
deceptive natural language. Applied Cognitive
Psychology, 19, 313-329.
Bortfeld, H., Leon, S., Bloom, J., Schober, M., & Brennan,
S. (2001). Disfluency rates in conversation: Effects of
age, relationship, topic, role, and gender. Language and
Speech, 44, 123-147.
Chomsky, N. (1965). Aspects of the theory of syntax.
Cambridge, MA: MIT Press.
Clark, H. H., & Fox Tree, J. E. (2002). Using uh and um in
spontaneous speaking. Cognition, 84, 73-111.
Clark, H. H. & Wasow, T. (1998). Repeating words in
spontaneous speech. Cognitive Psychology, 37, 201-242.
Davis, M., Markus, K. A., Walters, S. B., Vorus, N., &
Connors, B. (2005). Behavioral cues to deception vs.
topic incriminating potential in criminal confessions. Law
& Human Behavior, 29, 683-704.
DePaulo, B. M., Kashy, D. A., Kirkendol, S. E., Wyer, M.
M., & Epstein, J. A. (1996). Lying in everyday life.
Journal of Personality and Social Psychology, 70, 979-
995.
DePaulo, B. M., Lindsay, J. J., Malone, B. E., Muhlenbruck,
L., Charlton, K., & Cooper, H. (2003). Cues to deception.
Psychological Bulletin, 129, 74-118.
deVilliers, J. G., & deVilliers, P. A. (1978). Language
acquisition. Cambridge, MA: Harvard University Press.
Dunning, T. (1993). Accurate methods for the statistics of
surprise and coincidence. Computational Linguistics, 19,
61-74.
Fox Tree, J. (2006). Placing like in telling stories.
Discourse Studies, 8, 723-743.
Frank, M. G., & Ekman, P. (2004). Appearing truthful
generalizes across different deception situations. Journal
of Personality and Social Psychology, 86, 486-495.
Goldman-Eisler, F. (1968). Psycholinguistics: Experiments in spontaneous speech. New York: Academic Press.
Hosman, L., & Wright, J. (1987). The effects of hedges and
hesitations on impression formation in a simulated
courtroom context. Western Journal of Speech
Communication, 51, 173-188.
Maclay, H., & Osgood, C. (1959). Hesitation phenomena in
spontaneous English speech. Word, 15, 19-44.
Mann, S., Vrij, A., & Bull, R. (2002). Suspects, lies, and
videotape: An analysis of authentic high-stake liars. Law
& Human Behavior, 26, 365-376.
Mann, S., Vrij, A., & Bull, R. (2004). Detecting true lies:
Police officers’ ability to detect deceit. Journal of Applied
Psychology, 89, 137-149.
Masip, J., Garrido, E., & Herrero, C. (2004). Defining
deception. Anales de Psicologia, 20, 147-171.
Meservy, T. O., Jensen, M. L., Kruse, J., Burgoon, J. K., &
Nunamaker, J. F. (2005). Automated extraction of
deceptive behavioral cues from video. In P. Kantor (Ed.),
Intelligence and Security Informatics (pp. 198-208).
Berlin: Springer.
Newman, M. L., Pennebaker, J. W., Berry, D. S., &
Richards, J. N. (2003). Lying words: Predicting deception
from linguistic styles. Personality and Social Psychology
Bulletin, 29, 665–675.
Rassin, E., & Van Der Heijden, S. (2005). Appearing
credible? Swearing helps! Psychology, Crime & Law, 11,
177-182.
Rayson, P., Berridge, D., & Francis, B. (2004, March).
Extending the Cochran rule for the comparison of word
frequencies between corpora. Paper presented at the 7th
International Conference on Statistical Analysis of
Textual Data, Louvain-la-neuve, Belgium.
Rayson, P., & Garside, R. (2000). Comparing corpora using
frequency profiling. In A. Kilgariff & T. Sardinha (Eds.),
Proceedings of the workshop on comparing corpora held
in conjunction with the 38th annual meeting of the
Association for Computational Linguistics (pp.1-6). Hong
Kong: ACL.
Rodriguez, L., & Torres, M. (2006). Spontaneous speech
events in two speech databases of human-computer and
human-human dialogs in Spanish. Language and Speech,
49, 333-366.
Shriberg, E. (2001). To ‘errrr’ is human: Ecology and
acoustics of speech disfluencies. Journal of the
International Phonetic Association, 31, 153-169.
Sodian, B., & Frith, U. (1992). Deception and sabotage in
autistic, retarded and normal children. Journal of Child
Psychology and Psychiatry, 33, 591-606.
Spence, S. A., Hunter, M. D., Farrow, T. F., Green, R. D.,
Leung, D. H., Hughes, C. J., et al. (2004). A cognitive
neurobiological account of deception: Evidence from
functional neuroimaging. Philosophical Transactions of
the Royal Society of London B, 359, 1755-1762.
Sporer, S. L., & Schwandt, B. (2006). Paraverbal indicators
of deception: A meta-analytic synthesis. Applied
Cognitive Psychology, 20, 421-446.
Vrij, A., Akehurst, L., Brown, L., & Mann, S. (2006).
Detecting lies in young children, adolescents and adults.
Applied Cognitive Psychology, 20, 1225-1237.
Vrij, A., Edward, K., Roberts, K.P., & Bull, R. (2000).
Detecting deceit via analysis of verbal and nonverbal
behavior. Journal of Nonverbal Behaviour, 24, 239-264.
Vrij, A., Fisher, R., Mann, S., & Leal, S. (2006). Detecting
deception by manipulating cognitive load. Trends in
Cognitive Sciences, 10(4), 141-142.
Vrij, A., & Mann, S. (2001). Telling and detecting lies in a
high-stake situation: The case of a convicted murderer.
Applied Cognitive Psychology, 15, 187-203.
Vrij, A., & Mann, S. (2004). Detecting deception: The
benefit of looking at a combination of behavioral,
auditory and speech content related cues in a systematic
manner. Group Decision & Negotiation, 13, 61-79.
Vrij, A., & Semin, G. (1996). Lie experts’ beliefs about
nonverbal indicators of deception. Journal of Nonverbal
Behaviour, 20, 65-80.
Vrij, A., & Winkel, F. (1991). Cultural patterns in Dutch
and Surinam nonverbal behaviour: an analysis of
simulated police/citizen encounters. Journal of Nonverbal
Behaviour, 14, 169-184.
Zhou, L., Burgoon, J. K., Nunamaker, J. F., & Twitchell, D.
(2004). Automating linguistics-based cues for detecting deception in text-based asynchronous computer-mediated communications. Group Decision &
Negotiation, 13, 81-106.