Cues to Deception
Bella M. DePaulo, University of Virginia
James J. Lindsay, University of Missouri—Columbia
Brian E. Malone, University of Virginia
Laura Muhlenbruck, Kelly Charlton, and Harris Cooper, University of Missouri—Columbia
Do people behave differently when they are lying compared with when they are telling the truth? The
combined results of 1,338 estimates of 158 cues to deception are reported. Results show that in some
ways, liars are less forthcoming than truth tellers, and they tell less compelling tales. They also make a
more negative impression and are more tense. Their stories include fewer ordinary imperfections and
unusual contents. However, many behaviors showed no discernible links, or only weak links, to deceit.
Cues to deception were more pronounced when people were motivated to succeed, especially when the
motivations were identity relevant rather than monetary or material. Cues to deception were also stronger
when lies were about transgressions.

Psychological Bulletin, 2003, Vol. 129, No. 1, 74–118. Copyright 2003 by
the American Psychological Association, Inc. 0033-2909/03/$12.00
DOI: 10.1037/0033-2909.129.1.74

Bella M. DePaulo and Brian E. Malone, Department of Psychology,
University of Virginia; James J. Lindsay, Laura Muhlenbruck, Kelly
Charlton, and Harris Cooper, Department of Psychology, University of
Missouri—Columbia.
Bella M. DePaulo is now a visiting professor at the Department of
Psychology, University of California, Santa Barbara. James J. Lindsay is
now at the Department of Psychology, University of Minnesota, Twin
Cities Campus. Kelly Charlton is now at the Department of Psychology,
University of North Carolina at Pembroke.
This article was reviewed and accepted under the editorial term of
Nancy Eisenberg. We thank Barry Schlenker for providing many insightful
suggestions and Charlie Bond, Tim Levine, and Aldert Vrij for answering
countless questions about their data and ideas. Many others also responded
to our inquiries about theoretical or empirical issues, including Jack
Brigham, Mark deTurck, Paul Ekman, Tom Feeley, Klaus Fiedler, Howard
Friedman, Mark Frank, Dacher Keltner, Randy Koper, Bob Krauss, Steve
McCornack, Maureen O'Sullivan, Ron Riggio, Jonathon Schooler, and
June Tangney.
Correspondence concerning this article should be addressed to Bella M.
DePaulo, P.O. Box 487, Summerland, California 93067. E-mail:
depaulo@psych.ucsb.edu
Do people behave in discernibly different ways when they are
lying compared with when they are telling the truth? Practitioners
and laypersons have been interested in this question for centuries
(Trovillo, 1939). The scientific search for behavioral cues to
deception is also longstanding and has become especially vigorous
in the past few decades.
In 1981, Zuckerman, DePaulo, and Rosenthal published the first
comprehensive meta-analysis of cues to deception. Their search
for all reports of the degree to which verbal and nonverbal cues
occurred differentially during deceptive communications com-
pared with truthful ones produced 159 estimates of 19 behavioral
cues to deception. These estimates were from 36 independent
samples. Several subsequent reviews updated the Zuckerman et al.
(1981) meta-analysis (B. M. DePaulo, Stone, & Lassiter, 1985a;
Zuckerman, DePaulo, & Rosenthal, 1986; Zuckerman & Driver,
1985), but the number of additional estimates was small. Other
reviews have been more comprehensive but not quantitative (see
Vrij, 2000, for the most recent of these). In the present review, we
summarize quantitatively the results of more than 1,300 estimates
of 158 cues to deception. These estimates are from 120 indepen-
dent samples.
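To give a concrete sense of how such estimates are typically combined (a minimal sketch of the standard weighted effect-size approach discussed in general treatments such as Cooper, 1998, and not necessarily the exact computational procedure of the present review), each cue in each sample can be expressed as a standardized mean difference between deceptive and truthful messages and then pooled across independent samples with inverse-variance weights:

d_i = \frac{\bar{X}_i^{\text{lie}} - \bar{X}_i^{\text{truth}}}{s_i^{\text{pooled}}}, \qquad \bar{d} = \frac{\sum_{i=1}^{k} w_i\, d_i}{\sum_{i=1}^{k} w_i}, \qquad w_i = \frac{1}{v_i},

where k is the number of independent samples contributing estimates of a given cue and v_i is the sampling variance of d_i. A positive \bar{d} indicates that liars showed more of the behavior in question than truth tellers did.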
We define deception as a deliberate attempt to mislead others.
Falsehoods communicated by people who are mistaken or self-
deceived are not lies, but literal truths designed to mislead are lies.
Although some scholars draw a distinction between deceiving and
lying (e.g., Bok, 1978), we use the terms interchangeably. As
Zuckerman et al. (1981) did in their review, we limit our analysis
to behaviors that can be discerned by human perceivers without the
aid of any special equipment. We also limit our review to studies
of adults, as the dynamics of deceiving may be markedly different
in children (e.g., Feldman, Devin-Sheehan, & Allen, 1978; Lewis,
Stanger, & Sullivan, 1989; Shennum & Bugental, 1982).
Predicting Cues to Deception: Previous Approaches
Ekman and Friesen (1969)
In 1969, Ekman and Friesen published the first influential the-
oretical statement about cues to deception. They described two
broad categories of cues, leakage cues and deception cues. Leak-
age cues reveal what the liars are trying to hide—typically, how
they really feel. Anticipating the self-presentational perspective
that would become important later, Ekman and Friesen (1969)
noted that the operation of display rules (i.e., culturally and so-
cially determined norms for managing facial expressions of emo-
tions) can result in leakage cues. For example, when deceivers try
to squelch the facial expression of an emotion they are trying to
conceal, the resulting expression—a micro affect display—may be
briefer than it is ordinarily, but the nature of the affect may still be
identifiable. If instead the facial expression is so brief that the
emotion cannot be discerned, then the resulting micro affect dis-
play functions as a deception cue. Deception cues indicate that
deception may be occurring, without indicating the nature of the
information that is being concealed. Almost all of the cues that
have been reported in the literature are deception cues.
Ekman and Friesen (1969) described various conditions under
which liars would be especially likely to succeed in their deception
attempts (e.g., perhaps by evidencing fewer or less obvious cues).
Their formulation was based on the psychology of both the liars
and the targets of lies as they relate to each other. For example,
they predicted that success is more likely when the salience of
deception is asymmetrical, such that the liars are focused on getting
away with their lies while the issue of deception is not salient to
the targets, or when the liars are focusing primarily on deceiving
while the targets are simultaneously trying both to deceive and to
detect deceit.
Zuckerman et al. (1981)
Zuckerman et al. (1981) began their formulation with the widely
accepted premise that no one behavior or set of behaviors would
ever be found that always occurs when people are lying and never
occurs any other time. Instead, they argued, the search should be
for the kinds of thoughts, feelings, or psychological processes that
are likely to occur more or less often when people are lying
compared with when they are telling the truth and for the behav-
ioral cues that may be indicative of those states. They then delin-
eated four factors that could be used to predict cues to deception:
generalized arousal, the specific affects experienced during decep-
tion, cognitive aspects of deception, and attempts to control be-
havior so as to maintain the deception.
Arousal
Citing the research and theory available at the time on the
psychophysiological detection of deception, Zuckerman et al.
(1981) proposed that liars may experience greater undifferentiated
arousal than truth tellers. That arousal could be evidenced by liars'
greater pupil dilation, increased blinking, more frequent speech
disturbances, and higher pitch. However, Zuckerman et al. (1981)
also acknowledged that autonomic responses that seem character-
istic of deception may be explained by the specific affects expe-
rienced while lying without invoking the notion of diffuse arousal.
Feelings While Lying
To the extent that liars experience guilt about lying or fear of
getting caught lying, behaviors indicative of guilt and fear are
shown more often by liars than truth tellers. Zuckerman et al.
(1981) suggested that liars might fidget more than truth tellers, and
they may also sound more unpleasant. They also suggested that
guilt and anxiety could become apparent in liars' distancing of
themselves from their deceptive communications. Drawing from
Wiener and Mehrabian's (1968; see also Mehrabian, 1972) ac-
count of the verbal and nonverbal cues indicative of distancing
(which they called nonimmediacy), Zuckerman et al. (1981) pre-
dicted that liars would communicate in more evasive and indirect
ways than truth tellers and that they would maintain less eye
contact with their interaction partners.
Cognitive Aspects of Deception
Zuckerman et al. (1981) conceptualized lying as a more cogni-
tively complex task than telling the truth. Liars, they claimed, need
to formulate communications that are internally consistent and
consistent with what others already know. The greater cognitive
challenges involved in lying (relative to truth telling) were pre-
dicted to result in longer response latencies, more speech hesita-
tions, greater pupil dilation, and fewer illustrators (hand move-
ments that accompany and illustrate speech).
Attempted Control of Verbal and Nonverbal Behaviors
Liars' attempts to control their behaviors so as to maintain their
deception can paradoxically result in cues that instead betray it.
For example, liars' behaviors may seem less spontaneous than
truth tellers'. Also, liars' inability to control all aspects of their
behavior equally effectively could result in verbal and nonverbal
discrepancies.
Ekman (1985/1992)
Ekman (1985/1992) described two major categories of cues,
thinking cues and feeling cues. Liars who prepare their deceptions
inadequately or who cannot keep their stories straight produce
inconsistencies that betray their deceits. Those who overprepare
produce stories that seem rehearsed. If liars need to think carefully
about their lies as they tell them, they may speak more slowly than
truth tellers. These are all thinking cues.
Ekman's (1985/1992) more important contribution, however,
was his conceptualization of the role of emotions in deceiving. By
understanding the emotions that liars are experiencing, Ekman
argued, it is possible to predict behaviors that distinguish liars from
truth tellers. For example, the cues indicative of detection appre-
hension are fear cues. These include higher pitch, faster and louder
speech, pauses, speech errors, and indirect speech. The greater the
liars' detection apprehension, the more evident these fear cues
should be. For example, liars should appear more fearful as the
stakes become higher and the anticipated probability of success
becomes lower.
Similarly, liars who feel guiltier about their lies, such as those
who are lying to people who trust them, should show more
behavioral indicators of guilt. Ekman (1985/1992) noted that guilt
cues have not been clearly determined, but they could include cues
to sadness such as lower pitch, softer and slower speech, and
downward gazing.
Liars' feelings about lying are not necessarily negative ones.
Ekman (1985/1992) suggested that liars sometimes experience
"duping delight," which could include excitement about the chal-
lenge of lying or pride in succeeding at the lie. This delight could
become evident in cues to excitement such as higher pitch, faster
and louder speech, and more use of illustrators. The duping delight
hypothesis has not yet been tested.
Ekman (1985/1992) pointed out that emotions become signifi-
cant not only when liars feel apprehensive, guilty, or excited about
their lies but also when liars are experiencing emotions that they
are trying to hide or when they are faking emotions that they are
not really experiencing. The particular cues that signal lying de-
pend on the particular emotions that the liars are experiencing and
simulating. For example, people who are only pretending to be
enjoying a film would show fewer genuine enjoyment smiles and
more feigned smiles than people who really are enjoying a film.
These differences in smiling would not be predicted if the feelings
that people really were experiencing or just pretending to experi-
ence were, for example, feelings of pain instead of enjoyment.
From this perspective, cues to emotions that liars are trying to hide
or to simulate cannot be combined across all studies in the liter-
ature. Instead, the relevant subset of studies must be selected (e.g.,
only those in which liars are hiding or simulating enjoyment). This
is also a perspective that eschews the notion of undifferentiated
arousal and instead argues for the study of specific emotions
(Ekman, Levenson, & Friesen, 1983; Levenson, Ekman, &
Friesen, 1990).
Buller and Burgoon (1996)
From a communications perspective, Buller and Burgoon (1996)
argued that to predict the behavior of deceivers, it is important to
consider not just individual psychological variables such as moti-
vations and emotions but also interpersonal communicative pro-
cesses. Reiterating Ekman and Friesen's (1969) point about the
importance of multiple roles, Buller and Burgoon noted that when
people are trying to deceive, they are engaged in several tasks
simultaneously. They are attempting to convey their deceptive
message, and at the same time, they are continually monitoring the
target of their deception for signs of suspiciousness and then
adapting their behavior accordingly. Although these multiple de-
mands can prove challenging at first, compromising effectiveness
at maintaining credibility, these difficulties "should typically dis-
sipate over time as participants acquire more feedback, attempt
further repairs, and gain greater control over their performance"
(Buller & Burgoon, 1996, p. 220). They therefore predicted that
deceivers in interactive contexts should display "increasing imme-
diacy and involvement, pleasantness, composure, fluency, and
smooth turn taking over the course of the interaction" (Buller &
Burgoon, 1996, p. 220). They also noted that patterns of behavior
vary with factors such as the deceivers' expectations, goals, mo-
tivations, and relationship with the targets and with the targets'
degree of suspiciousness, so that there would be no one profile of
deceptive behaviors.
One of the moderator variables for which Buller and Burgoon
(1996) made predictions is deceivers' motivations. A number of
taxonomies of motivations for deceiving have been proposed (e.g.,
Camden, Motley, & Wilson, 1984; B. M. DePaulo, Kashy, Kirk-
endol, Wyer, & Epstein, 1996; Hample, 1980; Lippard, 1988;
Metts, 1989; Turner, Edgley, & Olmstead, 1975), and some are
quite complex. For example, Metts (1989) described four catego-
ries of motives (partner focused, teller focused, relationship fo-
cused, and issue focused) and 15 subcategories. Buller and Bur-
goon considered three motivations: instrumental, relational (e.g.,
avoiding relationship problems), and identity (e.g., protecting the
liar's image). They predicted that liars would experience more
detection apprehension when motivated by self-interest than by
relational or identity goals. As a result, instrumentally motivated
liars exhibit more nonstrategic behaviors (unintentional behaviors
that Buller & Burgoon, 1996, have described as arousal cues).
Those liars were also predicted by Buller and Burgoon to engage
in more strategic behaviors, which are behaviors used in the
pursuit of high level plans.
The Present Approach to Predicting Cues to Deception:
A Self-Presentational Perspective
In 1992, B. M. DePaulo described a self-presentational perspec-
tive for understanding nonverbal communication. Her formulation
was not specific to the communication of deception. In this sec-
tion, we further articulate her perspective, incorporating subse-
quent research and theory and specifying the implications of a
self-presentational perspective for the prediction of cues to decep-
tion. We begin with a review of the incidence and nature of lying
in everyday life and a comparison of the lies people typically tell
in their lives with the lies studied in the research literature on
deception.
Lies in Social Life
Lying is a fact of everyday life. Studies in which people kept
daily diaries of all of their lies suggest that people tell an average
of one or two lies a day (B. M. DePaulo & Kashy, 1998; B. M.
DePaulo, Kashy, et al., 1996; Kashy & DePaulo, 1996; see also
Camden et al., 1984; Feldman, Forrest, & Happ, 2002; Hample,
1980; Lippard, 1988; Metts, 1989; Turner et al., 1975). People lie
most frequently about their feelings, their preferences, and their
attitudes and opinions. Less often, they lie about their actions,
plans, and whereabouts. Lies about achievements and failures are
also commonplace.
Occasionally, people tell lies in pursuit of material gain, per-
sonal convenience, or escape from punishment. Much more com-
monly, however, the rewards that liars seek are psychological
ones. They lie to make themselves appear more sophisticated or
more virtuous than they think their true characteristics warrant.
They lie to protect themselves, and sometimes others, from disap-
proval and disagreements and from getting their feelings hurt. The
realm of lying, then, is one in which identities are claimed and
impressions are managed. It is not a world apart from nondeceptive
discourse. Truth tellers edit their self-presentations, too, often in
pursuit of the same kinds of goals, but in ways that stay within
boundaries of honesty. The presentations of liars are designed to
mislead.
There are only a few studies in which people have been asked
how they feel about the lies they tell in their everyday lives (B. M.
DePaulo & Kashy, 1998; B. M. DePaulo, Kashy, et al., 1996;
Kashy & DePaulo, 1996). The results suggest that people regard
their everyday lies as little lies of little consequence or regret. They
do not spend much time planning them or worrying about the
possibility of getting caught. Still, everyday lies do leave a
smudge. Although people reported feeling only low levels of
distress about their lies, they did feel a bit more uncomfortable
while telling their lies, and directly afterwards, than they had felt
just before lying. Also, people described the social interactions in
which lies were told as more superficial and less pleasant than the
interactions in which no lies were told.
Interspersed among these unremarkable lies, in much smaller
numbers, are lies that people regard as serious. Most of these lies
are told to hide transgressions, which can range from misdeeds
such as cheating on tests to deep betrayals of intimacy and trust,
76 DEPAULO ET AL.
such as affairs (B. M. DePaulo, Ansfield, Kirkendol, & Boden,
2002; see also Jones & Burdette, 1993; McCornack & Levine,
1990; Metts, 1994). These lies, especially if discovered, can have
serious implications for the liars' identities and reputations.
Lies in Studies of Cues to Deception
In the literature on cues to deception, as in everyday life, lies
about personal feelings, facts, and attitudes are the most common-
place. Participants in studies of deception might lie about their
opinions on social issues, for example, or about their academic
interests or musical preferences. Sometimes emotions are elicited
with video clips, and participants try to hide their feelings or
simulate entirely different ones. The literature also includes lies
about transgressions, as in studies in which participants are in-
duced to cheat on a task and then lie about it. There are a few
studies (Hall, 1986; Horvath, 1973; Horvath, Jayne, & Buckley,
1994) of lies about especially serious matters, such as those told by
suspects in criminal investigations, and one study (Koper & Sahl-
man, 2001) of the truthful and deceptive communications of peo-
ple whose lies were aired in the media (e.g., Richard Nixon, Pete
Rose, Susan Smith).
Self-Presentation in Truthful and
Deceptive Communications
The prevalence of self-presentational themes in the kinds of lies
that people most often tell and in their reasons for telling them
suggests the potential power of the self-presentational perspective
for predicting cues to deception. Following Schlenker (1982, 2002;
Schlenker & Pontari, 2000), we take a broad view of self-
presentation as people's attempts to control the impressions that
are formed of them. In self-presenting, people are "behaving in
ways that convey certain roles and personal qualities to others"
(Pontari & Schlenker, 2000, p. 1092). From this perspective, all
deceptive communications involve self-presentation; so do all
truthful communications.
Fundamental to the self-presentational perspective is the as-
sumption, based on our understanding of the nature of lying in
everyday life, that cues to deception ordinarily are quite weak.
There are, however, conditions under which cues are more appar-
ent. As we explain, such moderators of the strength of deception
cues can be predicted from the self-presentational processes in-
volved in communicating truthfully and deceptively.
The Deception Discrepancy
Lies vary markedly in the goals they serve and in the kinds of
self-presentations enacted to achieve those goals. Yet this vast
diversity of lies is united by a single identity claim: the claim of
honesty. From the friend who feigns amusement in response to the
joke that actually caused hurt feelings to the suspect who claims to
have been practicing putts on the night of the murder, liars succeed
in their lies only if they seem to be sincere.¹ However, this claim
to honesty does not distinguish liars from truth tellers either. Truth
tellers fail in their social interaction goals just as readily as liars if
they seem dishonest. The important difference between the truth
teller's claim to honesty and the liar's is that the liar's claim is
illegitimate. From this discrepancy between what liars claim and
what they believe to be true, we can predict likely cues to deceit.

¹ We could have described our theoretical formulation as impression
management rather than self-presentation. Impression management
includes attempts to control the impressions that are formed of others, as
well as impressions formed of oneself (e.g., Schlenker, 2002). We chose
self-presentation because of the central role in our formulation of the
impression of sincerity conveyed by the actor. Even when people are lying
about the characteristics of another person, the effectiveness of those lies
depends on their own success at appearing sincere.
Implications of the Deception Discrepancy
Two implications of the deception discrepancy are most impor-
tant: First, deceptive self-presentations are often not as convinc-
ingly embraced as truthful ones. Second, social actors typically
experience a greater sense of deliberateness when their perfor-
mances are deceptive than when they are honest. These predictions
are the starting point for our theoretical analyses. There are also
qualifications to the predictions, and we describe those as well.
Deceptive Self-Presentations Are Not as Fully Embraced
as Truthful Ones
The most significant implication of the deception discrepancy is
that social actors typically are unwilling, or unable, to embrace
their false claims as convincingly as they embrace their truthful
ones (cf. Mehrabian, 1972; Wiener & Mehrabian, 1968). Several
factors undermine the conviction with which liars make their own
cases. First, liars, in knowingly making false claims, may suffer
moral qualms that do not plague truth tellers. These qualms may
account for the faint feelings of discomfort described by the tellers
of everyday lies (B. M. DePaulo, Kashy, et al., 1996). Second,
even in the absence of any moral misgivings, liars may not have
the same personal investment in their claims as do truth tellers.
When social actors truthfully describe important aspects of them-
selves, their emotional investment in their claims may be readily
apparent (B. M. DePaulo, Epstein, & LeMay, 1990). Furthermore,
those self-relevant claims are backed by an accumulation of
knowledge, experience, and wisdom that most liars can only
imagine (Markus, 1977). Liars may offer fewer details, not only
because they have less familiarity with the domain they are de-
scribing, but also to allow for fewer opportunities to be disproved
(Vrij, 2000).
In sum, compared with truth tellers, many liars do not have the
moral high ground, the emotional investment, or the evidentiary
basis for staking their claims. As a result, liars relate their tales in
a less compelling manner, and they appear less forthcoming, less
pleasant, and more tense.
Deceptive Self-Presenters Are Likely to Experience a
Greater Sense of Deliberateness Than Truthful Ones
Cues to deliberateness. When attempting to convey impres-
sions they know to be false, social actors are likely to experience
a sense of deliberateness. When instead people are behaving in
ways they see as consistent with their attitudes, beliefs, emotions,
and self-images, they typically have the sense of just "acting
naturally." They are presenting certain roles and personal qualities
to others, and they expect to be seen as truthful, but they do not
ordinarily experience this as requiring any special effort or atten-
tion. Our claim is not that people acting honestly never experience
a sense of deliberateness. Sometimes they do, as for example,
when the thoughts or feelings they are trying to communicate are
difficult to express or when the stakes for a compelling perfor-
mance are high; however, the focus of their deliberateness is
typically limited to the content of their performance and not its
credibility. Liars usually make an effort to seem credible; truth
tellers more often take their credibility for granted (B. M. DePaulo,
LeMay, & Epstein, 1991).²

² Certain deceptive exchanges are so often practiced that they, too,
unfold in a way that feels effortless (e.g., looking at the baby picture
proffered by the proud parents and exclaiming that the bald wrinkled blob
is just adorable). Lies told in these instances may be guided by what Bargh
(1989) described as goal-dependent automaticity. Although they may not
feel like deliberate lies, the critical intent to mislead is clearly present. The
flatterer would feel mortified if the parents realized he or she thought the
baby was hideous. It is in part because the sense of deliberateness is
critical to people's sense of having lied that these exchanges are so often
unrecognized as lies.
Deliberate attempts to manage impressions, including impres-
sions of credibility, are attempts at self-regulation, and self-
regulation consumes mental resources (Baumeister, 1998). Social
actors who are performing deceptively may experience greater
self-regulatory busyness than those who are performing honestly.
Even when the attempted performance is the same (e.g., conveying
enthusiasm), the self-regulatory demands may be greater for the
liar. Enthusiasm flows effortlessly from those who truly are expe-
riencing enthusiasm, but fakers have to marshal theirs. Liars can be
preoccupied with the task of reminding themselves to act the part
that truth tellers are not just role-playing but living.
Other thoughts and feelings could also burden liars more than
truth tellers (Ekman, 1985/1992). These include thoughts about
whether the performance is succeeding, feelings about this (e.g.,
anxiety), and feelings about the fabricated performance (e.g., guilt)
or about discreditable past acts that the liar is trying to hide.
To the extent that liars are more preoccupied with these intru-
sive mental contents than are truth tellers, their performance could
suffer. For example, they could seem less involved and engaged in
the interaction, and any attempts at cordiality could seem strained.
People busy with self-regulatory tasks, compared with those who
are not so busy, sometimes process concurrent information less
deeply (Gilbert & Krull, 1988; Gilbert, Krull, & Pelham, 1988;
Richards & Gross, 1999) and perform less well at subsequent
self-regulatory tasks (Baumeister, Bratslavsky, Muraven, & Tice,
1998; Muraven, Tice, & Baumeister, 1998). One potential impli-
cation of this regulatory depletion may be that liars fail to notice
some of the ways in which the targets of their lies are reacting (cf.
Butterworth, 1978). (This is contrary to Buller & Burgoon's, 1996,
assumption that liars monitor targets closely for feedback.) An-
other implication is that liars' busyness could compromise their
attempts to generate detailed responses of their own.
One likely response to the offending thoughts and feelings liars
experience is to try to control them. For example, liars can try not
to think about their blemished past or the insincerity of their
ongoing performance. However, attempts at thought suppression
can backfire, resulting in even greater preoccupation with those
thoughts (Wegner, 1994). Attempts to regulate emotional experi-
ences can also augment rather than dissipate the targeted feelings
(e.g., Wegner, Erber, & Zanakos, 1993) and increase physiological
activation (Gross, 1998; Gross & Levenson, 1993; Richards &
Gross, 1999).
The primary target of liars' efforts at self-regulation, though, is
probably not their thoughts and feelings but their overt behaviors.
In theory, liars could adopt the goal of trying to appear honest and
sincere, which in some instances could involve trying to behave in
the generally positive and friendly way that they believe to be
more characteristic of truth tellers than of liars (Malone, DePaulo,
Adams, & Cooper, 2002). Especially confident and skilled liars
may do just that, and succeed (cf. Roney, Higgins, & Shah, 1995).
However, it may be more commonplace for people who are mis-
leading others to adopt the defensive goal of trying not to get
caught (e.g., Bell & DePaulo, 1996; B. M. DePaulo & Bell, 1996).
Liars pursuing this strategy may try to avoid behaving in the ways
that they think liars behave. One risk to this strategy is that some
of their beliefs about how liars behave may be wrong. For exam-
ple, social perceivers typically believe that liars cannot stay still;
they expect them to fidget, shift their posture, and shake their legs
(Malone et al., 2002; Vrij, 2000). In trying to avoid these move-
ments (either directly or as a result of the higher level goal of
trying not to give anything away), liars may appear to be holding
back. A sense of involvement and positive engagement would be
lacking.
Deliberate attempts by liars at controlling expressive behaviors,
such as attempts to control thoughts and feelings, can be the seeds
of their own destruction (e.g., B. M. DePaulo, 1992; B. M. De-
Paulo & Friedman, 1998). One route to failure is to try to regulate
expressive behaviors, such as tone of voice, that may not be so
amenable to willful control (e.g., Scherer, 1986). It is possible, for
example, that people's attempts not to sound anxious would result
in an even higher pitched and anxious sounding tone of voice than
would have resulted if they had not deliberately tried to quiet the
sounds of their insecurity. Another path to self-betrayal is to direct
efforts at expressive control at the wrong level (Vallacher &
Wegner, 1987; Vallacher, Wegner, McMahan, Cotter, & Larsen,
1992). For example, social actors who ordinarily convey convinc-
ing impressions of sincerity and friendliness may instead seem
phony if they deliberately try to smile and nod. In focusing on
specific behaviors, they may be unwittingly breaking apart the
components of the well-practiced and established routine of acting
friendly (e.g., Kimble & Perlmuter, 1970). The process may be
akin to what happens to experienced typists who try to focus on the
location of each of the characters on the keyboard instead of typing
in their usual un-self-conscious way. Finally, if some behaviors are
more controllable than others, or if liars only try to control some
behaviors and not others, discrepancies could develop.
In sum, we predicted that to the extent that liars (more than truth
tellers) deliberately try to control their expressive behaviors,
thoughts, and feelings, their performances would be compromised.
They would seem less forthcoming, less convincing, less pleasant,
and more tense.
Moderators of the strength of cues to deliberateness. As the
motivation to tell a successful lie increases, liars may redouble
their deliberate efforts at self-regulation, resulting in an even more
debilitated performance (B. M. DePaulo & Kirkendol, 1989; B. M.
DePaulo, Kirkendol, Tang, & O'Brien, 1988; B. M. DePaulo,
Stone, & Lassiter, 1985b; see also Ben-Shakhar & Elaad, in press).
We tested this proposed moderator of cues to deception by com-
paring the results of studies in which inducements were offered for
success at deceit with studies in which no special attempts were
made to motivate the participants.
As we have noted all along, identity-relevant concerns are
fundamental to deceptive and nondeceptive communications. They
appear even in the absence of any special motivational induction.
Such concerns can, however, be exacerbated by incentives that are
linked specifically to people's identities and images. In the liter-
ature we are reviewing, identity-relevant motivators include ones
in which skill at deception was described as indicative of people's
competence or of their prospects for success at their chosen ca-
reers. Other identity-relevant motivators raised the self-
presentational stakes by informing participants that their perfor-
mances would be evaluated or scrutinized. Compared with other
kinds of incentives such as money or material rewards, identity-
relevant incentives are more likely to exacerbate public self-
awareness, increase rumination, and undermine self-confidence.
All of these factors can further disrupt performance (e.g., Baumeis-
ter, 1998; Carver & Scheier, 1981; B. M. DePaulo et al., 1991;
Wicklund, 1982; Wine, 1971; see also Gibbons, 1990). Conse-
quently, tellers of identity-relevant lies seem especially less forth-
coming, less pleasant, and more tense. They also tell tales that
seem less compelling.
In sum, our predictions were that cues to deception would be
stronger and more numerous among people who have been moti-
vated to succeed in their self-presentations than for those who have
not been given any special incentive. This predicted impairment
would be even more evident when incentives are identity relevant
than when they are not.
Qualifications. There are two important qualifications to our
discussion of the effects of deliberate attempts at self-regulation.
One is that an increase in self-regulatory demands does not always
result in a decrement in performance. When attempts at self-
regulation shift the focus of attention away from negative self-
relevant thoughts (Pontari & Schlenker, 2000) or from the indi-
vidual components of the task (Lewis & Linder, 1997),
performance can improve.
The second is that the self-regulatory demands of lying do not
always exceed those of telling the truth. For example, honest but
insecure actors may be more preoccupied with thoughts of failure
than deceptive but cocky ones. In addition, for most any social
actor, the telling of truths that impugn the truth teller's character or
cause pain or harm to others may pose far greater self-regulatory
challenges than the telling of lies about the same topics.
Finally, it is important, as always, to bear in mind the nature of
the lies that people tell in their everyday lives. Most are little lies
that are so often practiced and told with such equanimity that the
self-regulatory demands may be nearly indistinguishable from the
demands of telling the truth. Therefore, we expected the conse-
quences of deliberate self-regulation that we have described to be
generally weak and that stronger effects of attempted control
would be evident in studies in which participants were motivated
to get away with their lies, particularly if the motivations were
identity relevant.
The Formulation of Deceptive and
Nondeceptive Presentations
The self-regulatory demands we have just described are those
involved in executing the deceptive and nondeceptive perfor-
mances. Earlier descriptions of deceptive communications focused
primarily on the processes involved in formulating lies. We con-
sider those next. As we elaborate below, we reject the argument
that lies are necessarily more difficult to construct than truths. Still,
we predicted that lies would generally be shorter and less detailed
than truths. In doing so, we drew from the literatures on the use of
scripts as guides to storytelling, the differences between accounts
of events that have or have not been personally experienced, and
lay misconceptions about the nature of truthful communications.
Cues to the Formulation of Lies
Previous formulations have typically maintained that it is more
difficult to lie than to tell the truth because telling lies involves the
construction of new and never-experienced tales whereas telling
the truth is a simple matter of telling it like it is (e.g., Buller &
Burgoon, 1996; Miller & Stiff, 1993; Zuckerman et al., 1981; but
see McCornack, 1997, for an important exception). We disagree
with both assumptions: that lies always need to be assembled and
that truths can simply be removed from the box. When the truth is
hard to tell (e.g., when it would hurt the other person's feelings),
then a careful piecing together of just the right parts in just the
right way would be in order. But even totally mundane and
nonthreatening truths can be conveyed in a nearly infinite variety
of shapes and sizes. For example, in response to the question "How
was your day?" on a day when nothing special happened, the
answer could be "Fine," a listing of the main events (but what
counts as a main event?), or a description of a part of the day. Even
in the latter instance, there is no one self-evident truth. As much
work on impression management has indicated (e.g., Schlenker,
1980, 1985), presentations are edited differently depending on
identity-relevant cues, such as the teller's relationship with the
other person and the interaction goals. Yet all of this editing can
occur within the bounds of truthfulness.
Truths, then, are not often prepackaged. But lies can be. A
teenage girl who had permission to spend the night at a girlfriend's
home but instead went camping with a boyfriend may have no
difficulty spinning a tale to tell to her parents the next morning. For
example, she can easily access a script for what spending the night
at a girlfriend's home typically involves. Or, she could relate her
best friend's favorite story about an evening at the home of a
girlfriend. Lies based on scripts or familiar stories are unlikely to
be marked by the signs of mental effort (described below) that may
characterize lies that are fabricated. The teller of scripts and of
familiar stories may also be less likely to get tangled in contradic-
tions than the liar who makes up a new story.
Even prepackaged lies, however, may be shorter and less de-
tailed than truthful answers. Liars working from scripts may have
only the basics of the scripted event in mind (e.g., Smith, 1998),
and liars who have borrowed their stories have at hand only those
details they were told (and of those, only the ones they remember).
All lies, whether scripted, borrowed, or assembled anew, could
be shorter and less detailed than truthful accounts for another
reason: The truthful accounts are based on events that were actu-
ally experienced, whereas the lies are not. The literature on reality
monitoring (e.g., Johnson & Raye, 1981) suggests ways in which
memories of past experiences or perceptions (i.e., memories based
on external sources) differ from memories of experiences that were
imagined (i.e., memories based on internal sources). This perspec-
tive can be applied to the prediction of cues to deception only by
extrapolation, because reality monitoring describes processes of
remembering whereas deception describes processes of relating
(Vrij, 2000). In relating a story, even a truthful one, people often
fill in gaps and in other ways create a more coherent tale than their
memories actually support. Nonetheless, deceptive accounts may
differ from truthful ones in ways that weakly parallel the ways in
which memories of imagined experiences differ from memories of
externally based experiences. If so, then truthful accounts would
be clearer, more vivid, and more realistic than deceptive ones, and
they would include more sensory information and contextual cues.
Deceptive accounts, in contrast, should be more likely to include
references to cognitive processes such as thoughts and inferences
made at the time of the event.
The conventional wisdom that lies are more difficult to formu-
late than truths is most likely to be supported when liars make up
new stories. Lies that are fabricated mostly from scratch are likely
to be shorter and more internally inconsistent than truths and to be
preceded by longer latencies. Signs of mental effort may also be
evident. These could include increases in pauses and speech dis-
turbances (Berger, Karol, & Jordan, 1989; Butterworth &
Goldman-Eisler, 1979; Christenfeld, 1994; Goldman-Eisler, 1968;
Mahl, 1987; Schachter, Christenfeld, Ravina, & Bilous, 1991;
Siegman, 1987), more pupil dilation (E. H. Hess & Polt, 1963;
Kahneman, 1973; Kahneman & Beatty, 1967; Kahneman, Tursky,
Shapiro, & Crider, 1969; Stanners, Coulter, Sweet, & Murphy,
1979; Stern & Dunham, 1990), decreased blinking (Bagley &
Manelis, 1979; Holland & Tarlow, 1972, 1975; Wallbott &
Scherer, 1991), and decreased eye contact (Fehr & Exline, 1987).
People who are preoccupied with the formulation of a complex lie
may appear to be less involved and expressive, as well as less
forthcoming.
Unfortunately, in the literature we are reviewing, liars were
almost never asked how they came up with their lies, and truth
tellers were not asked how they decided which version of the truth
to relate (e.g., a short version or a long one). In the only study we
know of in which liars were asked about the origins of their lies
(Malone, Adams, Anderson, Ansfield, & DePaulo, 1997), the most
common answer was not any we have considered so far. More than
half the time, liars said that they based their lies on experiences
from their own lives, altering critical details. With this strategy,
liars may be just as adept as truth tellers at accessing a wealth of
details, including clear and vivid sensory details.
Still, even the most informed and advantaged liars may make
mistakes if they share common misconceptions of what truthful
accounts really are like (Vrij, Edward, & Bull, 2001). For example,
if liars believe that credible accounts are highly structured and
coherent, with few digressions or inessential details, their accounts
may be smoother and more pat than those of truth tellers. The
embedding of a story in its spatial and temporal context and the
relating of the specifics of the conversation may provide a richness
to the accounts of truth tellers that liars do not even think to
simulate. Liars may also fail to appreciate that memory is fallible
and reporting skills are imperfect even when people are telling the
truth and that truth tellers who are not concerned about their
credibility may not be defensive about admitting their uncertain-
ties. Consequently, truth tellers may express self-doubts, claim
they do not remember things, or spontaneously correct something
they already said, whereas liars would scrupulously avoid such
admissions of imperfection. The stories told by liars, then, would
be too good to be true.
Liars can also fail if they know less than their targets do about
the topic of the deceit. The babysitter who claims to have taken the
kids to the zoo and relates how excited they were to see the lion
would be undone by the parent who knows that there are no lions
at that zoo. The man suspected of being a pedophile who points to
his service as leader of his church's youth group may believe he is
painting a picture of a pillar of the community, whereas instead he
has unwittingly described just the sort of volunteer work that is a
favorite of known pedophiles (Steller & Kohnken, 1989; Un-
deutsch, 1989; Yuille & Cutshall, 1989).³

³ Statement Validity (or Reality) Analysis was developed initially by
Undeutsch (1989) to assess the credibility of child witnesses in cases of
alleged sexual abuse. The overall assessment includes an evaluation of the
characteristics and possible motives of the child witness. It also includes a
set of 19 criteria to be applied to transcripts of statements made by the
witness (Steller & Kohnken, 1989). This analysis of witness statements,
called Criteria-Based Content Analysis (CBCA), was subsequently applied
to the analysis of statements made by adults in other kinds of criminal
proceedings and in experimental research (e.g., Yuille & Cutshall, 1989).
All of the characteristics discussed in this section of our review, from the
excessive structure and coherence of accounts to the typical characteristics
of criminals or crimes related by people who do not realize their
significance, are drawn from CBCA, though some of the interpretations
are our own. The use of CBCA to analyze statements made by adults is
controversial (e.g., Vrij, 2000).
Moderators of Cues to the Formulation of Lies
Factors that alter the cognitive load for liars are candidates for
moderators of cognitive cues to deception. We consider two such
moderators in this review: the opportunity to plan a presentation
and the duration of that presentation.
Liars who have an opportunity to plan their difficult lies, relative
to those who must formulate their lies on the spot, may be able to
generate more compelling presentations (e.g., H. D. O'Hair, Cody,
& McLaughlin, 1981; Vrij, 2000). Because they can do some of
their thinking in advance, their response latencies could be shorter
and their answers longer. However, mistakes that follow from
misconceptions about the nature of truthful responses would not be
averted by planning and may even be exacerbated.
We think that, in theory, cues to deception could occur even for
the simplest lies. For example, when just a "yes" or "no" answer
is required, a lie could be betrayed by a longer response latency in
instances in which the truth comes to mind more readily and must
be set aside and replaced by the lie (Walczyk, Roper, & Seeman,
in press). However, we believe that the cognitive burdens gener-
ally would be greater when a short answer would not suffice and
that cues to deception would therefore become clearer and more
numerous as the duration of the response increases. For example,
lies may be especially briefer than truths when people are expected
to tell a story rather than to respond with just a few words. Also,
liars who are experiencing affects and emotions that they are trying
to hide may be more likely to show those feelings when they need
to sustain their lies longer (cf. Ekman, 1985/1992).
The Role of Identity-Relevant Emotions in Deceptive and
Nondeceptive Presentations
People experience the unpleasant emotional state of guilt when
they have done something wrong or believe that others may think
that they have (Baumeister, Stillwell, & Heatherton, 1994). Even
more aversive is the feeling of shame that occurs when people fail
to meet their own personal moral standards (Keltner & Buswell,
1996; Tangney, Miller, Flicker, & Barlow, 1996; see also Scheff,
2001). Some lies, especially serious ones, are motivated by a desire
to cover up a personal failing or a discreditable thought, feeling, or
deed (e.g., B. M. DePaulo, Ansfield, et al., 2002). Yet those who
tell the truth about their transgressions or failings may feel even
greater guilt and shame than those whose shortcomings remain
hidden by their lies. If the behavior of truthful transgressors was
compared with that of deceptive transgressors, cues to these self-
conscious emotions would be more in evidence for the truth tellers,
if they distinguished them from the liars at all. In most studies,
however (including all of the studies of transgressions included in
this review), liars who had transgressed were compared with truth
tellers who had not. For those comparisons, then, we expected to
find that liars, compared with truth tellers, showed more shame
and guilt cues.
There is no documented facial expression that is specific to
guilt; therefore, we expected to find only more general cues to
negativity and distress (Keltner & Buswell, 1996; Keltner &
Harker, 1998). Shame, however, does seem to have a characteristic
demeanor that includes gaze aversion, a closed posture, and a
tendency to withdraw (Keltner & Harker, 1998).
Lies about transgressions, though, are the exceptions, both in
everyday life and in the studies in this review. The more common-
place lies cover truths that are not especially discrediting. For
example, people may not feel that it is wrong to have an opinion
that differs from someone else's or to hide their envy of a cowork-
er's success. In most instances, then, we did not, on the basis of the
hidden information alone, expect to find more guilt cues in liars
than in truth tellers.
By definition, though, there is a sense in which all liars are
candidates for experiencing guilt and shame, as they all have done
something that could be considered wrong: They have intention-
ally misled someone. Truth tellers have not. It is important to note,
however, that liars do not always feel bad about their lies, and
truth tellers do not always feel good about their honesty. In fact,
liars often claim that in telling their lies, they have spared their
targets from the greater distress that would have resulted had they
told the truth (B. M. DePaulo, Kashy, et al., 1996).
Guilt and shame are not the only emotions that have been
hypothesized to betray liars. Fear of being detected has also been
described as responsible for cues to deception (e.g., Ekman, 1985/
1992). We believed fear of detection would also vary importantly
with factors such as the nature of the behavior that is covered by
the lie. Liars would fear detection when hiding behaviors such as
transgressions, which often elicit punishment or disapproval. But
the more typical liars, those who claim that their movie preferences
match those of their dates or who conceal their pride in their own
work, would have little to fear from the discovery of that hidden
information.
People may fear detection not only because of the nature of the
behavior they are hiding but also because of the implications of
being perceived as dishonest (Schlenker, Pontari, & Christopher,
2001). The blemishes in perceived and self-perceived integrity that
could result from a discovered deception depend on factors such as
the justifiability of the deceit and are often quite minimal. But even
utterly trivial lies told in the spirit of kindness, such as false
reassurances about new and unbecoming hairstyles, have identity
implications if discovered. For instance, the purveyors of such
kind lies may be less often trusted when honest feedback really is
desired.
Across all of the lies in our data set, we expected to find weak
cues to anxiety and negativity. For example, liars may look and
sound more anxious than truth tellers (Slivken & Buss, 1984) and
speak less fluently (Kasl & Mahl, 1965; Mahl, 1987) and in a
higher pitch (Kappas, Hess, & Scherer, 1991; Scherer, 1986). They
may also blink more (Harrigan & O'Connell, 1996), and their
pupils may be more dilated (Scott, Wells, Wood, & Morgan, 1967;
Simpson & Molloy, 1971; Stanners et al., 1979). Relative to truth
tellers, liars may also make more negative statements and com-
plaints, sound less pleasant, and look less friendly and less attrac-
tive. In a moderator analysis comparing lies about transgressions
with other kinds of lies, we expected to find more pronounced
distress cues in the lies about transgressions.
Convergent Perspectives on the Strength
of Cues to Deceit
Our self-presentational perspective has led us to reject the view
that lie telling is typically a complicated, stressful, guilt-inducing
process that produces clear and strong cues. Instead, we believe
that most deceptive presentations are so routinely and competently
executed that they leave only faint behavioral residues. Fiedler and
Walka (1993) offered a similar point of view. They argued that
ordinary people are so practiced, so proficient, and so emotionally
unfazed by the telling of untruths that they can be regarded as
professional liars. Therefore, they also expected to find mostly
only weak links between verbal and nonverbal behaviors and the
telling of lies. Bond, Kahler, and Paolicelli (1985), arguing from
an evolutionary perspective, drew a similar conclusion. Any bla-
tantly obvious cues to deceit, they contended, would have been
recognized by human perceivers long ago; evolution favors more
flexible deceivers.
Methodological Implications of the
Self-Presentational Perspective
Our self-presentational perspective suggests that social actors
try to convey particular impressions of themselves, both when
lying and when telling the truth, and that social perceivers rou-
tinely form impressions of others. We have conceptualized the
ways in which lies could differ from truths in terms of the different
impressions that deceptive self-presentations could convey. For
example, we hypothesized that liars would seem more distant than
truth tellers. One way to assess differences in distancing is to code
the many behaviors believed to be indicative of nonimmediacy,
such as the use of the passive rather than the active voice, the use
of negations rather than assertions, and looking away rather than
maintaining eye contact. This approach, which is the usual one, has
the advantage that the behaviors of interest are clearly defined and
objectively measured. However, for many of the kinds of impres-
sions that social actors attempt to convey, the full range of behav-
iors that contribute to the impression may be unknown. For ex-
ample, Wiener and Mehrabian (1968; Mehrabian, 1972) have
described a precise set of behaviors that they believed to be
indicative of verbal and nonverbal immediacy and have reported
some supportive data. However, others who have discussed im-
mediacy and related constructs have included other cues (e.g.,
Brown & Levinson, 1987; Fleming, 1994; Fleming & Rudman,
1993; Holtgraves, 1986; Searle, 1975). This raises the possibility
that social perceivers, who can often form valid impressions even
from rather thin slices of social behavior (e.g., Ambady &
Rosenthal, 1992), can discriminate truths from lies by their sub-
jective impressions of the constructs of interest (e.g., distancing)
just as well as, if not better than, objective coding systems can (cf.
B. M. DePaulo, 1994; Malone & DePaulo, 2001). To test this
possibility, we used objective and subjective measurement as
levels of a moderator variable in analyses of cues for which
multiple independent estimates of both levels were available.
Summary of Predictions
Predicted Cues
The self-presentational perspective predicts five categories of
cues to deception. First, liars are predicted to be less forthcoming
than truth tellers. The model predicts they will respond less, and in
less detail, and they will seem to be holding back. For example,
liars' response latencies would be longer (an indication of cogni-
tive complexity in the Zuckerman et al., 1981, model) and their
speech would be slower (a thinking cue in Ekman's, 1985/1992,
formulation). Second, the tales told by liars are predicted to be less
compelling than those told by truth tellers. Specifically, liars
would seem to make less sense than truth tellers (e.g., there would
be more discrepancies in their accounts), and they would seem less
engaging, less immediate, more uncertain, less fluent, and less
active than truth tellers. Zuckerman et al. (1981) predicted that
discrepancies would occur as a result of attempted control, and
Ekman (1985/1992) regarded them as a thinking cue. Less imme-
diacy (more distancing) was described as a possible cue to detec-
tion apprehension and guilt by Ekman (1985/1992) and Zucker-
man et al. (1981), and it was regarded as a strategic behavior by
Buller and Burgoon (1996).
The self-presentational perspective also predicts that liars will
be less positive and pleasant than truth tellers, as is also suggested
by the description of cues to guilt and apprehensiveness put forth
by Ekman (1985/1992) and Zuckerman et al. (1981). The fourth
prediction of the self-presentational perspective is that liars will be
more tense than truth tellers. Some cues to tension, such as higher
pitch, have sometimes been conceptualized as indicative of undif-
ferentiated arousal (e.g., Zuckerman et al., 1981). Finally, the
self-presentational perspective alone predicts that liars will include
fewer ordinary imperfections and unusual contents in their stories
than will truth tellers.
Predicted Moderators
A number of perspectives, including the self-presentational one,
maintain that cues to deception, when combined across all lies,
will be weak. However, several factors are predicted to moderate
the strength of the cues. From a self-presentational point of view,
cues to negativity and tension should be stronger when lies are
about transgressions than when they are not. The self-presentation
formulation also maintains that cues will be clearer and more
numerous when told under conditions of high motivation to suc-
ceed, especially when the motivation is identity relevant. Buller
and Burgoon (1996), in contrast, predicted stronger cues when the
liars' motives are instrumental. They also predicted more pleas-
antness, immediacy, composure, and fluency with increasing
interactivity.
The self-presentation model predicts that for social actors who
have an opportunity to plan their performances, compared with
those who do not, response latency will be a less telling cue to
deception. Also, as the duration of a response increases, cues to
deception will be more in evidence. Finally, the model predicts
that cues assessed by subjective impressions will more powerfully
discriminate truths from lies than the same cues assessed
objectively.
A predicted moderator of cues to deception can be tested only if
the moderator variable can be reliably coded from the information
that is reported and if multiple estimates of the relevant cues are
available for each of the levels of the moderator. Some of the
predictions generated by the perspectives we have reviewed could
not be tested, and that obstacle limited our ability to evaluate each
of the perspectives comprehensively. The self-presentational per-
spective, for example, points to the potential importance of a
number of moderators we could not test, such as the communica-
tor's confidence and focus of attention and the emotional impli-
cations of the truths or lies for the targets of those messages. The
self-presentational perspective, as well as the formulations of
Ekman (1985/1992) and Buller and Burgoon (1996), all suggest
that the liar's relationship with the target may be another important
moderator of cues to deception (see also Anderson, DePaulo, &
Ansfield, 2002; Levine & McCornack, 1992; Stiff, Kim, &
Ramesh, 1992). However, the number of studies in which the liars
and targets were not strangers was too small to test this moderator.
Method
Literature Search Procedures
We used literature search procedures recommended by Cooper (1998) to
retrieve relevant studies. First, we conducted computer-based searches of
Psychological Abstracts (PsycLIT) and Dissertation Abstracts Interna-
tional through September of 1995 using the key words deception, deceit,
lie, and detection, and combinations of those words. Second, we examined
the reference lists from previous reviews (B. M. DePaulo et al., 1985a;
Zuckerman et al., 1981; Zuckerman & Driver, 1985). Third, we reviewed
the reference lists from more than 300 articles on the communication of
deception from Bella M. DePaulo's personal files and the reference lists
from any new articles added as a result of the computer search. Fourth, we
sent letters requesting relevant papers to 62 scholars who had published on
the communication of deception. We also asked those scholars to continue
to send us their papers in the coming years. We repeated our computer
search in October of 1999. No other reports were added after that date.
Criteria for Inclusion and Exclusion of Studies
We included reports in which behavior while lying was compared with
behavior while telling the truth. Behaviors that were measured objectively,
as well as those based on others' impressions (e.g., impressions that the
social actors seemed nervous or evasive), were all included. Physiological
indices with no discernible behavioral manifestation (e.g., galvanic skin
response, heart rate) were not included, nor were senders' (i.e., social
actors') reports of their own behaviors. We excluded reports that were not
in English and reports in which the senders were not adults (i.e., under 17
years old). We included data from adult senders in reports of children and
adults if we could compute effect sizes separately for the subset of the data
in which both the senders and the judges were adults. We excluded reports
in which senders role-played an imagined person in an imagined situation
because we were concerned that the imaginary aspects of these paradigms
could sever the connection between social actors and their self-
presentations that is important to our theoretical analysis.
There were several reports from which we could not extract useful data.
For example, Yerkes and Berry (1909) reported one experiment based on
just one sender and another based on two. Studies comparing different
kinds of lies without also comparing them with truths (e.g., di Battista &
Abrahams, 1995) were not included. Studies describing individual differ-
ences in cues to deception that did not also report overall differences
between truths and lies (e.g., Siegman & Reynolds, 1983) were also
excluded. A series of reports based on the same independent sample (e.g.,
Buller, Burgoon, Buslig, & Roiger, 1996, Study 2) were excluded as well.
(For a detailed explanation, see B. M. DePaulo, Ansfield, & Bell, 1996).
Determining Independent Samples
Our final data set consisted of 120 independent samples from 116 reports
(see Table 1). Of those 120 samples in our review, only 32 were included
in the Zuckerman et al. (1981) review.4
Most often, the behaviors of a particular sample of senders were de-
scribed in just one report. For example, Bond et al. (1985) coded 11
different cues from 34 different senders. The behaviors of those 34 senders
were not described in any other report. Therefore, we considered the
sample of senders from that study to be an independent sample. Sometimes
senders were divided into different subgroups (e.g., men and women,
Jordanians and Americans, senders who planned their messages and dif-
ferent senders who did not), and cues to deception were reported separately
for each of those subgroups. In those instances, we considered each of the
subgroups to be an independent sample. For example, Bond, Omar, Mah-
moud, and Bonser (1990) coded 10 different cues separately for the 60
Jordanian senders and the 60 American senders. Therefore, the Jordanian
senders were one independent sample and the Americans were another.
In 11 instances, data from the same senders were published in different
reports. For example, Hadjistavropoulos and Craig (1994) coded 11 cues
from 90 senders, and Hadjistavropoulos, Craig, Hadjistavropoulos, and
Poole (1996) coded two cues from the same 90 senders. Therefore, the
samples described in those two reports were not independent. In Table 1,
they have the same letter code in the column labeled Ind. sample code.
Most samples listed in Table 1 have no letter code in that column; all of
those samples are independent samples.
All estimates of a particular cue were included in the analyses of that
cue. We used independent sample codes, not to exclude data, but to
estimate degrees of freedom properly and to weight estimates appropri-
ately. As we explain in more detail below, multiple estimates of the same
cue that came from the same independent sample were averaged before
being entered into the analyses.
Cue Definitions
Within the sample of studies, 158 different behaviors or impressions,
which we call cues to deception, were reported. These are defined in
Appendix A. We categorized most of the 158 cues into the five categories
that followed from our theoretical analysis. To determine whether liars are
less forthcoming than truth tellers, we looked at cues indicative of the
amount of their responding (e.g., response length), the level of detail and
complexity of their responses, and the degree to which they seemed to be
holding back (e.g., pressing lips; Keltner, Young, Heerey, Oemig, &
Monarch, 1998). To explore whether liars tell less compelling tales than
truth tellers, we examined cues indicating whether the presentations
seemed to make sense (e.g., plausibility), whether they were engaging (e.g.,
involving), and whether they were immediate (e.g., eye contact) instead of
distancing. Self-presentations that fell short on characteristics such as
certainty, fluency, or animation may also seem less compelling, so we
included those cues, too. In the third category, we included cues indicating
whether liars are less positive and pleasant than truth tellers, and in the
fourth, we collected behaviors indicating whether liars are more tense than
truth tellers. Finally, in the last category, we determined whether deceptive
self-presentations included fewer ordinary imperfections and unusual con-
tents than truthful ones by examining cues such as spontaneous corrections
and descriptions of superfluous details.
For clarity, we assigned a number, from 1 to 158, to each cue. Cue
numbers are shown along with the cue names and definitions in Appendix
A. The last column of Table 1 lists all of the cues reported in each study
and the number of estimates of each.
Variables Coded From Each Report
From each report, we coded characteristics of the senders, characteristics
of the truths and lies, publication statistics, and methodological aspects of
the studies (see Table 2). In the category of sender characteristics we coded
the population sampled (e.g., students, suspects in crimes, patients in pain
clinics, people from the community), the senders' country, and the rela-
tionship between the sender and the interviewer or target of the commu-
nications (e.g., strangers, acquaintances, friends). We also coded senders'
race or ethnicity and their precise ages, but this information was rarely
reported and therefore could not be analyzed.
To test our predictions about the links between senders' motivations and
cues to deception, we determined whether senders had identity-relevant
incentives, instrumental incentives, both kinds of incentives, or no special
incentives. Coded as identity-relevant were studies in which senders'
success was described as indicative of their competence at their chosen
profession or reflective of their intelligence or other valued characteristics.
Also included were studies in which senders expected to be evaluated or
scrutinized. Studies in which senders were motivated by money or material
rewards were coded as primarily instrumental. Studies in which both
incentives were offered to senders were classified separately.
The characteristics of the messages that we coded included their duration
and whether senders had an opportunity to prepare. If senders had an
opportunity to prepare some but not all of their messages, but behavioral
differences were not reported separately, we classified the study as having
some prepared and some unprepared messages. In other studies, the mes-
sages were scripted. For example, senders may have been instructed to give
a particular response in order to hold verbal cues constant so that investi-
gators could assess nonverbal characteristics of truths and lies more
precisely.
We also coded the experimental paradigm used to elicit the truths and
lies or the context in which they occurred. In some studies, senders lied or
told the truth about their beliefs or opinions or about personal facts. In
others, senders looked at videotapes, films, slides, or pictures and described
4 There were three unpublished reports (describing results from four
independent samples) in the Zuckerman et al. (1981) review that we were
unable to retrieve for this review.
Table 1
Summary of Studies Included in the Meta-Analysis
Report | N | No. of effect sizes | No. of cues | Ind. sample code (a) | Mot (b) | Trans (c) | Msg | Int (d) | Cues (e)
Alonso-Quecuty (1992) P
Unplanned messages 11 5 5 0 0 001, 005, 037, 076, 085
Planned messages 11 5 5 0 0 001, 005, 037, 076, 085
Anolli & Ciceri (1997) 31 36 12 0 0 L 1 001 (8), 004 (2), 010 (6), 032 (2), 039 (2),
063 (2), 094 (2), 097 (2), 110 (2), 112
(2), 113 (2), 140 (4)
Berrien & Huntington (1943) 32 1 1 2 1 1 155
Bond et al. (1985) 34 11 11 2 0 1 003, 022, 027, 035, 038, 044, 045, 046,
052, 058, 068
Bond et al. (1990)
Jordanians 60 10 10 0 0 L 1 001, 027, 037, 038, 045, 046, 052, 058,
066, 068
Americans 60 10 10 0 0 L 1 001, 027, 037, 038, 045, 046, 052, 058,
066, 068
Bradley & Janisse (1979/1980) 60 1 1 0 0 L 1 065
Bradley & Janisse (1981) 192 1 1 2 1 1 065
Buller & Aune (1987) 130 17 15 0 0 1 016, 018, 026, 027, 028, 044 (2), 053, 054
(2), 055, 064, 067, 068, 069, 105, 119
Buller et al. (1996) 120 4 4 A 0 0 L 1 021, 022, 023, 101
Buller et al. (1989) 148 18 16 0 0 L 1 001 (2), 009, 017, 018, 027 (2), 034, 037,
040, 044, 055, 058, 067, 068, 069, 111,
119
Burgoon & Buller (1994) 120 4 4 A 0 0 1 026, 053, 054, 061
Burgoon, Buller, Afifi, et al. (1996) 61 8 5 0 0 1 001, 015 (4), 064, 104, 106
Burgoon, Buller, Floyd, & Grandpre
(1996)
Interactants 18 11 8 0 0 1 004, 015 (2), 025 (2), 026, 031, 049, 061,
115 (2)
Observers 10 11 8 0 0 1 004, 015 (2), 025 (2), 026, 031, 049, 061,
115 (2)
Burgoon, Buller, Guerrero, et al. (1996) 40 4 2 0 0 1 004, 025 (3)
Burns & Kintz (1976) 20 2 1 0 1 1 027 (2)
Chiba (1985) 16 4 2 0 0 L 1 033 (2), 066 (2)
Christensen (1980) 12 6 3 0 0 1 016 (2), 049 (2), 061 (2)
Ciofu (1974) 16 1 1 2 0 1 063
Cody et al. (1989) 66 85 17 B 2 0 P 1 001 (5), 004 (5), 009 (5), 010 (5), 018 (5),
021 (5), 022 (5), 027 (5), 038 (5), 039
(5), 041 (5), 046 (5), 055 (5), 058 (5),
066 (5), 070 (5), 119 (5)
Cody et al. (1984) 42 54 8 0 0 P 1 001 (6), 004 (27), 007 (3), 009 (3), 010
(3), 035 (3), 039 (6), 041 (3)
Cody & O'Hair (1983)
Men 36 8 4 C1 0 0 1 009 (2), 018 (2), 048 (2), 069 (2)
Women 36 8 4 C2 0 0 1 009 (2), 018 (2), 048 (2), 069 (2)
Craig et al. (1991) 120 28 13 0 0 L 1 033 (2), 056 (2), 057 (2), 059 (2), 060 (2),
066 (2), 129 (4), 130 (2), 131 (2), 132
(2), 133 (2), 146 (2), 148 (2)
Cutrow et al. (1972) 63 3 3 3 0 1 009, 066, 144
B. M. DePaulo et al. (1992) 32 2 2 1 0 0 015, 051
B. M. DePaulo et al. (1990) 96 3 3 D 1 0 1 001, 004, 016
B. M. DePaulo, Jordan, et al. (1982) 8 1 1 0 0 L 0 014
B. M. DePaulo et al. (1983) 32 2 2 1 0 1 061, 091
B. M. DePaulo et al. (1991) 96 1 1 D 1 0 1 012
B. M. DePaulo & Rosenthal (1979a) 40 1 1 E 0 0 L 0 014
B. M. DePaulo, Rosenthal, Green, &
Rosenkrantz (1982) 40 4 3 E 0 0 L 0 014 (2), 061, 090
B. M. DePaulo, Rosenthal,
Rosenkrantz, & Green (1982) 40 16 11 E 0 0 L 0 006 (2), 010 (2), 022, 023, 024 (2), 035,
038, 052 (3), 096, 136, 137
P. J. DePaulo & DePaulo (1989) 14 16 15 2 0 0 001, 004, 010, 014 (2), 021, 034, 035,
039, 044, 049, 052, 055, 066, 070, 091
deTurck & Miller (1985)
Unaroused truth tellers 36 10 10 1 1 1 001, 009, 028, 037, 042, 046, 048, 058,
066, 070
Aroused truth tellers 36 10 10 1 1 1 001, 009, 028, 037, 042, 046, 048, 058,
066, 070
Dulaney (1982) 20 20 10 0 1 1 001 (2), 004, 007 (3), 009, 019 (6), 020
(3), 022, 024, 042, 139
Ekman & Friesen (1972) 21 3 3 F 1 0 1 034, 069, 070
Ekman et al. (1988) 31 2 2 F 1 0 L 1 117, 118
Ekman et al. (1976) 16 1 1 F 1 0 1 034
Ekman et al. (1985) 14 40 20 0 0 1 011 (2), 044 (2), 045 (2), 056 (2), 057 (2),
059 (2), 060 (2), 088 (2), 129 (2), 130
(2), 131 (2), 132 (2), 133 (2), 146 (2),
147 (2), 148 (2), 149 (2), 156 (2), 157
(2), 158 (2)
Ekman et al. (1991) 31 2 2 F 1 0 L 1 018, 063
Elliot (1979) 62 4 4 2 0 L 1 012, 049, 050, 115
Exline et al. (1970) 34 2 2 2 1 1 027, 061
Feeley & deTurck (1998)
Unsanctioned liars 58 15 14 0 1 1 001 (2), 009, 010, 022, 024, 027, 035,
037, 038, 044, 046, 048, 058, 068
Sanctioned liars 68 15 14 0 1 1 001 (2), 009, 010, 022, 024, 027, 035,
037, 038, 044, 046, 048, 058, 068
Fiedler (1989)
Study 1 23 1 1 0 0 1 012
Study 2 64 1 1 0 0 012
Fiedler et al. (1997) 12 8 6 0 0 1 001, 004 (3), 008, 012, 016, 061
Fiedler & Walka (1993) 10 10 10 0 0 L 1 010, 012, 014, 015, 016, 039, 045, 063,
068, 118
Finkelstein (1978) 20 14 10 E 0 0 L 0 017, 043, 045 (3), 046 (2), 047 (2), 051,
058, 064, 067, 068
Frank (1989) 32 12 12 3 0 0 001, 009, 018, 027, 034, 040, 044, 045,
048, 058, 066, 068
Gagnon (1975)
Men 16 11 9 2 0 1 001 (2), 010, 027, 040 (2), 044, 045, 046,
047, 048
Women 16 11 9 2 0 1 001 (2), 010, 027, 040 (2), 044, 045, 046,
047, 048
Galin & Thorn (1993) 60 26 12 0 0 L 0 011 (4), 033 (2), 056 (2), 059 (2), 060 (2),
066 (2), 129 (2), 130 (2), 132 (2), 133
(2), 147 (2), 149 (2)
Goldstein (1923) 10 2 1 2 0 1 009 (2)
Greene et al. (1985) 39 45 15 0 0 P 1 001 (3), 009 (3), 018 (3), 027 (3), 034 (3),
044 (3), 045 (3), 046 (3), 048 (3), 055
(3), 058 (3), 067 (3), 068 (3), 069 (3),
119 (3)
Hadjistavropoulos & Craig (1994) 90 24 11 G 0 0 L 1 011 (2), 033 (2), 056 (2), 057 (2), 059 (2),
060 (2), 066 (2), 129 (4), 130 (2), 131
(2), 132 (2)
Hadjistavropoulos et al. (1996) 90 2 2 G 0 0 L 1 054, 088
Hall (1986) 80 3 3 3 1 1 010, 032, 063
Harrison et al. (1978) 72 2 2 0 0 L 1 001, 009
Heilveil (1976) 12 1 1 0 0 1 065
Heilveil & Muehleman (1981) 26 9 9 0 0 1 001, 009, 027, 037, 040, 046, 048, 055,
058
Heinrich & Borkenau (1998) 40 6 1 0 0 L 0 014 (6)
Hemsley (1977) 20 13 10 0 0 1 008, 009, 027 (2), 029, 042, 043, 044, 058
(2), 066, 068 (2)
Hernandez-Fernaud & Alonso-Quecuty
(1997) 73 12 4 2 0 1 004 (9), 005, 076, 083
U. Hess (1989) 35 5 4 H 0 0 0 011 (2), 057, 117, 132
U. Hess & Kleck (1990)
Study 1 35 2 2 H 0 0 0 089, 150
Study 2 48 2 2 H 0 0 0 089, 150
U. Hess & Kleck (1994) 35 3 3 H 0 0 0 029, 058, 066
Hocking & Leathers (1980) 16 25 21 1 0 1 009, 010, 018, 027 (2), 036, 037 (2), 038,
044, 045, 048 (2), 054, 058, 061 (2),
062, 069, 070, 107, 108, 109, 144, 145
Horvath (1973) 100 11 8 3 1 1 002, 025, 027, 049, 050, 052 (3), 061 (2),
121
Horvath (1978) 60 1 1 0 0 1 062
Horvath (1979) 32 1 1 2 0 L 1 062
Horvath et al. (1994) 60 6 4 3 1 1 025, 050, 064 (3), 090
Janisse & Bradley (1980) 64 1 1 0 0 L 1 065
Kennedy & Coe (1994) 19 10 8 0 0 1 027, 056, 058, 064, 066, 120, 122 (3), 129
Knapp et al. (1974) 38 32 23 2 0 L 1 001 (3), 002, 003, 004, 007, 018, 020 (2),
021 (2), 022 (2), 023, 024 (2), 027 (2),
030 (2), 036, 037, 038, 048, 052, 055,
058, 070, 126 (2), 138
Kohnken et al. (1995) 59 19 18 0 0 1 004, 013, 041, 071, 072, 073 (2), 074,
077, 078, 079, 080, 082, 123, 124, 127,
128, 142, 143
Koper & Sahlman (2001) 83 37 27 3 1 L 001, 009, 012, 014, 015 (2), 017, 018, 025
(2), 028, 031 (3), 035, 039, 043, 044,
054, 055, 058, 061 (3), 062, 066, 067,
068, 104, 105 (2), 119, 134 (4), 153
Krauss (1981)
High arousal, face to face 8 11 11 I1 1 0 L 1 001, 009, 027, 031, 042, 046, 051, 061,
086, 089, 093
High arousal, intercom 8 11 11 I2 1 0 L 0 001, 009, 027, 031, 042, 046, 051, 061,
086, 089, 093
Low arousal, face to face 8 11 11 I3 0 0 L 1 001, 009, 027, 031, 042, 046, 051, 061,
086, 089, 093
Low arousal, intercom 8 11 11 I4 0 0 L 0 001, 009, 027, 031, 042, 046, 051, 061,
086, 089, 093
Kraut (1978) 5 9 9 1 0 L 1 001, 004, 009, 012, 014, 040, 044, 058,
068
Kraut & Poe (1980) 62 14 14 2 1 1 001, 003, 008, 009, 018, 025, 028, 031,
035, 044, 058, 061, 064, 068
Kuiken (1981) 48 1 1 0 0 0 019
Kurasawa (1988) 8 1 1 0 0 L 0 092
Landry & Brigham (1992) 12 14 13 0 0 L 0 004, 013, 072, 073, 074, 075, 076, 077
(2), 078, 079, 080, 082, 083
Manaugh et al. (1970) 80 2 2 0 0 1 006, 009
Marston (1920) 10 1 1 0 0 1 009
Matarazzo et al. (1970)
Discuss college major 60 4 4 0 0 1 006, 009, 027, 119
Discuss living situation 60 4 4 0 0 1 006, 009, 027, 119
McClintock & Hunt (1975) 20 5 5 0 0 1 018, 027, 044, 058, 070
Mehrabian (1971)
Study 1
Men, reward 14 11 10 2 0 0 001, 010, 026, 035, 044, 046, 048 (2),
054, 055, 064
Men, punishment 14 11 10 1 0 0 001, 010, 026, 035, 044, 046, 048 (2),
054, 055, 064
Women, reward 14 11 10 2 0 0 001, 010, 026, 035, 044, 046, 048 (2),
054, 055, 064
Women, punishment 14 11 10 1 0 0 001, 010, 026, 035, 044, 046, 048 (2),
054, 055, 064
Study 2
Men 24 10 9 2 0 0 001, 010, 026, 029, 046, 048 (2), 054,
055, 064
Women 24 10 9 2 0 0 001, 010, 026, 029, 046, 048 (2), 054,
055, 064
Study 3 32 13 12 2 1 1 001, 002, 010, 026, 032, 044, 046, 048
(2), 054, 055, 064, 070
Miller et al. (1983) 32 10 10 3 0 P 1 001, 007, 009, 029, 036, 037, 038, 046,
048, 070
Motley (1974) 20 3 3 0 0 1 001, 032, 063
D. O'Hair & Cody (1987) P
Men 21 2 1 2 0 1 062 (2)
Women 26 2 1 2 0 1 062 (2)
D. O'Hair et al. (1990) P
Men 36 2 1 B1 2 0 1 062 (2)
Women 25 2 1 B2 2 0 1 062 (2)
H. D. O'Hair et al. (1981) 72 22 11 C 0 0 P 1 001 (2), 009 (2), 018 (2), 027 (2), 034 (2),
044 (2), 048 (2), 055 (2), 058 (2), 069
(2), 070 (2)
Pennebaker & Chew (1985) 20 2 2 0 0 1 029, 089
Porter & Yuille (1996) 60 18 18 2 1 1 001, 004, 007, 008, 013, 022, 030, 038,
071, 072, 073, 078, 079, 080, 081, 083,
103, 141
Potamkin (1982)
Heroin addicts 10 6 6 2 0 L 1 018, 044, 048, 070, 151, 152,
Nonaddicts 10 6 6 2 0 L 1 018, 044, 048, 070, 151, 152
Riggio & Friedman (1983) 63 12 11 0 0 L 0 010, 027, 029, 035, 038, 044, 045, 046,
048, 058, 068 (2)
Ruby & Brigham (1998) 12 16 15 0 0 0 001, 004, 013, 071, 072, 073, 074, 075,
076, 077 (2), 078, 079, 080, 081, 083
Rybold (1994) 34 4 4 3 0 1 001, 010, 035, 039
Sayenga (1983) 14 24 16 2 0 1 001 (2), 004 (2), 006 (3), 009 (2), 010 (2),
020, 022, 032 (2), 036, 037, 038, 041,
052, 062 (2), 063, 102
Scherer et al. (1985) 15 2 2 F 1 0 L 1 053, 062
Schneider & Kintz (1977)
Men 14 2 2 0 0 1 048, 154
Women 16 2 2 0 0 1 048, 154
Sitton & Griffin (1981) 28 1 1 0 0 1 027
Sporer (1997) 40 22 17 0 0 0 001 (2), 004, 005, 006, 013, 071, 076 (3),
077 (2), 078, 079, 080, 082, 083 (2),
087, 098, 099, 100
Stiff & Miller (1986) 40 19 16 2 1 1 001 (2), 004, 009, 012, 022, 023, 024,
037, 040, 044, 046, 048, 058, 066, 068,
134 (3)
Streeter et al. (1977)
High arousal, face to face 8 1 1 I1 1 0 L 1 063
High arousal, intercom 8 1 1 I2 1 0 L 0 063
Low arousal, face to face 8 1 1 I3 0 0 L 1 063
Low arousal, intercom 8 1 1 I4 0 0 L 0 063
Todd-Mancillas & Kilber (1979) 37 11 9 2 0 1 001 (3), 002, 004, 007, 020, 021, 022,
023, 052
Vrij (1993) 20 1 1 J 2 1 L 1 046
Vrij (1995) 64 11 11 J 2 1 L 1 018, 028, 035, 038, 044, 045, 048, 058,
068, 095, 114
Vrij et al. (1997) 56 1 1 0 1 L 1 114
Vrij & Heaven (1999) 40 6 4 0 0 1 004, 030, 035 (2), 038 (2)
Vrij et al. (1996) 91 3 1 1 1 L 1 043 (3)
Vrij & Winkel (1990/1991) 92 11 10 0 1 L 1 010, 027, 035, 038, 044 (2), 045, 046,
058, 070, 094
Vrij & Winkel (1993) 64 1 1 J 2 1 L 1 001
Wagner & Pease (1976) 49 1 1 0 0 0 019
Weiler & Weinstein (1972) 64 13 8 2 0 1 004 (2), 008, 016, 027, 031, 116 (4), 123,
135 (2)
Zaparniuk et al. (1995) 40 18 16 0 0 1 004, 013, 071, 072, 073, 074, 075, 077
(2), 078, 079, 080, 081 (2), 082, 083,
124, 125
Zuckerman et al. (1979) 60 6 5 0 0 L 0 016, 031 (2), 053, 054, 063
Zuckerman et al. (1982) 59 1 1 K 0 0 L 0 014
Zuckerman et al. (1984) 59 1 1 K 0 0 L 0 084
Note. N = number of senders; Ind. = independent; Mot = motivation of the senders; Trans = transgression; Msg = message; Int = interactivity; P =
compared cues to deception for planned messages with cues to deception for unplanned messages; L = length (duration) of the messages was reported.
(a) Samples with the same letter code report data from the same senders; that is, they are not independent. All samples without a letter code are independent samples.
(b) Motivation of the senders: 0 = none; 1 = identity-relevant; 2 = instrumental; 3 = identity-relevant and instrumental.
(c) Transgression: 0 = lie is not about a transgression; 1 = lie is about a transgression.
(d) Interactivity: 0 = no interaction between sender and target; 1 = interaction.
(e) Cue numbers are of the cues described in the current article. The number in parentheses indicates the number of estimates of that cue (if more than one). The cue names corresponding to the cue numbers are shown in Appendix A.
Table 2
Summary of Study Characteristics
Characteristic k
Senders
Population sampled
Students 101
Suspects 3
Community members and students 3
Patients in a pain clinic 2
Community members 2
Immigrants to United States 2
Salespersons and customers 1
Travelers in an airport 1
Shoppers in a shopping center 1
Heroin addicts (and nonaddicts) 1
Publicly exposed liars 1
Unable to determine from report 2
Country
United States 88
Canada 9
Germany 7
England 4
Spain 3
Japan 2
Immigrants to United States 2
Jordan 1
Italy 1
Romania 1
The Netherlands and England 1
The Netherlands and Surinam 1
Relationship between sender and interviewer or target
Strangers 103
Acquaintances 2
Acquaintances or friends and strangers 2
Friends 1
Intimates, friends, and strangers 1
No interviewer 9
Unable to determine from report 2
Motivation for telling successful lies
None 68
Identity relevant 13
Instrumental 31
Identity and instrumental 8
Truths and lies
Length of messages
Under 20 s 14
20–60 s 14
More than 60 s 8
Unable to determine from report 84
Message preparation
No preparation 44
Messages were prepared 43
Some prepared, some unprepared 18
Messages were scripted 7
Unable to determine from report 8
Paradigm
Described attitudes or facts 44
Described films, slides, or pictures 16
Cheating 8
Mock crime 8
Card test or guilty knowledge test 8
Truths and lies (continued)
Paradigm (continued)
Person descriptions 7
Simulated job interview 6
Described personal experiences 5
Naturalistic 4
Responded to personality items 3
Reactions to pain 3
Other paradigms 7
Unable to determine from report 1
Lies were about transgressions
No 99
Yes 21
Publication statistics
Year of report
Before 1970 3
1970–1979 34
1980–1989 46
1990–2000 37
Source of study
Journal article 96
Dissertation, thesis 10
Book chapter 4
Unpublished paper 3
Multiple sources 7
Methodological aspects
Sample size (no. of senders)
5–20 41
21–59 43
60–192 36
Experimental design
Within-sender (senders told truths and lies) 78
Between-senders (senders told truths or lies) 42
In between-senders designs, no. of liars
Fewer than 20 15
20–32 16
More than 32 11
In between-senders designs, no. of truth tellers
Fewer than 20 15
20–32 13
More than 32 14
No. of messages communicated by each sender
1 21
2–4 59
More than 4 40
Degree of interaction between sender and interviewer or target
No interaction 12
Partial interaction 83
Fully interactive 8
No one else present 12
Unable to determine from report 4
Reliability of measurement of cues (a)
Under .70 36 (b)
.70–.79 43 (b)
.80–.89 251 (b)
.90–1.00 239 (b)
Unable to determine from report 769 (b)
(a) Includes correlational measures as well as percentage of agreement (divided by 100).
(b) Number of estimates (not number of independent estimates).
what they were seeing truthfully or deceptively. In cheating paradigms,
senders were or were not induced to cheat and then lie about it. Mock crime
paradigms included ones in which some of the senders were instructed to
"steal" money or to hide supposed contraband on their persons and to then
lie to interviewers about their crime. Some paradigms involved card tests
(in which the senders chose a particular card and answered "no" when
asked if they had that card) and guilty knowledge tests (in which senders
who did or did not know critical information, such as information about a
crime, were asked about that information); most of these were modeled
after tests often used in polygraph testing. In person-description paradigms,
senders described other people (e.g., people they liked and people they
disliked) honestly and dishonestly. Some paradigms were simulations of
job interviews; typically in those paradigms, senders who were or were not
qualified for a job tried to convince an interviewer that they were qualified.
In other paradigms, participants described personal experiences (e.g., times
during which they acted especially independently or dependently; trau-
matic experiences that did or did not actually happen to them). Naturalistic
paradigms were defined as ones in which the senders were not instructed
to tell truths or lies but instead did so of their own accord. These included
interrogations of suspects later determined to have been lying or telling the
truth (Hall, 1986; Horvath, 1973; Horvath, Jayne, & Buckley, 1994) and a
study (Koper & Sahlman, 2001) of people who made public statements
later exposed as lies. In another paradigm, senders indicated their responses
to a series of items on a personality scale, then later lied or told the truth
about their answers to those items. In a final category, senders who really
were or were not experiencing pain sometimes expressed their pain freely
and other times masked their pain or feigned pain that they were not
experiencing. A few other paradigms used in fewer than three independent
samples were assigned to a miscellaneous category.
We recoded the paradigms into two categories to test our prediction that
lies about transgressions would produce clearer cues than lies that were not
about transgressions. The lies about mock crimes or real crimes, cheating,
and other misdeeds were categorized as lies about transgressions, the others
as lies that were not about transgressions.
The two publication statistics that we coded were the year of the report
and the source of the report (e.g., journal article, dissertation, thesis). In
some instances, the same data were reported in two places (typically a
dissertation and a journal article); in those cases, we coded the more
accessible report (i.e., the journal article).
The methodological aspects of the studies that we coded included the
sample size and the design of the study. The design was coded as within
senders if each sender told both truths and lies or between senders if each
sender told either truths or lies. This determination was based on the
messages that were included in the analyses of the behavioral cues. For
example, if senders told both truths and lies, but the cues to deception were
assessed from just one truth or one lie told by each sender, the design was
coded as between senders. For each between-senders study, we coded the
number of liars and the number of truth tellers. For all studies, we coded
the total number of messages communicated by each sender.
We also coded the degree of interaction between the sender and the
interviewer or target person. Fully interactive paradigms were ones in
which the senders and interviewers interacted freely, with no scripts. In
partially interactive paradigms, the senders and interviewers interacted, but
the interviewers' behavior was typically constrained, usually by a prede-
termined set of questions they were instructed to ask. In noninteractive
paradigms, an interviewer or target person was present in the room but did
not interact with the sender. In still other paradigms, the senders told truths
and lies (usually into a tape recorder) with no one else present.
We categorized each cue as having been either objectively or subjec-
tively assessed. Behaviors that could be precisely defined and measured
(often in units such as counts and durations) were coded as objectively
assessed. Cues were coded as subjectively assessed when they were based
on observers' impressions.
Behavioral cues were usually coded from videotapes, audiotapes, or
transcripts of the truths and lies. If reliabilities of the measures of the cues
were reported (percentages or correlations), we recorded them.
We attempted to compute the effect size for each cue in each study. To
this end, we indicated whether the effect sizes were (a) ones that could be
precisely calculated (which we call known effects), (b) ones for which only
the direction of the effect was known, or (c) effects that were simply
reported as not significant (and for which we were unable to discern the
direction).
Coding decisions were initially made by James J. Lindsay, Laura
Muhlenbruck, and Kelly Charlton, who had participated in standard train-
ing procedures (discussion of definitions, practice coding, discussion of
disagreements) before beginning their task. Each person coded two thirds
of the studies. Therefore, each study was coded by two people and
discrepancies were resolved in conference. For objective variables such as
the year and the source of the report, the percentage of disagreements was
close to zero. The percentage ranged as high as 12 for more subjective
decisions, such as the initial categorization of paradigms into more than 12
different categories. However, agreement on the two levels of the paradigm
variable that were used in the moderator analysis (transgressions vs. no
transgressions) was again nearly perfect. Bella M. DePaulo also indepen-
dently coded all study characteristics, and any remaining discrepancies
were resolved in consultation with Brian E. Malone, who was not involved
in any of the previous coding. A meta-analysis of accuracy at detecting
deception (Bond & DePaulo, 2002) included some of the same studies that
are in this review. Some of the same study characteristics were coded for
that review in the same manner as for this one. Final decisions about each
characteristic were compared across reviews. There were no discrepancies.
Meta-Analytic Techniques
Effect Size Estimate
The effect size computed for each behavioral difference was d, defined
as the mean for the deceptive condition (i.e., the lies) minus the mean for
the truthful condition (i.e., the truths), divided by the mean of the standard
deviations for the truths and the lies (Cohen, 1988). Positive ds therefore
indicate that the behavior occurred more often during lies than truths,
whereas negative ds indicate that the behavior occurred less often during
lies than truths. In cases in which means and standard deviations were not
provided but other relevant statistics were (e.g., rs, χ²s, ts, or Fs with 1 df)
or in which corrections were necessary because of the use of within-sender
designs (i.e., the same senders told both truths and lies), we used other
methods to compute ds (e.g., Hedges & Becker, 1986; Rosenthal, 1991).
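To make this computation concrete, the following sketch illustrates the definition of d given above and one standard conversion from a t statistic for a between-senders design (the code and function names are ours, offered for illustration only; they are not part of the original analyses):

    # Illustrative sketch only: the effect size definition described above,
    # plus a standard t-to-d conversion (e.g., Rosenthal, 1991).
    import math

    def cohen_d(mean_lie, mean_truth, sd_truth, sd_lie):
        # d = (M_lie - M_truth) divided by the mean of the two standard deviations
        return (mean_lie - mean_truth) / ((sd_truth + sd_lie) / 2.0)

    def d_from_t_between(t, n_liars, n_truth_tellers):
        # Conversion from an independent-groups t value
        return t * math.sqrt(1.0 / n_liars + 1.0 / n_truth_tellers)

    # Hypothetical example: liars give slightly fewer details than truth tellers
    print(cohen_d(mean_lie=4.2, mean_truth=4.5, sd_truth=1.1, sd_lie=1.0))  # about -0.29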
With just a few exceptions, we computed effect sizes for every compar-
ison of truths and lies reported in every study. For example, if the length
of deceptive messages relative to truthful ones was measured in terms of
number of words and number of seconds, we computed both ds. If the same
senders conveyed different kinds of messages (e.g., ones in which they
tried to simulate different emotions and ones in which they tried to mask
the emotions they were feeling) and separate ds were reported for each, we
computed both sets of ds. We excluded a few comparisons in cases in
which the behavior described was uninterpretable outside of the context of
the specific study and in which an effect size could be computed but the
direction of the effect was impossible to determine. Also, if preliminary
data for a particular cue were reported in one source and more complete
data on the same cue were reported subsequently, we included only the
more complete data.
If the difference between truths and lies was described as not significant,
but no further information was reported, the d for that effect was set to
zero. If the direction of the effect could be determined, but not the precise
magnitude, we used a conservative strategy of assigning the value 0.01
when the behavior occurred more often during lies than truths and −0.01
when it occurred less often during lies than truths. This procedure resulted
in a total of 1,338 effect sizes. Of these, 787 could be estimated precisely,
396 were set to zero, and 155 were assigned the values of ±0.01. Twenty-
seven (2%) of the effect sizes (ds) were greater than 1.50 in absolute value
and were Winsorized to 1.50.
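A minimal sketch of these coding rules (our own illustration; the function and its arguments are hypothetical, not the procedure's actual implementation):

    # Sketch of the conservative effect size coding described above.
    def coded_d(precise_d=None, direction=None):
        # precise_d: the d when it could be computed exactly
        # direction: +1 (more during lies), -1 (less during lies), or None
        if precise_d is not None:
            d = precise_d
        elif direction is None:        # reported only as "not significant"
            d = 0.0
        else:                          # only the direction of the effect was known
            d = 0.01 * direction
        # Winsorize extreme values at 1.50 in absolute value
        return max(min(d, 1.50), -1.50)

    print(coded_d(direction=-1))       # -0.01
    print(coded_d(precise_d=2.30))     # 1.50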
Estimates of Central Tendency
The most fundamental issue addressed by this review is the extent to
which each cue is associated with deceit. To estimate the magnitude of the
effect size for each cue, we averaged within cues and within independent
samples. For example, within a particular independent sample, all estimates
of response length were averaged. As a result, each independent sample
could contribute to the analyses no more than one estimate of any given
cue. Table 1 shows the number of effect sizes computed for each report and
the number of cues assessed in each report. If the number of effect sizes is
greater than the number of cues, then there was more than one estimate of
at least one of the cues.
The mean d for each cue within each independent sample was weighted
to take into account the number of senders in the sample.5 Sample sizes
ranged from 5 to 192 (M = 41.73, SD = 31.93) and are shown in the
second column of Table 1. Because larger samples provide more reliable
estimates of effect sizes than do smaller ones, larger studies were weighted
more heavily in the analyses. For within-sender designs, we weighted each
effect size by the reciprocal of its variance. For between-senders designs,
we computed the weight from the formula: [2(n1 + n2)n1n2] /
[2(n1 + n2)² + n1n2d²]. A mean d is significant if the confidence interval does not
include zero.
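A brief sketch of this weighting and combining step, assuming the standard fixed-effect procedure in Hedges and Olkin (1985) (our illustration; the function and variable names are hypothetical):

    # Sketch: weight each independent sample's d and combine them.
    import math

    def weight_between(n1, n2, d):
        # Reciprocal of the variance of d for a between-senders design
        return (2.0 * (n1 + n2) * n1 * n2) / (2.0 * (n1 + n2) ** 2 + n1 * n2 * d ** 2)

    def weighted_mean_d(ds, weights):
        mean_d = sum(w * d for w, d in zip(weights, ds)) / sum(weights)
        se = math.sqrt(1.0 / sum(weights))
        ci = (mean_d - 1.96 * se, mean_d + 1.96 * se)  # significant if the CI excludes zero
        return mean_d, ci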
To determine whether the variation in effect sizes for each cue was
greater than that expected by chance across independent samples, we
computed the homogeneity statistic Q, which is distributed as chi-square
with degrees of freedom equal to the number of independent samples (k)
minus 1. The p level associated with the Q statistic describes the likelihood
that the observed variance in effect sizes was generated by sampling error
alone (Hedges & Olkin, 1985).
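Under the same fixed-effect assumptions, Q is the weighted sum of squared deviations of the individual ds from their weighted mean; the sketch below (ours, for illustration) uses scipy only to obtain the associated p level:

    # Sketch of the homogeneity test described above.
    from scipy.stats import chi2

    def homogeneity_q(ds, weights):
        mean_d = sum(w * d for w, d in zip(weights, ds)) / sum(weights)
        q = sum(w * (d - mean_d) ** 2 for w, d in zip(weights, ds))
        k = len(ds)                     # number of independent samples
        p = chi2.sf(q, df=k - 1)        # small p: more variation than sampling error alone
        return q, p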
Moderator Analyses
We have described several factors that have been predicted to moderate
the size of the cues to deception: whether an incentive was provided for
success, the type of incentive that was provided (identity relevant or
instrumental), whether the messages were planned or unplanned, the du-
ration of the messages, whether the lies were about transgressions, and
whether the context was interactive. All of the moderator variables except
planning were ones that could be examined only on a between-studies
basis. For example, it was usually the case that in any given study, all of
the senders who lied were lying about a transgression or they were all lying
about something other than a transgression. Conclusions based on those
analyses (e.g., that the senders' apparent tension is a stronger cue to lies
about transgressions than to lies that are not about transgressions) are open
to alternative interpretations. Any way that the studies differed (other than
the presence or absence of a transgression) could explain the transgression
differences.
Stronger inferences can be drawn when the levels of the moderator
variable occur within the same study. Seven independent samples (indi-
cated in Table 1) included a manipulation of whether sendersmessages
were planned or unplanned.
6
For each cue reported in each of these studies,
we computed a dfor the difference in effect sizes between the unplanned
and planned messages. We then combined these ds in the same manner as
we had in our previous analyses.
Of the remaining moderator variables, all except one (the duration of the
message) were categorical variables. For the categorical moderator vari-
ables, we calculated fixed-effect models using the general linear model
(regression) program of the Statistical Analysis System (SAS Institute,
1985). The model provides a between-levels sum of squares, QB, that can
be interpreted as a chi-square, testing whether the moderator variable is a
significant predictor of differences in effect sizes. A test of the homoge-
neity of effect sizes within each level, QW, is also provided. For the
continuous moderator variable (the duration of the messages), we also used
the general linear model (leaving duration in its continuous form) and
tested for homogeneity (Hedges & Olkin, 1985). A significant QB indicates
that duration did moderate the size of the effect, and the direction of the
unstandardized beta (b) weight indicates the direction of the moderation.
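As an illustration of the between-levels test (a sketch in Python rather than the SAS general linear model the authors describe; the decomposition, not this code, is what the text specifies), QB is the weighted sum of squared deviations of the level means from the grand mean, and QW is the pooled within-level homogeneity statistic:

    # Sketch of a fixed-effect categorical moderator analysis (QB and QW).
    def moderator_q(ds, weights, levels):
        grand = sum(w * d for w, d in zip(weights, ds)) / sum(weights)
        q_b = 0.0
        q_w = 0.0
        for level in set(levels):
            idx = [i for i, lv in enumerate(levels) if lv == level]
            w_total = sum(weights[i] for i in idx)
            d_level = sum(weights[i] * ds[i] for i in idx) / w_total
            q_b += w_total * (d_level - grand) ** 2
            q_w += sum(weights[i] * (ds[i] - d_level) ** 2 for i in idx)
        return q_b, q_w   # each evaluated against a chi-square distribution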
Results
Description of the Literature
Characteristics of the Senders
As indicated in Table 2, the senders in most of the studies were
students from the United States who were strangers to the inter-
viewer or target of their communications. In 52 of the 120 inde-
pendent samples, incentives for success were provided to the
senders.
Characteristics of the Truths and Lies
The duration of the messages was 1 min or less for 28 of the 36
samples for which that information was reported. The number of
samples in which senders were given time to prepare their com-
munications was about the same as the number in which they were
not given any preparation time.
In 44 of the 120 samples, senders told truths and lies about their
attitudes or personal facts. In 16 others, they looked at films,
slides, or pictures and described them honestly or dishonestly. All
other paradigms were used in fewer than 9 samples. In 21 of the
samples, senders told lies about transgressions.
Publication Statistics and Methodological Aspects
Table 2 also shows that only 3 of the 120 independent samples
were published before 1970. Most reports were journal articles.
In 84 of the samples, there were fewer than 60 senders. The
samples included a mean of 22.4 male senders (SD = 24.6) and a
mean of 19.2 female senders (SD = 19.2). In 25 samples, all of the
senders were men, and in 15 samples, all were women. In 16
samples, the sex of the senders was not reported.
Within-sender designs (in which senders told truths and lies)
were nearly twice as common as between-senders designs (in
which senders told truths or lies). In the between-senders designs,
the number of liars was typically the same as the number of truth
tellers. Senders usually communicated between one and four
messages.
5 Only weighted mean ds are reported, and all estimates of a given cue
are included in each mean. A table of all 1,338 individual effect sizes is
available from Bella M. DePaulo. The table includes the weights for each
effect size and information about the independence of each estimate. The
table also indicates whether each estimate was a known effect (i.e., the
magnitude could be determined precisely) or if only the direction of the
effect or its nonsignificance was reported. Therefore, the information in
that table can be used to calculate weighted effect sizes for each cue that
include only the known estimates or to compute unweighted means that
include all effect sizes or only the precisely estimated ones.
6 We did not include studies in which planning was confounded with
another variable (e.g., Anolli & Ciceri, 1997).
In most studies, there was some interaction between the sender
and the interviewer or target. In 24 of the 120 samples, there was
no interaction or there was no one else present when the senders
were telling their truths or lies.
When the reliability of the measurement was reported, the
reliability was usually high (see Table 2). Of the 1,338 estimates of
the 158 cues to deception, 273 (20%) were based on the subjective
impressions of untrained raters.
Meta-Analysis of the Literature
Overview
We first present the combined effect sizes for each individual
cue to deception. The individual cues to deception are grouped by
our five sets of predictions. Cues suggesting that liars may be less
forthcoming than truth tellers are shown in Table 3; cues suggest-
ing that liars may tell less compelling tales than truth tellers are
shown in Table 4; cues suggesting that liars communicate in a less
positive and more tense way are shown in Tables 5 and 6, respec-
tively; and cues suggesting that liars tell tales that are too good to
be true are shown in Table 7. Any given cue is included in Tables
37 only if there are at least three independent estimates of it, at
least two of which could be calculated precisely (as opposed to
estimates of just the direction of the effect or reports that the effect
was not significant). All other cues are reported in Appendix B.
Five of the 88 cues that met the criteria for inclusion in the tables
but did not fit convincingly into any particular table are also
included in Appendix B (brow raise, lip stretch, eyes closed, lips
apart, and jaw drop).
The placement of cues into the five different categories was to
some extent arbitrary. For example, because blinking may be
indicative of anxiety or arousal, we included it in the "tense"
category (see Appendix A). However, decreased blinking can also
be suggestive of greater cognitive effort; therefore, we could have
placed it elsewhere. Rate of speaking is another example. We
included that cue under "forthcoming" because people who are
speaking slowly may seem to be holding back. However, faster
speech can also be indicative of confidence (C. E. Kimble &
Seidel, 1991); thus, we could have included it under "compelling"
(certainty) instead.
In Table 8, we have arranged the 88 cues (the ones based on at
least three estimates) into four sections by the crossing of the size
of the combined effect (larger or smaller) and the number of
independent estimates contributing to that effect (more or fewer).
We also present a stem and leaf display of the 88 combined effect
sizes in Table 9. The results of our analyses of the factors that
might moderate the magnitude of the differences between liars and
truth tellers are presented in subsequent tables.
Individual Cues to Deception
Are liars less forthcoming than truth tellers? Table 3 shows
the results of the cues indicating whether liars were less forthcom-
ing than truth tellers. We examined whether liars had less to say,
whether what they did say was less detailed and less complex, and
whether they seemed to be holding back.
We had more independent estimates of the length of the re-
sponses (k = 49) than of any other cue, but we found just a tiny
and nonsignificant effect in the predicted direction (d = −0.03).
When amount of responding was operationalized in terms of the
percentage of the talking time taken up by the social actor com-
pared with the actor's partner, then liars did take up less of that
time than did truth tellers (d = −0.35). The entire interaction
tended to terminate nonsignificantly sooner when 1 person was
lying than when both were telling the truth (d = −0.20).
Our prediction that liars would provide fewer details than would
truth tellers was clearly supported (d = −0.30). Extrapolating from
reality monitoring theory, we also predicted that there would be
less sensory information in deceptive accounts than in truthful
ones. There was a nonsignificant trend in that direction (d =
−0.17). The finding that liars pressed their lips more than truth
Table 3
Are Liars Less Forthcoming Than Truth Tellers?
Cue   N   k1   k2   d   CI   Q
Amount of responding
001 Response length 1,812 49 26 −0.03 −0.09, 0.03 92.1*
002 Talking time 207 4 3 −0.35* −0.54, −0.16 8.1
003 Length of interaction 134 3 2 −0.20 −0.41, 0.02 0.7
Detailed, complex responses
004 Details 883 24 16 −0.30* −0.38, −0.21 76.2*
005 Sensory information (RM) 135 4 3 −0.17 −0.39, 0.06 13.2*
006 Cognitive complexity 294 6 3 −0.07 −0.23, 0.10 0.9
007 Unique words 229 6 3 −0.10 −0.26, 0.06 6.2
Holding back
008 Blocks access to information 218 5 4 0.10 −0.13, 0.33 19.8*
009 Response latency 1,330 32 20 0.02 −0.06, 0.10 112.4*
010 Rate of speaking 806 23 14 0.07 −0.03, 0.16 21.7
011 Presses lips 199 4 3 0.16* 0.01, 0.30 30.9*
Note. Cue numbers are of the cues described in the current article as indexed in Appendix A. Bold type
indicates statistical significance. N = total number of participants in the studies; k1 = total number of
independent effect sizes (ds); k2 = number of ds that could be estimated precisely; CI = 95% confidence
interval; Q = homogeneity statistic (significance indicates rejection of the null hypothesis of homogeneity of ds);
RM = reality monitoring.
* p < .05.
tellers did (d = 0.16) was the only cue in the "holding back"
subcategory that was statistically reliable.
In sum, the most reliable indicator (in terms of the size of the
effect and the number of independent estimates) that liars may
have been less forthcoming than truth tellers was the relatively
smaller number of details they provided in their accounts. The
directions of the cues in Table 3 tell a consistent story: All except 1
of the 11 cues (rate of speaking) were in the predicted direction,
indicating that liars are less forthcoming than truth tellers, though
usually nonsignificantly so.
Are deceptive accounts less compelling than truthful ones? To
determine whether deceptive accounts were less compelling than
truthful ones, we asked whether the lies seemed to make less sense
than the truths and whether they were told in a less engaging and
less immediate manner. We also asked whether liars seemed more
uncertain or less fluent than truth tellers and whether they seemed
less active or animated. The results are shown in Table 4.
By all three of the indicators, the lies made less sense than the
truths. They were less plausible (d = −0.23); less likely to be
structured in a logical, sensible way (d = −0.25); and more likely
to be internally discrepant or to convey ambivalence (d = 0.34).
For the four cues to the engagingness of the message, the results
of two were as predicted. Liars seemed less involved verbally and
vocally in their self-presentations than did truth tellers (d =
Table 4
Do Liars Tell Less Compelling Tales Than Truth Tellers?
Cue   N   k1   k2   d   CI   Q
Makes Sense
012 Plausibility 395 9 6 −0.23* −0.36, −0.11 13.1
013 Logical structure 223 6 6 −0.25* −0.46, −0.04 21.5*
014 Discrepant, ambivalent 243 7 3 0.34* 0.20, 0.48 14.3*
Engaging
015 Involved, expressive (overall) 214 6 4 0.08 −0.06, 0.22 23.3*
016 Verbal and vocal involvement 384 7 3 −0.21* −0.34, −0.08 5.8
017 Facial expressiveness 251 3 2 0.12 −0.05, 0.29 9.6*
018 Illustrators 839 16 10 −0.14* −0.24, −0.04 23.9
Immediate
019 Verbal immediacy (all categories) 117 3 2 −0.31* −0.50, −0.13 2.4
020 Verbal immediacy, temporal 109 4 3 0.15 −0.04, 0.34 2.3
021 Generalizing terms 275 5 3 0.10 −0.08, 0.28 1.7
022 Self-references 595 12 9 −0.03 −0.15, 0.09 30.1*
023 Mutual and group references 275 5 4 −0.14 −0.31, 0.02 4.4
024 Other references 264 6 5 0.16 −0.01, 0.33 5.6
025 Verbal and vocal immediacy (impressions) 373 7 4 −0.55* −0.70, −0.41 26.3*
026 Nonverbal immediacy 414 11 3 −0.07 −0.21, 0.07 6.9
027 Eye contact 1,491 32 17 0.01 −0.06, 0.08 41.1
028 Gaze aversion 411 6 4 0.03 −0.11, 0.16 7.4
029 Eye shifts 218 7 3 0.11 −0.03, 0.25 43.8*
Uncertain
030 Tentative constructions 138 3 3 −0.16 −0.37, 0.05 12.5*
031 Verbal and vocal uncertainty (impressions) 329 10 4 0.30* 0.17, 0.43 11.0
032 Amplitude, loudness 177 5 3 −0.05 −0.26, 0.15 2.2
033 Chin raise 286 4 4 0.25* 0.12, 0.37 31.9*
034 Shrugs 321 6 3 0.04 −0.13, 0.21 3.3
Fluent
035 Non-ah speech disturbances 750 17 12 0.00 −0.09, 0.09 60.5*
036 Word and phrase repetitions 100 4 4 0.21* 0.02, 0.41 0.5
037 Silent pauses 655 15 11 0.01 −0.09, 0.11 18.5
038 Filled pauses 805 16 14 0.00 −0.08, 0.08 22.2
039 Mixed pauses 280 7 3 0.03 −0.11, 0.17 3.6
040 Mixed disturbances (ah plus non-ah) 283 7 5 0.04 −0.14, 0.23 7.0
041 Ritualized speech 181 4 3 0.20 −0.06, 0.47 2.3
042 Miscellaneous dysfluencies 144 8 5 0.17 −0.04, 0.38 13.9
Active
043 Body animation, activity 214 4 4 0.11 −0.03, 0.25 11.7*
044 Posture shifts 1,214 29 16 0.05 −0.03, 0.12 14.1
045 Head movements (undifferentiated) 536 14 8 −0.02 −0.12, 0.08 9.4
046 Hand movements 951 29 11 0.00 −0.08, 0.08 28.0
047 Arm movements 52 3 3 −0.17 −0.54, 0.20 3.5
048 Foot or leg movements 857 28 21 −0.09 −0.18, 0.00 20.5
Note. Cue numbers are of the cues described in the current article as indexed in Appendix A. Bold type
indicates statistical significance. N = total number of participants in the studies; k1 = total number of
independent effect sizes (ds); k2 = number of ds that could be estimated precisely; CI = 95% confidence
interval; Q = homogeneity statistic (significance indicates rejection of the null hypothesis of homogeneity of ds).
* p < .05.
−0.21). They also displayed fewer of the gestures used to illustrate
speech (d = −0.14).
The set of immediacy cues includes three composite measures
and a number of individual immediacy measures. The individual
cues were the ones described by Mehrabian (1972) that were
reported separately in several studies or other cues that seemed to
capture the immediacy construct (e.g., Fleming, 1994). The com-
posite measures were verbal immediacy (all categories), verbal and
vocal immediacy, and nonverbal immediacy. The verbal immedi-
acy composite is an index consisting of all of the linguistic cate-
gories described by Wiener and Mehrabian (1968). They are all
verbal constructions (e.g., active vs. passive voice, affirmatives vs.
negations) that are typically coded from transcripts. The verbal and
vocal immediacy measure is based on raters' overall impressions
of the degree to which the social actors seemed direct, relevant,
clear, and personal. The nonverbal immediacy measure includes
the set of nonverbal cues described by Mehrabian (1972) as indices
of immediacy (e.g., interpersonal proximity, leaning and facing
toward the other person).
The verbal composite and the verbal and vocal composite
both indicated that liars were less immediate than truth tellers (d =
−0.31 and −0.55, respectively). Liars used more linguistic con-
structions that seemed to distance themselves from their listeners
or from the contents of their presentations, and they sounded more
evasive, unclear, and impersonal. The nonverbal composite was
only weakly (nonsignificantly) suggestive of the same conclusion
(d = −0.07).
The results of other individual indices of immediacy were
inconsistent and unimpressive. It is notable that none of the mea-
sures of looking behavior supported the widespread belief that liars
do not look their targets in the eye. The 32 independent estimates
of eye contact produced a combined effect that was almost exactly
zero (d = 0.01), and the Q statistic indicated that the 32 estimates
were homogeneous in size. The estimates of gaze aversion were
Table 5
Are Liars Less Positive and Pleasant Than Truth Tellers?
Cue   N   k1   k2   d   CI   Q
049 Friendly, pleasant (overall) 216 6 3 −0.16 −0.36, 0.05 11.3
050 Cooperative (overall) 222 3 3 −0.66* −0.93, −0.38 11.2*
051 Attractive (overall) 84 6 3 −0.06 −0.27, 0.16 3.1
052 Negative statements and complaints 397 9 6 0.21* 0.09, 0.32 21.5*
053 Vocal pleasantness 325 4 2 −0.11 −0.28, 0.05 1.4
054 Facial pleasantness 635 13 6 −0.12* −0.22, −0.02 25.1*
055 Head nods 752 16 3 0.01 −0.09, 0.11 1.5
056 Brow lowering 303 5 4 0.04 −0.08, 0.16 9.0
057 Sneers 259 4 3 0.02 −0.11, 0.15 38.1*
058 Smiling (undifferentiated) 1,313 27 16 0.00 −0.07, 0.07 18.3
059 Lip corner pull (AU 12) 284 4 3 0.00 −0.12, 0.12 1.9
060 Eye muscles (AU 6), not during positive emotions 284 4 4 −0.01 −0.13, 0.11 3.6
Note. Cue numbers are of the cues described in the current article as indexed in Appendix A. Bold type
indicates statistical significance. N = total number of participants in the studies; k1 = total number of
independent effect sizes (ds); k2 = number of ds that could be estimated precisely; CI = 95% confidence
interval; Q = homogeneity statistic (significance indicates rejection of the null hypothesis of homogeneity of
effect sizes); AU = facial action unit (as categorized by Ekman & Friesen, 1978).
* p < .05.
Table 6
Are Liars More Tense Than Truth Tellers?
Cue Nk
1
k
2
dCI Q
061 Nervous, tense (overall) 571 16 12 0.27* 0.16, 0.38 37.3*
062 Vocal tension 328 10 8 0.26* 0.13, 0.39 25.4*
063 Frequency, pitch 294 12 11 0.21* 0.08, 0.34 31.2*
064 Relaxed posture 488 13 3 0.02 0.14, 0.10 19.6
065 Pupil dilation 328 4 4 0.39* 0.21, 0.56 1.1
066 Blinking 850 17 13 0.07 0.01, 0.14 54.4*
067 Object fidgeting 420 5 2 0.12 0.26, 0.03 4.0
068 Self-fidgeting 991 18 10 0.01 0.09, 0.08 19.5
069 Facial fidgeting 444 7 4 0.08 0.09, 0.25 7.7
070 Fidgeting (undifferentiated) 495 14 10 0.16* 0.03, 0.28 28.2*
Note. Cue numbers are of the cues described in the current article as indexed in Appendix A. Bold type indicates statistical significance. N = total number of participants in the studies; k1 = total number of independent effect sizes (ds); k2 = number of ds that could be estimated precisely; CI = 95% confidence interval; Q = homogeneity statistic (significance indicates rejection of the null hypothesis of homogeneity of ds).
* p < .05.
equally unimpressive (d = 0.03). Estimates of eye shifts produced just a nonsignificant trend (d = 0.11).
The one cue that was consistent with our prediction that liars
would seem less certain than truth tellers was verbal and vocal
uncertainty (as measured by subjective impressions); liars did
sound more uncertain than truth tellers (d = 0.30). One other behavior unexpectedly produced results in the opposite direction. More often than truth tellers, liars raised their chins (d = 0.25). In
studies of facial expressions in conflict situations, a particular
facial constellation, called a plus face, has been identified (Zivin,
1982). It consists of a raised chin, direct eye contact, and medially
raised brows. People who show this plus face during conflict
situations are more likely to prevail than those who do not show it
or who show a minus face, consisting of a lowered chin, averted
eyes, and pinched brows (Zivin, 1982). That research suggests that
raising the chin could be a sign of certainty.
Mahl and his colleagues (e.g., Kasl & Mahl, 1965; Mahl, 1987)
have suggested that the large variety of disturbances that occur in
spontaneous speech can be classified into two functionally distinct
categories: non-ah disturbances, which indicate state anxiety
(Mahl, 1987), and the commonplace filled pauses such as "ah," "um," and "er," which occur especially often when the available
options for what to say or how to say it are many and complex
(Berger, Karol, & Jordan, 1989; Christenfeld, 1994; Schachter et
al., 1991). Of the non-ah disturbances, the most frequently occur-
ring are sentence changes, in which the speaker interrupts the flow
of a sentence to change its form or content, and superfluous
repetitions of words or phrases. The other non-ah disturbances are
stutters, omissions of words or parts of words, sentences that are
not completed, slips of the tongue, and intruding incoherent
sounds.
Most studies reported a composite that included all non-ah
disturbances, or one that included non-ahs as well as ahs. When
individual disturbances were reported separately, we preserved the
distinctions. In the fluency subcategory, we also included silent
pauses, mixed pauses (silent plus filled, for studies in which the
two were not reported separately), ritualized speech (e.g., "you know," "well," "I mean"), and miscellaneous dysfluencies, which were sets of dysfluencies that were not based on particular systems such as Mahl's (1987).
Results of the fluency indices suggest that speech disturbances
have little predictive power as cues to deceit. The categories of disturbances reported most often (non-ah disturbances, filled pauses, and silent pauses) produced combined effect sizes of 0.00, 0.00, and 0.01, respectively. Only one type of speech disturbance, the repetition of words and phrases, produced a statistically reliable effect (d = 0.21).
Under the subcategory of "active," we included all movements except those defined as expressive (i.e., illustrators were included in the subcategory of "engaging" cues) and those believed to be indicative of nervousness (i.e., forms of fidgeting, included in the tense category). There were nearly 30 independent estimates of posture shifts (d = 0.05), hand movements (d = 0.00), and foot or leg movements (d = −0.09), but we found little relationship with deceit for these or any of the other movements.
In sum, there were three ways in which liars told less compelling
tales than did truth tellers. Their stories made less sense, and they
told those stories in less engaging and less immediate ways. Cues
based on subjective impressions of verbal and vocal cues (typically
rated from audiotapes) were most often consistent with predic-
tions. Specifically, liars sounded less involved, less immediate,
and more uncertain than did truth tellers.
Are liars less positive and pleasant than truth tellers? All of the cues that assessed pleasantness in a global way produced results in the predicted direction, although some of the effects were small and nonsignificant (see Table 5). A small number of estimates (k = 3) indicated that liars were less cooperative than truth tellers (d = −0.66). Liars also made more negative statements and complaints (d = 0.21), and their faces were less pleasant (d = −0.12).
Table 7
Do Lies Include Fewer Ordinary Imperfections and Unusual Contents Than Truths?
Cue N k1 k2 d CI Q
071 Unstructured productions 211 5 4 0.06 0.27, 0.15 24.8*
072 Spontaneous corrections 183 5 5 0.29* 0.56, 0.02 3.8
073 Admitted lack of memory 183 5 5 0.42* 0.70, 0.15 18.7*
074 Self-doubt 123 4 3 0.10 0.42, 0.21 5.1
075 Self-deprecation 64 3 3 0.21 0.19, 0.61 0.9
076 Contextual embedding 159 6 6 0.21 0.41, 0.00 21.5*
077 Verbal and nonverbal interactions 163 5 4 0.03 0.25, 0.19 8.6
078 Unexpected complications 223 6 5 0.04 0.16, 0.24 2.2
079 Unusual details 223 6 5 0.16 0.36, 0.05 9.5
080 Superfluous details 223 6 5 0.01 0.21, 0.19 11.0
081 Related external associations 112 3 3 0.35* 0.02, 0.67 2.1
082 Anothers mental state 151 4 4 0.22 0.02, 0.46 7.2
083 Subjective mental state 237 6 6 0.02 0.18, 0.22 8.1
Note. Cue numbers are of the cues described in the current article as indexed in Appendix A. Bold type indicates statistical significance. All of the cues in this table were coded using the Criteria-Based Content Analysis system that is part of Statement Validity Analysis (e.g., Steller & Kohnken, 1989). N = total number of participants in the studies; k1 = total number of independent effect sizes (ds); k2 = number of ds that could be estimated precisely; CI = 95% confidence interval; Q = homogeneity statistic (significance indicates rejection of the null hypothesis of homogeneity of ds).
* p < .05.
Table 8
Cues With Larger and Smaller Effect Sizes Based on Larger and Smaller Numbers of Estimates
Cue (larger effect size, d > |0.20|)  d  k    Cue (smaller effect size, d ≤ |0.20|)  d  k
Larger no. of estimates (k > 5)
025 Verbal and vocal immediacy (impressions) 0.55* 7 042 Miscellaneous dysfluencies 0.17 8
014 Discrepant, ambivalent 0.34* 7 070 Fidgeting (undifferentiated) 0.16* 14
004 Details 0.30* 24 049 Friendly, pleasant 0.16 6
031 Verbal and vocal uncertainty (impressions) 0.30* 10 024 Other references 0.16 6
061 Nervous, tense (overall) 0.27* 16 079 Unusual details 0.16 6
062 Vocal tension 0.26* 10 018 Illustrators 0.14* 16
013 Logical structure 0.25* 6 054 Facial pleasantness 0.12* 13
012 Plausibility 0.23* 9 029 Eye shifts 0.11 7
063 Frequency, pitch 0.21* 12 007 Unique words 0.10 6
052 Negative statements and complaints 0.21* 9 048 Foot or leg movements 0.09 28
016 Verbal and vocal involvement 0.21* 7 069 Facial fidgeting 0.08 7
076 Contextual embedding 0.21 6 015 Involved, expressive (overall) 0.08 6
010 Rate of speaking 0.07 23
066 Blinking 0.07 17
026 Nonverbal immediacy 0.07 11
006 Cognitive complexity 0.07 6
051 Attractive 0.06 6
044 Posture shifts 0.05 29
040 Mixed disturbances (ah plus non-ah) 0.04 7
034 Shrugs 0.04 6
078 Unexpected complications 0.04 6
001 Response length 0.03 49
022 Self-references 0.03 12
039 Mixed pauses 0.03 7
028 Gaze aversion 0.03 6
009 Response latency 0.02 32
045 Head movements (undifferentiated) 0.02 14
064 Relaxed posture 0.02 13
083 Subjective mental state 0.02 6
027 Eye contact 0.01 32
068 Self-fidgeting 0.01 18
055 Head nods 0.01 16
037 Silent pauses 0.01 15
080 Superfluous details 0.01 6
046 Hand movements 0.00 29
058 Smiling (undifferentiated) 0.00 27
035 Non-ah speech disturbances 0.00 17
038 Filled pauses 0.00 16
Smaller no. of estimates (k ≤ 5)
050 Cooperative (overall) 0.66* 3 041 Ritualized speech 0.20 4
073 Admitted lack of memory 0.42* 5 003 Length of interaction 0.20 3
065 Pupil dilation 0.39* 4 005 Sensory information 0.17 4
002 Talking time 0.35* 4 047 Arm movements 0.17 3
081 Related external associations 0.35* 3 011 Presses lips 0.16* 4
019 Verbal immediacy (all categories) 0.31* 3 030 Tentative constructions 0.16 3
072 Spontaneous corrections 0.29* 5 020 Verbal immediacy, temporal 0.15 4
033 Chin raise 0.25* 4 023 Mutual and group references 0.14 5
082 Anothers mental state 0.22 4 067 Object fidgeting 0.12 5
036 Word and phrase repetitions 0.21* 4 017 Facial expressiveness 0.12 3
075 Self-deprecation 0.21 3 043 Body animation, activity 0.11 4
053 Vocal pleasantness 0.11 4
008 Blocks access to information 0.10 5
021 Generalizing terms 0.10 5
074 Self-doubt 0.10 4
132 Lips apart (AU 25) 0.08 5
071 Unstructured productions 0.06 5
131 Eyes closed 0.06 3
032 Amplitude, loudness 0.05 5
056 Brow lowering 0.04 5
130 Lip stretch (AU 20) 0.04 4
077 Descriptions of verbal and nonverbal interactions 0.03 5
057 Sneers 0.02 4
129 Brow raise (AU 1) 0.01 5
060 Eye muscles (AU 6), not during positive emotions 0.01 4
133 Jaw drop (AU 26) 0.00 5
059 Lip corner pull (AU 12) 0.00 4
Note. AU = facial action unit (as categorized by Ekman & Friesen, 1978).
* p < .05.
Each of the more specific cues to positivity or negativity (e.g.,
head nods, brow lowering, sneers) produced combined effects very
close to zero. The most notable finding was that the 27 estimates
of smiling produced a combined effect size of exactly zero. The
measures of smiling in those studies did not distinguish among
different types of smiles. Ekman (1985/1992) argued that for
smiling to predict deceptiveness, smiles expressing genuinely pos-
itive affect (distinguished by the cheek raise, facial action unit 6
[AU; as categorized by Ekman & Friesen, 1978], produced by
movements of the muscles around the outside corner of the eye)
must be coded separately from feigned smiles. Because our review
contained only two estimates of genuine smiling and two of
feigned smiling, the results are reported in Appendix B with the
other cues for which the number of estimates was limited. The
combined effects tend to support Ekman's position. When only pretending to be experiencing genuinely positive affect, people were less likely to show genuine smiles (d = −0.70) and more likely to show feigned ones (d = 0.31). There were no differences in the occurrence of the cheek raise for liars versus truth tellers in studies in which the participants were not experiencing or faking positive emotions (d = 0.01; e.g., studies of the expression and
concealment of pain). Also as predicted by Ekman, the easily
produced lip corner pull (AU 12) did not distinguish truths from
lies either, again producing a combined effect size of exactly zero.
Are liars more tense than truth tellers? Except for two types
of fidgeting, the results of every cue to tension were in the
predicted direction, though again some were quite small and non-
significant (see Table 6). Liars were more nervous and tense
overall than truth tellers (d = 0.27). They were more vocally tense (d = 0.26) and spoke in a higher pitch (d = 0.21). Liars also had more dilated pupils (d = 0.39).
In studies in which different kinds of fidgeting were not differentiated, liars fidgeted more than truth tellers (d = 0.16). However, the effect was smaller for facial fidgeting (e.g., rubbing one's face, playing with one's hair; d = 0.08), and the results were in the opposite direction for object fidgeting (e.g., tapping a pencil, twisting a paper clip; d = −0.12) and self-fidgeting (e.g., scratching; d = −0.01). The best summary of these data is that there is no
clear relationship between fidgeting and lying.
Do lies include fewer ordinary imperfections and unusual con-
tents than do truths? The people who made spontaneous correc-
tions while telling their stories were more likely to be telling truths
than lies (d = −0.29). This is consistent with our prediction that liars would avoid behaviors they mistakenly construe as undermining the convincingness of their lies (see Table 7). Liars also seemed to avoid another admission of imperfection that truth tellers acknowledge: an inability to remember something (d = −0.42). There were also indications that liars stuck too closely to the key elements of the story they were fabricating. For example, like good novelists, truth tellers sometimes describe the settings of their stories; liars were somewhat less likely to do this (d = −0.21 for contextual embedding), and they provided nonsignificantly fewer unusual details (d = −0.16). However, liars did mention events or relationships peripheral to the key event (d = 0.35 for related external associations) more often than truth tellers did.
Summary of individual cues to deception. The most compel-
ling results in this review are the ones based on relatively large
numbers of estimates that produced the biggest combined effects.
In Table 8, the 88 cues are arranged into four sections according to
the number of independent estimates and the size of the combined
effects. On the top half of the table are the cues for which six or
more independent estimates were available. These were the 50
cues that were above the median in the number of estimates on
which they were based (see also Field, 2001). On the bottom half
are the 38 cues for which just three, four, or five estimates were
available. In the first column are the 23 cues with combined effect
sizes larger than |0.20|. In the second column are the 65 effect sizes
equal to |0.20| or smaller. The value of |0.20| was selected based on
Cohen's (1988) heuristic that effect sizes (d) of |0.20| are small
effects. Within each section, cues with the biggest effect sizes are
listed first; within cues with the same effect sizes, those based on
a larger number of estimates (k) are listed first.
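To make the grouping explicit, here is a minimal sketch (ours, not the authors' code) of the two cutoffs that organize Table 8, applied to a few cue-level results whose combined effect sizes and numbers of estimates are reported in the text; the helper name table8_section is hypothetical.

```python
# Minimal sketch of the Table 8 grouping rule: cues are split by effect size
# magnitude (|d| > 0.20 vs. not) and by number of independent estimates (k > 5 vs. not).
# The (cue, d, k) tuples are illustrative values taken from the surrounding text.
cues = [
    ("Verbal and vocal immediacy (impressions)", -0.55, 7),
    ("Pupil dilation", 0.39, 4),
    ("Eye contact", 0.01, 32),
    ("Sneers", 0.02, 4),
]

def table8_section(d: float, k: int, d_cut: float = 0.20, k_cut: int = 5) -> str:
    """Return the Table 8 quadrant for a cue with combined effect size d and k estimates."""
    size = "larger d" if abs(d) > d_cut else "smaller d"
    basis = "larger k" if k > k_cut else "smaller k"
    return f"{size}, {basis}"

for name, d, k in sorted(cues, key=lambda c: -abs(c[1])):
    print(f"{name}: d = {d:+.2f}, k = {k} -> {table8_section(d, k)}")
```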
Twelve cues are in the larger d and k section. These cues were
based on at least six independent estimates and produced com-
bined effects greater than |0.20|. Half of these cues were from the
compelling category, including all three of the cues in the subcategory "makes sense." The effects for those three cues indicate that self-presentations that seem discrepant, illogically structured, or implausible are more likely to be deceptive than truthful. Verbal and vocal immediacy, from the "immediacy" subcategory, tops the list. Verbal and vocal uncertainty, from the subcategory "uncertain," is in this section, as is verbal and vocal involvement, a cue in the subcategory "engaging."
The larger d and k section also includes one of the cues in the forthcoming category (details) and one from the "positive, pleasant" category (negative statements and complaints). There are also three cues from the tense category (overall tension, vocal tension, and pitch) and one from the category of "ordinary imperfections and unusual details" (contextual embedding).
In the larger d and smaller k section of Table 8 are cues that
produced relatively bigger effects but were based on smaller
numbers of estimates. For example, a handful of estimates suggest
that liars were less cooperative than truth tellers, were less likely
to admit that they did not remember something, and had more
dilated pupils.
Table 9
Stem and Leaf Plot of Combined Effect Sizes (ds) for Individual
Cues to Deception
Stem Leaf
0.6 6
0.6
0.5 5
0.5
0.4
0.4 2
0.3 559
0.3 0014
0.2 55679
0.2 0011111123
0.1 5666666777
0.1 000011122244
0.0 5566677778889
0.0 0000001111111222223333344444
Note. Included are the 88 cues for which at least three independent effect
size estimates were available (at least two of which could be computed
precisely).
Some of the cues in the smaller d and larger k section of Table 8
are noteworthy because the very tiny cumulative ds were based on
large numbers of estimates. For example, response length, re-
sponse latency, and eye contact were all based on more than 30
independent estimates, but they produced cumulative effect sizes
of just 0.03, 0.02, and 0.01, respectively.
Table 9 is a stem and leaf display of the absolute values of
the 88 effect sizes. The median effect size is just |0.10|. Only two
of the effect sizes meet Cohen's (1988) criterion of |0.50| for large
effects.
Moderators of Cues to Deception
In Tables 3–7, in which we present the combined results of the
estimates of individual cues to deception, we included cues only if
they were based on at least three effect sizes, at least two of which
were precise estimates. In our moderator analyses, we needed to
use a more stringent criterion to have a sufficient number of
estimates at each level of the moderator variables. We began by
considering all cues for which we had at least 10 precise estimates.
Eighteen cues met that criterion: response length, details, response
latency, rate of speaking, illustrators, eye contact, non-ah speech
disturbances, silent pauses, filled pauses, posture shifts, hand
movements, foot or leg movements, smiling (undifferentiated),
nervous, pitch, blinking, self-fidgeting, and fidgeting (undifferen-
tiated). Our initial analyses that combined across all estimates (as reported in Tables 3–7) indicated that for some of these cues, the estimates were homogeneous. Because our predictions were theoretically driven, we proceeded to test the moderator variables for all 18 of the cues. Four of the cues for which the estimates were homogeneous (illustrators, posture shifts, smiling [undifferentiated], and hand movements) produced no significant effects in any of our moderator analyses, indicating that the size of the
effects was also homogeneous across levels of the moderators.
For the moderator analyses, we report three homogeneity statistics for each moderator. The QT statistic indicates the variability among all of the estimates of the cue included in the analysis. The QB statistic indicates between-groups variation. Significant between-groups effects indicate that the size of the effects differed across the levels of the moderator. The QW statistic indicates variability within each level of the moderator variable; a significant value indicates additional variability that has not been explained.
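To make these statistics concrete, the sketch below computes a combined effect size, its 95% confidence interval, and the QT, QB, and QW statistics for a two-level moderator under a conventional fixed-effect (inverse-variance) model. The per-study effect sizes and variances are invented, and this generic computation is offered as an illustration of the logic, not as the exact estimator used in this review.

```python
import math

# Hypothetical per-study effect sizes (d) and their sampling variances,
# split by a two-level moderator (e.g., motivated vs. not motivated).
groups = {
    "no_motivation": [(0.10, 0.04), (-0.05, 0.09), (0.02, 0.06)],
    "motivation":    [(0.45, 0.05), (0.30, 0.08), (0.55, 0.07)],
}

def combine(estimates):
    """Fixed-effect combination: inverse-variance weighted mean d, 95% CI, and Q."""
    weights = [1.0 / v for _, v in estimates]
    d_bar = sum(w * d for (d, _), w in zip(estimates, weights)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    ci = (d_bar - 1.96 * se, d_bar + 1.96 * se)
    q = sum(w * (d - d_bar) ** 2 for (d, _), w in zip(estimates, weights))
    return d_bar, ci, q

# Q_W: heterogeneity within each level; Q_T: heterogeneity ignoring the moderator.
q_w = 0.0
for level, ests in groups.items():
    d_bar, ci, q = combine(ests)
    q_w += q
    print(f"{level}: d = {d_bar:+.2f}, 95% CI = ({ci[0]:+.2f}, {ci[1]:+.2f}), Q_W = {q:.2f}")

_, _, q_t = combine([e for ests in groups.values() for e in ests])
q_b = q_t - q_w  # between-levels heterogeneity
print(f"Q_T = {q_t:.2f}, Q_B = {q_b:.2f}, Q_W = {q_w:.2f}")
```

With a two-level moderator, QB is evaluated on 1 degree of freedom, which corresponds to the QB(1) entries reported in Tables 10, 11, 13, and 14.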
Motivation to succeed at lying. We predicted that cues to
deception would be stronger in studies in which the social actors
were motivated to get away with their lies than in studies in which
no special incentives were provided. Table 10 shows the effect
sizes for each cue for those two kinds of studies. Patterns of eye
contact differed significantly between the motivated senders and
the senders with no special motivation. When social actors were
motivated to succeed, they made significantly less eye contact when lying than when telling the truth (d = −0.15). When no special incentive was provided to social actors, they made nonsignificantly more eye contact when lying (d = 0.09).
Two of the fluency cues, non-ah disturbances and filled pauses,
varied with the motivation moderator. In studies in which no
special incentive was provided, there was a small positive effect
for both cues; deceptive self-presentations were nonsignificantly more likely to include non-ah disturbances (d = 0.13) and filled pauses (d = 0.09) than truthful ones. However, when incentives were provided, this effect reversed, and deceptive self-presentations included nonsignificantly fewer non-ah speech disturbances (d = −0.10) and filled pauses (d = −0.13) than truthful ones.
Several cues to tension also differed as cues to deception under the two motivational conditions. Social actors were more
tense overall when lying compared with when telling the truth, and
this effect was significant only when they were motivated to
succeed (d = 0.35 vs. 0.15). Also, it was only in the incentive condition that lies were communicated in more highly pitched voices than were truths (d = 0.59 vs. −0.02).
Differences in the magnitude of the effects (absolute values) for
studies in which social actors were or were not motivated to
succeed are also telling. For studies in which there was no special
incentive for succeeding, cues to deception were generally weak.
Overall, the size of the effects increased somewhat when some
incentive was provided.
Identity-relevant motivations to succeed. We had predicted
that across all of the estimates in our data set, we would find that
liars' responses would be shorter than those of truth tellers, would
be preceded by a longer response latency, and would include more
silent pauses. None of these predictions was supported in the
overall analyses. However, all of these predictions were signifi-
cantly more strongly supported under conditions of identity-
relevant motivation than under no-motivation conditions (see Ta-
ble 11). Within the identity-relevant condition, the effect sizes were nearly significant for response length (d = −0.23) and silent pauses (d = 0.38) but not significant for response latency (d = 0.36).
In the identity-relevant condition, the voice pitch of liars was significantly higher than that of truth tellers; the effect size was significant (d = 0.67), and it differed significantly from the effect size in the no-motivation condition (d = −0.02). Liars in the identity-relevant condition also made significantly fewer foot or leg movements than truth tellers (d = −0.28); however, the size of the effect was not significantly different when compared with the no-motivation condition (d = −0.02).
Instrumental motivations. Table 11 also shows cues to decep-
tion for studies in which the incentives were primarily instrumental
(e.g., financial). Only two cues differed significantly in size be-
tween the studies that provided no incentives to the social actors
and those that provided instrumental incentives. Non-ah disturbances (d = −0.17) and filled pauses (d = −0.14) occurred nonsignificantly less often in the speech of the liars than of the truth tellers in the studies that provided instrumental incentives. In the studies in which no incentives were provided, the speech of liars included somewhat more non-ah disturbances (d = 0.13) and filled pauses (d = 0.09) than the speech of truth tellers. Within the
instrumental-motivation condition, there were no effect sizes that
differed significantly from chance.
Identity-relevant versus instrumental motivations. The self-
presentational perspective predicts stronger effects when incen-
tives are identity relevant than when they are instrumental. Results
(also shown in Table 11) indicate that the responses of liars tended
to be even shorter than those of truth tellers when the social actors
were motivated by identity-relevant incentives than when they
were instrumentally motivated (d = −0.23 vs. −0.05; for the difference between conditions, p = .06). Response latencies were significantly longer (d = 0.36 vs. −0.01), and there were somewhat more silent pauses (d = 0.38 vs. −0.03; for the difference between conditions, p = .07). There were no cues that were
significantly or nearly significantly stronger in the instrumental-
motivation condition.
Unplanned and planned presentations. Seven independent
samples (described in eight reports) included a manipulation of
whether the senders' messages were unplanned or planned. Results
for 33 specific cues were reported by the authors. However, there
were only two cues (response length and response latency) that
met our criterion of being based on at least three independent
estimates (at least two of which were estimated precisely). Table
12 shows the results for those cues as well as several others that
met a less stringent criterion: At least two independent estimates
were available, and at least one was estimated precisely. Those
results should be interpreted with caution.
We computed the effect sizes in Table 12 by subtracting the effect size for the planned messages from the effect size for the
unplanned messages. Therefore, more positive effect sizes indicate
that the relationship of the cue to deception was more positive for
the unplanned messages than for the planned messages.
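Stated as a formula (with the variance expression given under the usual independence assumption, not as a claim about how the confidence intervals in Table 12 were actually computed):

```latex
d_{\mathrm{diff}} = d_{\mathrm{unplanned}} - d_{\mathrm{planned}}, \qquad
\widehat{\mathrm{Var}}\!\left(d_{\mathrm{diff}}\right)
  = \widehat{\mathrm{Var}}\!\left(d_{\mathrm{unplanned}}\right)
  + \widehat{\mathrm{Var}}\!\left(d_{\mathrm{planned}}\right).
```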
As predicted, the combined effect for response latency was
statistically reliable (d = 0.20). When social actors did not plan
their messages, there was a longer latency between the end of the
question and the beginning of their answer when they were lying
Table 10
Cues to Deception When Incentives for Success Were or Were Not Provided
Cue    No motivation    Motivation    QT (df)    QB (1)
001 Response length
d (CI)    0.03 (0.15, 0.09)    0.03 (0.14, 0.08)    92.1* (48)    0.0
QW (k)    59.6* (21)    32.5 (28)
009 Response latency
d (CI)    0.04 (0.17, 0.26)    0.00 (0.22, 0.22)    112.4* (32)    0.3
QW (k)    50.1* (18)    62.0* (15)
010 Rate of speaking
d (CI)    0.10 (0.04, 0.25)    0.04 (0.10, 0.17)    21.7 (22)    0.5
QW (k)    8.7 (8)    12.5 (15)
027 Eye contact
d (CI)    0.09 (0.01, 0.19)    0.15* (0.29, 0.01)    41.1 (31)    9.0*
QW (k)    13.3 (20)    18.8 (12)
035 Non-ah disturbances
d (CI)    0.13 (0.15, 0.41)    0.10 (0.34, 0.14)    60.5* (16)    6.3*
QW (k)    24.6* (7)    29.7* (10)
037 Silent pauses
d (CI)    0.02 (0.18, 0.15)    0.06 (0.16, 0.29)    18.5 (14)    0.5
QW (k)    7.1 (8)    10.8 (7)
038 Filled pauses
d (CI)    0.09 (0.03, 0.22)    0.13 (0.28, 0.02)    22.2 (15)    6.5*
QW (k)    8.3 (8)    7.4 (8)
048 Foot or leg movements
d (CI)    0.02 (0.15, 0.11)    0.13* (0.22, 0.03)    20.4 (27)    1.4
QW (k)    5.0 (9)    14.0 (19)
061 Nervous, tense
d (CI)    0.15 (0.15, 0.44)    0.35* (0.11, 0.58)    37.3* (15)    3.0
QW (k)    10.8 (8)    23.4* (8)
063 Frequency, pitch
d (CI)    0.02 (0.23, 0.20)    0.59* (0.31, 0.88)    31.2* (11)    18.6*
QW (k)    2.9 (6)    9.7 (6)
066 Blinking
d (CI)    0.05 (0.14, 0.25)    0.09 (0.19, 0.36)    54.4* (16)    0.5
QW (k)    23.9* (9)    30.3* (8)
068 Self-fidgeting
d (CI)    0.08 (0.03, 0.18)    0.12 (0.25, 0.01)    19.5 (17)    5.5*
QW (k)    10.1 (11)    3.9 (7)
070 Fidgeting (undifferentiated)
d (CI)    0.09 (0.33, 0.53)    0.18 (0.08, 0.43)    28.2* (13)    0.3
QW (k)    11.3* (3)    16.6* (11)
Note. Cue numbers are of the cues described in the current article as indexed in Appendix A. Bold type indicates statistical significance. The Q statistics are homogeneity statistics; significance indicates rejection of the hypothesis of homogeneity of effect sizes (ds). Therefore, bigger Qs indicate less homogeneity. QT = homogeneity among all estimates for a particular cue; df = degrees of freedom; QB = homogeneity between the two levels of the moderator being compared; CI = 95% confidence interval; QW = homogeneity of ds within the level of the moderator; k = number of independent estimates.
* p < .05.
Table 11
Cues to Deception Under Conditions of No Motivation, Identity-Relevant Motivation, and Instrumental Motivation
Cue    No motivation (NM)    Identity-relevant (IR)    Instrumental (IN)    NM vs. IR    NM vs. IN    IR vs. IN
001 Response length
d (CI)    0.03 (0.17, 0.11)    0.23 (0.48, 0.02)    0.05 (0.21, 0.12)
QW (k)    59.6* (21)    5.0 (8)    12.0 (16)
QT (df)    69.5* (28)    71.6* (36)    20.5 (23)
QB (1)    4.9*    0.1    3.6
009 Response latency
d (CI)    0.04 (0.15, 0.24)    0.36 (0.11, 0.84)    0.01 (0.43, 0.40)
QW (k)    50.2* (18)    10.1 (6)    2.8 (5)
QT (df)    64.9* (23)    53.2* (22)    17.1 (10)
QB (1)    4.6*    0.2    4.2*
010 Rate of speaking
d (CI)    0.10 (0.05, 0.26)    0.06 (0.28, 0.40)    0.03 (0.22, 0.17)
QW (k)    8.7 (8)    0.3 (3)    10.0 (10)
QT (df)    9.0 (10)    20.0 (17)    10.5 (12)
QB (1)    0.1    1.3    0.2
027 Eye contact
d (CI)    0.09* (0.01, 0.17)    0.19 (0.50, 0.12)    0.08 (0.25, 0.09)
QW (k)    13.3 (20)    1.2 (3)    10.3 (7)
QT (df)    16.8 (22)    26.8 (26)    11.9 (9)
QB (1)    2.3    3.2    0.3
035 Non-ah disturbances
d (CI)    0.13 (0.17, 0.43)    0.17 (0.53, 0.18)
QW (k)    24.6* (7)    16.8* (6)
QT (df)    49.2* (12)
QB (1)    8.0*
037 Silent pauses
d (CI)    0.02 (0.16, 0.13)    0.38 (0.01, 0.77)    0.03 (0.36, 0.31)
QW (k)    7.1 (8)    1.8 (3)    4.4 (3)
QT (df)    13.5 (10)    11.5 (10)    9.8 (5)
QB (1)    4.6*    0.0    3.5
038 Filled pauses
d (CI)    0.09 (0.04, 0.23)    0.14 (0.32, 0.04)
QW (k)    8.3 (8)    6.2 (6)
QT (df)    20.9 (13)
QB (1)    6.4*
048 Foot or leg movements
d (CI)    0.02 (0.14, 0.11)    0.28* (0.51, 0.06)    0.09 (0.22, 0.03)
QW (k)    5.0 (9)    2.6 (5)    9.0 (12)
QT (df)    10.8 (13)    14.6 (20)    13.3 (16)
QB (1)    3.2    0.6    1.7
061 Nervous, tense
d (CI)    0.15 (0.15, 0.44)    0.02 (0.35, 0.31)
QW (k)    10.8 (8)    3.3 (4)
QT (df)    15.2 (11)
QB (1)    1.2
063 Frequency, pitch
d (CI)    0.02 (0.15, 0.11)    0.67* (0.43, 0.92)
QW (k)    2.9 (6)    0.0 (3)
QT (df)    17.0* (8)
QB (1)    14.1*
066 Blinking
d (CI)    0.05 (0.11, 0.22)    0.05 (0.50, 0.39)
QW (k)    23.9* (9)    0.3 (3)
QT (df)    24.2* (11)
QB (1)    0.0
068 Self-fidgeting
d (CI)    0.08 (0.03, 0.19)    0.09 (0.27, 0.09)
QW (k)    10.1 (11)    1.3 (4)
QT (df)    13.8 (14)
QB (1)    2.4
070 Fidgeting (undifferentiated)
d (CI)    0.09 (0.43, 0.61)    0.11 (0.43, 0.65)    0.33 (0.12, 0.78)
QW (k)    11.3* (3)    0.6 (4)    10.6 (6)
QT (df)    11.9* (6)    23.8* (8)    12.8 (9)
QB (1)    0.0    1.9    1.6
Note. Cue numbers are of the cues described in the current article as indexed in Appendix A. Bold type indicates statistical significance. The Q statistics are homogeneity statistics; significance indicates rejection of the hypothesis of homogeneity of effect sizes (ds). Therefore, bigger Qs indicate less homogeneity. QT = homogeneity among all estimates for a particular cue; df = degrees of freedom; QB = homogeneity between the two levels of the moderator being compared; CI = 95% confidence interval; QW = homogeneity of ds within the level of the moderator; k = number of independent estimates.
* p < .05.
than when they were telling the truth, but when the senders
planned their messages, they began responding relatively more
quickly when lying than when telling the truth. There were also
somewhat more silent pauses in the deceptive presentations than
the truthful ones when those presentations were not planned than
when they were planned (d = 0.57, p = .05).
Duration of the presentations. We predicted that if social
actors needed to sustain their presentations for greater lengths of
time, cues to deception would be clearer and more numerous. We
used the mean duration of the messages in each study as an
approximation of the degree to which social actors needed to
sustain their presentations over time. Because duration is a con-
tinuous variable, there are no separate groups. Instead, a significant QB statistic indicates that the effect sizes were not homogeneous (i.e., the moderator was significant), and the unstandardized beta indicates the direction of the effect.
There were three cues for which at least eight independent estimates were available: response length (QB = 5.4, k = 13), response latency (QB = 6.1, k = 8), and pitch (QB = 6.6, k = 8), and for all three, QB indicated that the effect sizes were not homogeneous across message lengths. This means that all three cues varied significantly with the duration of the presentations. When presentations were sustained for greater amounts of time, deceptive responses were especially shorter than truthful ones (b = −0.008), and they were preceded by a longer latency (b = 0.034). Lies, relative to truths, were also spoken in an especially higher pitched voice when the presentations lasted longer (b = 0.002).
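As an illustration of this kind of continuous-moderator analysis, the sketch below fits a fixed-effect weighted regression of study-level effect sizes on a moderator (labeled duration here), returning an unstandardized slope b and a model (QB-type) homogeneity statistic. All numbers are invented; this is a generic sketch, not the estimator or data used in this review.

```python
import numpy as np

# Hypothetical per-study values: effect size d, its sampling variance v, and the
# moderator (mean message duration in seconds). Illustrative numbers only.
d = np.array([0.30, 0.10, -0.05, -0.20, -0.35])
v = np.array([0.05, 0.06, 0.04, 0.07, 0.05])
duration = np.array([20.0, 40.0, 60.0, 90.0, 120.0])

w = 1.0 / v                                          # fixed-effect (inverse-variance) weights
X = np.column_stack([np.ones_like(duration), duration])

# Weighted least squares: solve (X'WX) beta = X'Wd
W = np.diag(w)
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ d)
intercept, b = beta                                  # b = unstandardized slope for duration

fitted = X @ beta
q_total = np.sum(w * (d - np.average(d, weights=w)) ** 2)  # heterogeneity around the weighted mean
q_error = np.sum(w * (d - fitted) ** 2)                    # residual heterogeneity
q_model = q_total - q_error                                # Q_B for the moderator (1 df here)

print(f"b = {b:+.4f}, Q_B = {q_model:.2f}, Q_E = {q_error:.2f}")
```

A significant QB of this kind would indicate, as in the analyses above, that the cue's relation to deception varies with message duration, with the sign of b giving the direction.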
Communications that were or were not about transgressions.
We expected to find stronger cues to negativity and tension in
studies in which social actors lied about transgressions than in
those in which the lies were not about transgressions. As shown in
Table 13, this was an important moderator of cues to deception.
When the lie was about a transgression, compared with when it
was not, liars took longer to begin responding than did truth tellers
(d = 0.27 vs. 0.01). Once they started talking, they talked significantly faster than truth tellers (d = 0.32 vs. 0.01). They also seemed more tense overall (d = 0.51 vs. 0.09), and they blinked more (d = 0.38 vs. 0.01). A trend suggested that they tended to avoid eye contact more (d = −0.13 vs. 0.04, p = .07). There were also some cues suggestive of inhibition: People lying about transgressions made fewer foot or leg movements (d = −0.24 vs. −0.04), and they fidgeted less (d = −0.14 vs. 0.07 for self-fidgeting, d = −0.16 vs. 0.24 for undifferentiated fidgeting). Once again, the effect for non-ah disturbances was contrary to expectations: Lies about transgressions included fewer such disturbances than truths; the lies that were not about transgressions included relatively more of them (d = −0.24 vs. 0.17). Within the trans-
gression condition, the effect sizes for response latency, rate of
speaking, non-ah disturbances, foot or leg movements, tension,
blinking, and self-fidgeting all differed significantly, or nearly so,
from zero. Within the no-transgression condition, only the effect
for undifferentiated fidgeting differed from zero.
Overall differences in the magnitude of the cues to deception for
lies about transgressions compared with lies about other topics are
also noteworthy. For 11 of the 12 cues, the absolute value of the
effect was bigger for the lies about transgressions than for the other
lies. In some instances, however, the direction of the effect was
contrary to predictions (e.g., non-ah disturbances, fidgeting).
Interactivity. Buller and Burgoon's (1996) formulation pre-
dicts greater pleasantness, fluency, composure, involvement, and
immediacy with increasingly interactive contexts. Effect sizes
differed significantly for interactive paradigms relative to noninteractive ones for three cues: details (QB = 4.41), pitch (QB = 8.21), and blinking (QB = 13.15). Liars offered significantly fewer details than truth tellers in interactive contexts (d = −0.33; 95% confidence interval [CI] = −0.51, −0.15; k = 20); for noninteractive contexts, the effect was negligible (d = −0.06; CI = −0.51, 0.39; k = 4). This result does not seem consistent with Buller and Burgoon's predictions. Liars in interactive contexts spoke in a significantly higher pitched voice than did truth tellers (d = 0.35; CI = 0.07, 0.64; k = 9); for noninteractive contexts, there was a very small effect in the opposite direction (d = −0.06; CI = −0.45, 0.33; k = 3). In that pitch typically rises with stress, this result is inconsistent with the prediction that liars would show more composure with increasing interactivity. Finally, liars in noninteractive contexts blinked significantly more than truth tellers (d = 0.29; CI = 0.03, 0.56; k = 4); in interactive contexts, there was little difference (d = 0.06; CI = 0.21, 0.80; k = 12). In that blinking can be a sign of tension, this result is consistent with predictions.
Cues measured objectively and subjectively. To test our pre-
diction that cues based on subjective impressions would more
powerfully discriminate truths from lies than cues measured ob-
jectively, we searched the data set for cues that were assessed
subjectively and objectively and that had at least three estimates
per assessment type. Five cues that met the criterion are shown in
Table 14. In addition, we compared the verbal immediacy com-
posite (Cue 019), which is based on the objective scoring of
linguistic forms, with the verbal and vocal immediacy cue (Cue
025), which is based on subjective impressions.
Three of the six comparisons were significant, and all of them
showed that the effect sizes were stronger when the cues were
assessed subjectively than when they were measured objectively.
Impressions of immediacy separated truths from lies more power-
Table 12
Cues to Deception: Differences Between Unplanned and
Planned Communications
Cue k1 k2 d CI Q
001 Response length 6 3 0.07 0.06, 0.20 6.3
009 Response latency 4 1 0.20* 0.07, 0.34 8.7
018 Illustrators 3 1 0.03 0.18, 0.11 0.4
027 Eye contact 3 1 0.09 0.23, 0.06 0.8
037 Silent pauses 2 2 0.57 0.00, 1.14 10.1*
055 Head nods 3 1 0.11 0.25, 0.04 3.1
058 Smiling 3 1 0.07 0.08, 0.22 1.2
070 Fidgeting (undifferentiated) 3 2 0.03 0.19, 0.14 1.6
Note. Cue numbers are of the cues described in the current article as indexed in Appendix A. Bold type indicates statistical significance. Effect sizes (ds) were computed by subtracting the d for planned messages from the d for unplanned messages. Therefore, more positive ds indicate that the behavior was more positively associated with deception for the unplanned messages than for the planned ones. k1 = total number of ds; k2 = number of ds that could be estimated precisely; CI = 95% confidence interval; Q = homogeneity statistic (significance indicates rejection of the hypothesis of homogeneity of ds; therefore, bigger Qs indicate less homogeneity).
* p < .05.
fully than did objective measures of immediacy (d = −0.55 vs. −0.31; only the d for subjective impressions was significant). When eye contact was based on subjective impressions, liars showed somewhat less eye contact than truth tellers (d = −0.28); there was virtually no difference when eye contact was measured objectively (d = 0.04). Similarly, subjective impressions of facial pleasantness indicated that liars were significantly less facially pleasant than truth tellers (d = −0.20), but this did not occur when facial pleasantness was measured objectively (d = 0.07).
Discussion
Previous perspectives on cues to deception have pointed to the
predictive value of factors such as the feelings of guilt or appre-
hensiveness that people may have about lying, the cognitive chal-
lenges involved in lying, and the attempts people make to control
their verbal and nonverbal behaviors (e.g., Ekman, 1985/1992;
Ekman & Friesen, 1969; Zuckerman et al., 1981). Unlike past
formulations, our self-presentational perspective is grounded in
psychology's growing understanding of the nature of lying in
everyday life. Lying, we now know, is a fact of daily life, and not
an extraordinary event. Lies, like truths, are often told in the
pursuit of identity-relevant goals. People frequently lie to make
themselves (or sometimes others) look better or feel better; they try
to appear to be the kind of person they only wish they could
truthfully claim to be (B. M. DePaulo, Kashy, et al., 1996). Now
that we have recognized the pedestrian nature of most lie telling in
people's lives, the factors underscored by others assume their
rightful place.
Table 13
Cues to Deception When Senders Did and Did Not Commit a Transgression
Cue    No transgression    Transgression    QT (df)    QB (1)
001 Response length
d (CI)    0.02 (0.11, 0.08)    0.08 (0.25, 0.10)    92.1* (48)    0.7
QW (k)    74.6* (38)    16.7 (11)
009 Response latency
d (CI)    0.07 (0.24, 0.11)    0.27 (0.02, 0.55)    112.4* (31)    13.7*
QW (k)    67.4* (24)    31.3* (8)
010 Rate of speaking
d (CI)    0.01 (0.08, 0.10)    0.32* (0.13, 0.52)    21.7 (22)    6.6*
QW (k)    8.7 (18)    6.4 (5)
027 Eye contact
d (CI)    0.04 (0.05, 0.14)    0.13 (0.33, 0.07)    41.1 (31)    3.3
QW (k)    24.4 (26)    13.3* (6)
035 Non-ah disturbances
d (CI)    0.17 (0.04, 0.38)    0.24 (0.49, 0.01)    60.5* (16)    19.7*
QW (k)    6.5 (11)    34.3* (6)
037 Silent pauses
d (CI)    0.01 (0.15, 0.14)    0.10 (0.24, 0.43)    18.5 (14)    0.5
QW (k)    11.8 (10)    6.2 (5)
038 Filled pauses
d (CI)    0.01 (0.13, 0.14)    0.03 (0.26, 0.21)    22.2 (15)    0.1
QW (k)    9.5 (11)    12.6* (5)
048 Foot or leg movements
d (CI)    0.04 (0.12, 0.04)    0.24* (0.38, 0.09)    20.4 (27)    3.8*
QW (k)    11.4 (21)    5.2 (7)
061 Nervous, tense
d (CI)    0.09 (0.11, 0.29)    0.51* (0.28, 0.75)    37.3* (15)    13.9*
QW (k)    15.2 (12)    8.2 (4)
066 Blinking
d (CI)    0.01 (0.14, 0.16)    0.38* (0.03, 0.73)    54.4* (16)    12.2*
QW (k)    40.5* (13)    1.7 (4)
068 Self-fidgeting
d (CI)    0.07 (0.03, 0.17)    0.14 (0.28, 0.00)    19.5 (17)    5.7*
QW (k)    8.2 (12)    5.5 (6)
070 Fidgeting (undifferentiated)
d (CI)    0.24* (0.02, 0.46)    0.16 (0.58, 0.27)    28.2* (13)    6.1*
QW (k)    18.1 (10)    4.1 (4)
Note. Cue numbers are of the cues described in the current article as indexed in Appendix A. Bold type indicates statistical significance. The Q statistics are homogeneity statistics; significance indicates rejection of the hypothesis of homogeneity of effect sizes (ds). Therefore, bigger Qs indicate less homogeneity. QT = homogeneity among all estimates for a particular cue; df = degrees of freedom; QB = homogeneity between the two levels of the moderator being compared; CI = 95% confidence interval; QW = homogeneity of ds within the level of the moderator; k = number of independent estimates.
* p < .05.
Previous Perspectives on Cues to Deception
Feelings While Lying
In that the behaviors or feelings that people try to hide with their
lies are usually only mildly discrediting, feelings of guilt should be
mild as well. Similarly, for most lies, the sanctions attendant on
getting caught are minimal; thus, liars should ordinarily seem only
slightly more apprehensive than truth tellers. Perhaps these faint
feelings of guilt and apprehensiveness account for the twinge of
discomfort reported by the tellers of everyday lies. We believe that
the discomfort is also born of the one identity-relevant implication
that is common to all liars: They are willing to make claims they
believe to be untrue.
Two predictions follow from this analysis. First, cues to nega-
tivity and tension will generally be weak. However, when liars
have reason to feel especially guilty about their lies or apprehen-
sive about the consequences of them, as when they are lying about
transgressions, then those cues should be stronger. Consistent with
predictions, we did find some of the expected cues in our analyses
that combined across all studies. For example, liars made more
negative statements than did truth tellers, and they appeared more
tense. When we looked separately at the lies that were and were
not about transgressions, we found that the cues to lies about
transgressions were more plentiful and more robust than the cues
to deception for any level of any of the other moderators we
examined. In contrast, lies that were not about transgressions were
barely discriminable from the truths.
The self-presentational perspective accords importance, not only
to the feelings that liars experience more routinely than do truth
tellers, but also to the feelings that truth tellers genuinely experi-
ence and that liars can only try to fake. When social actors are
truthfully presenting aspects of themselves that are especially
important to them, they have an emotional investment that is not
easily simulated by those who only pretend to have such personal
qualities. They also have the support of a lifetime of experiences
at living the part. Liars' performances, then, would pale in com-
parison. Consistent with this formulation is our finding that liars
were generally less forthcoming than truth tellers, and their tales
were less compelling. For example, liars provided fewer details
than did truth tellers. In contrast, truth tellers sounded more in-
volved, more certain, more direct, and more personal.
Arousal
Pupil dilation and pitch did function as cues to deception and
could be regarded as supportive of the hypothesized importance of
generalized arousal. However, we believe that it is theoretically
and empirically more precise and defensible to interpret these cues
as indicative of particular attentional or information-processing
activities or of specific affective experiences (e.g., Cacioppo,
Petty, & Tassinary, 1989; Ekman et al., 1983; Neiss, 1988; Sparks
& Greene, 1992).
Cognitive Complexities
Several theoretical statements share the assumption that lie
telling is more cognitively challenging than telling the truth (e.g.,
Buller & Burgoon, 1996; Zuckerman et al., 1981). From our
Table 14
Cues to Deception Based on Objective and Subjective Measures
Cue    Objective    Subjective    QT (df)    QB (1)
004 Details
d (CI)    0.27* (0.50, 0.04)    0.32* (0.58, 0.07)    76.2* (23)    0.3
QW (k)    34.9* (14)    41.0* (10)
019 Verbal immediacy with 025 Verbal, vocal immediacy
d (CI)    0.31 (0.73, 0.10)    0.55* (0.88, 0.23)    28.7* (8)    3.9*
QW (k)    2.4 (3)    26.3* (7)
026 Nonverbal immediacy
d (CI)    0.08 (0.26, 0.11)    0.07 (0.28, 0.14)    6.9 (10)    0.0
QW (k)    2.2 (7)    4.6 (4)
027 Eye contact
d (CI)    0.04 (0.05, 0.12)    0.28 (0.58, 0.02)    41.1 (31)    5.1*
QW (k)    30.0 (27)    5.9 (5)
054 Facial pleasantness
d (CI)    0.07 (0.18, 0.33)    0.20* (0.37, 0.03)    25.1 (12)    6.7*
QW (k)    2.3 (8)    16.1* (5)
064 Relaxed posture
d (CI)    0.00 (0.24, 0.24)    0.05 (0.33, 0.23)    19.6 (12)    0.2
QW (k)    0.0 (9)    19.5* (4)
Note. Cue numbers are of the cues described in the current article as indexed in Appendix A. Bold type indicates statistical significance. The Q statistics are homogeneity statistics; significance indicates rejection of the hypothesis of homogeneity of effect sizes (ds). Therefore, bigger Qs indicate less homogeneity. QT = homogeneity among all estimates for a particular cue; df = degrees of freedom; QB = homogeneity between the two levels of the moderator being compared; CI = 95% confidence interval; QW = homogeneity of ds within the level of the moderator; k = number of independent estimates.
* p < .05.
self-presentational perspective, we instead agree with McCornack
(1997) in questioning that assumption. Because lie telling is so
routinely practiced, it may generally be only slightly more chal-
lenging than telling the truth.
In the overall analyses combining all estimates of a given cue,
we found some indications that liars may have been more preoc-
cupied and more cognitively taxed than truth tellers. The level of
involvement in their words and in their voices, which does not
quite measure up to that of truth tellers, is one such possibility. So,
too, is the impression of uncertainty that they convey. The dis-
crepancies in their self-presentations may also be telling. Some of
the expected cues, such as the longer response latencies, the
shorter responses, and the more hesitant responses, did not emerge
in the analyses that combined results across all studies. However,
moderator analyses show that, as we had predicted, these cues
were more revealing when the self-presentations may have been
more challenging to generate. When social actors could not plan
their presentations, compared with when they could, the response
latencies of deceivers were greater than those of truth tellers, and
their presentations tended to include more silent pauses. When
presentations were sustained for greater lengths of time, liars'
latencies to respond were again greater than those of truth tellers,
and their responses were briefer and spoken in a higher pitch.
Attempted Control
From our self-presentational perspective, liars are attempting to
control not just their behaviors (e.g., Zuckerman et al., 1981) but
also their thoughts and feelings. Truth tellers attempt these forms
of self-regulation as well, but liars' efforts are experienced as more
deliberate. Deliberate self-regulatory efforts may be especially
likely to usurp mental resources, leaving liars more preoccupied
than truth tellers. Liars' tales therefore seem less compelling and
less forthcoming. Because so many of the little lies that people tell
require scant self-regulatory effort, the resulting cues generally are
weak. However, when self-regulatory efforts intensify, as when
social actors are highly motivated (especially by identity-relevant
goals) to get away with their lies, then cues intensify, too.
Consistent with our formulation is our finding that motivated
liars (compared with less motivated ones) had even higher pitched
voices than truth tellers, and they seemed even more tense and
inhibited. When the motivation was one that linked success at
deceit to identity and self-presentational concerns, cues became
clearer still. When social actors saw their success as reflective of
important aspects of themselves, compared with when there were
no particular incentives, their lies were betrayed by the time it took
them to begin their deceptive responses (relative to their truthful
ones), the relative brevity of those responses, the silent hesitations
within them, and the higher pitch in which they were spoken.
Incentives that were not self-relevant resulted in cues to deception
that differed less markedly from the cues that occurred when no
special incentive was in place.
Interactivity
In Buller and Burgoon's (1996) interpersonal model of decep-
tion, the central theoretical construct is the degree of interaction
between the liar and the target of the lies. The model predicts
greater involvement and immediacy with greater interactivity, but
our review found that liars in interactive contexts, relative to
noninteractive ones, provided fewer details than did truth tellers.
Eye contact, a nonverbal immediacy cue, did not differentially
predict deception in interactive versus noninteractive contexts.
Buller and Burgoon's model predicts greater composure with greater interaction, but we found that higher pitch (an indicator of lack of composure) was a cue to deception in interactive contexts
only. Blinking was a more powerful cue to deception in noninter-
active contexts. Other cues to composure, such as nervousness and
fidgeting, did not vary with the interactivity of the context. Their
model predicts greater fluency with increasing interaction, but our
analysis indicates interactivity was not a significant moderator of
any of the cues to fluency (non-ah speech disturbances, silent
pauses, filled pauses).
We think Buller and Burgoon's (1996) interactivity predictions
failed because their construct is theoretically imprecise (B. M.
DePaulo, Ansfield, & Bell, 1996). Totally noninteractive contexts
(e.g., leaving a lie on a target person's voicemail) differ from
totally interactive ones (e.g., unscripted face-to-face interactions)
in many important ways. One is the mere presence of the other
person, even apart from any interaction with that person. That
presence has the potential to affect self-awareness, awareness of
the potential impact of the lie on that person, the salience of
self-presentational goals, and feelings of accountability (e.g.,
Schlenker, 2002; Schlenker, Britt, Pennington, Murphy, &
Doherty, 1994; Wicklund, 1982). Interactive exchanges entangle
participants in multiple roles and tasks (Buller & Burgoon, 1996;
Ekman & Friesen, 1969), which can be cognitively challenging.
However, to the extent that interactive exchanges are the more
familiar mode of communication, participants may find them less
challenging than noninteractive communications. From a conver-
sational analysis perspective, the significance of interactive pro-
cesses may lie in the interpretive frame they provide (e.g., Brown
& Levinson, 1987; Grice, 1989; Jacobs, Brashers, & Dawson,
1996; McCornack, 1992). For example, whether a person has
provided too little, too much, unclear, or irrelevant information in
response to an inquiry is more readily assessed within the context
of the conversation than apart from it. To Buller and Burgoon,
what was especially important about interaction is the opportunity
it affords the participants to evaluate the effectiveness of their
attempts (e.g., the liars can determine whether their targets seem
suspicious) and adjust their behavior accordingly.
Some of the ways in which interactive contexts differ from
noninteractive ones may be inconsistent with each other in their
implications for cues to deception. Clarity should follow, not from
Buller and Burgoons (1996) approach of enumerating variables
that moderate the effects of interactivity, but from looking sepa-
rately at the important component processes. An example of this
approach is Levine and McCornack's (1996, 2002) analysis of the "probing effect," which is the counterintuitive finding that com-
municators who are probed by their targets are perceived as more
honest than those who are not probed. The initial explanation for
this effect was behavioral adaptation: Probed communicators rec-
ognized the skepticism of their targets, and adapted their behavior
to appear more truthful (e.g., Buller & Burgoon, 1996; Stiff &
Miller, 1986). However, when Levine and McCornack (2002)
manipulated the presence of probes in videotaped interviews in
which the communicators' behavior was held constant (ruling out the behavioral adaptation explanation), the probing effect still occurred.
The Self-Presentational Perspective on Cues to Deception
Ordinary Imperfections and Unusual Contents
Only the self-presentational perspective predicts that lies are
characterized by fewer ordinary imperfections and unusual con-
tents than truths. Drawing from research and theory on credibility
assessment (e.g., Yuille, 1989), we suggested that liars try to
anticipate the kinds of communications that targets would find
credible and that, in doing so, they fall prey to their own misconceptions
about the nature of truth telling. Some of our results were consis-
tent with that prediction. People who spontaneously corrected
themselves and who admitted that they could not remember ev-
erything about the story they were relating were more likely to be
telling the truth than to be lying. It was also truth tellers who were
somewhat more likely to tell stories richer in contextual embed-
ding and unusual details.
The Looks and Sounds of Deceit Are Faint
We found evidence for all five of the categories of cues we
predicted: Deceptive presentations (relative to truthful ones) were
in some ways less forthcoming, less compelling, more negative,
more tense, and suspiciously bereft of ordinary imperfections and
unusual details. Fundamental to the self-presentational perspective
is the prediction that these cues would be weak. In fact, they were.
The median effect size of the 88 cues was just |0.10|. Only 3 of
these cues had effect sizes greater than |0.40|.
Results of the moderator analyses suggest that pronouncements
about the faintness of the signs of deceit are both understated and
exaggerated. Lies told by social actors who have no special mo-
tivation to succeed in their presentations and lies that are not about
transgressions leave almost no discernible cues. Even some of the
cues that did seem promising in the results combined across all estimates (for example, cues to tension and to pitch) lost a bit of their luster for self-presentations that were not about transgressions
and that were not driven by any particular incentives. These nearly
cueless lies most closely resemble the deceptive presentations of
self in everyday life. However, when social actors were using their
lies to hide matters that could spoil their identities (such as when
they were lying about transgressions), and when their success at
lying was linked to important aspects of their self-concepts, then
cues to deception were no longer quite so faint.
When Will Cues to Deception Be Clearer?
Using our self-presentational perspective, we were able to pre-
dict some important moderators of the strength of cues to decep-
tion. In this section, we consider five other ways in which the
results of our overall analyses may have underestimated the po-
tential for verbal and nonverbal cues to separate truths from lies.
First, perhaps effect sizes for cues to deception would be more
impressive if they were computed separately for the different
emotions that senders may be trying to conceal or to convey
(Ekman, 1985/1992). There were not enough relevant studies
available to test this possibility adequately in the present review.
Second, in this review, as in most of the studies in the literature, we
tested the predictive power of each behavioral cue individually.
However, the degree to which lies can be discriminated from truths
could potentially be improved if combinations of cues were con-
sidered (e.g., Ekman, O'Sullivan, Friesen, & Scherer, 1991; Vrij,
Edward, Roberts, & Bull, 2000). Third, if the replicability of a set
of cues within a particular context can be established, the impli-
cations could be important even if the particular cues could not be
generalized to different contexts. For example, behavioral cues
believed to be indicative of deceit in the context of polygraph
testing (e.g., Reid & Arthur, 1953) or criminal interrogations (e.g.,
Macdonald & Michaud, 1987) are worth establishing even if some
of them occur infrequently outside of those contexts. Fourth, it is
possible that particular individuals telegraph their lies in idiosyn-
cratic yet highly reliable ways (Vrij & Mann, 2001) that are not
captured by our meta-analytic approach. Finally, our results sug-
gest that truths and lies may be discriminated more powerfully by
using subjective measures rather than objective ones. However,
detailed coding systems that are carefully validated and used to test
theoretically based predictions may enable more precise discrim-
inations than untrained observers could achieve with their subjec-
tive impressions (e.g., Ekman & Friesen, 1978; Scherer, 1982).
When Truths and Lies Switch Sides
It is important to emphasize that there are exceptions to the
predictions we derived from the self-presentational perspective.
There are times when people more readily embrace their deceptive
presentations than their truthful ones. For example, a man who has
long fantasized about being a war hero and has claimed repeatedly
to have been one may eventually make that false claim more
convincingly than he can describe his actual war-year experiences
teaching in his homeland, which was at peace. There are also times
when truthful presentations are enacted with a greater sense of
deliberateness than are deceptive ones. Self-incriminating truths
are examples of this (cf. Kraut, 1978). When the tables are turned,
the cues are too; it is the truth tellers who seem less forthcoming,
more tense, and more negative, and it is they who tell stories that
sound less compelling.
The behaviors we have described as cues to deception, then,
may be more accurately described as cues to the hypothesized
processes (e.g., attempts to regulate thoughts, feelings, and behav-
iors) and to psychological states (e.g., investment in, and famil-
iarity with, the attempted performance). Experimental research
that directly tests the role of these processes in producing the
predicted cues remains to be done.
Cues to Truths, to Personalities, and to Situations
Our use of the term cues to deception could suggest that we are
describing the ways that liars behave, but in fact we are describing
the ways in which liars act differently than truth tellers. Experi-
mental manipulations and individual differences can be linked to
cues to deception by their implications for the behavior of liars or
truth tellers, or both. For example, in a study in which participants
were selected because they saw themselves as very independent
(B. M. DePaulo et al., 1991), the truthful life stories they told that
showcased their independence were more responsive to the exper-
imental manipulations than were the life stories that were
fabricated.
Caution is also in order in interpreting the moderators of cues to
deception. For example, when we say that eye contact is a cue to
deception when senders are motivated to get away with their lies
but not when they are not motivated, we are not necessarily
claiming that liars more often avoid eye contact when they have an
incentive to succeed than when they do not (though they may).
Instead, we are saying that the degree to which liars avoid eye
contact more than truth tellers do is greater when they are moti-
vated to succeed than when they are not. The cues we describe in
our analyses of motivation as a moderator are not cues to motiva-
tion (that is a different question); they are cues to deception under
different levels of motivation.
For example, people telling lies in high stakes circumstances
(e.g., while on trial for murder) may be expected to seem more
nervous than people telling comparable lies when the stakes are
lower (e.g., in traffic court). But truth tellers may also seem more
nervous in the high stakes setting. Nervousness would only be a
cue to deception in the murder trial if liars feel even more nervous
than truth tellers. It would be a stronger cue to deception in the
murder trial than in traffic court only if the degree to which liars
are more nervous than truth tellers is greater in the murder trial
than in traffic court.
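Stated compactly, in notation of our own rather than that of the original analyses: let $\bar{X}_{\text{lie},c}$ and $\bar{X}_{\text{truth},c}$ denote the mean apparent nervousness of liars and of truth tellers in context $c$. Then

$$\Delta_c = \bar{X}_{\text{lie},c} - \bar{X}_{\text{truth},c}, \qquad \Delta_{\text{murder trial}} - \Delta_{\text{traffic court}} > 0 .$$

Nervousness is a cue to deception in context $c$ only if $\Delta_c > 0$, and it is a stronger cue in the murder trial than in traffic court only if the second expression holds; that difference of differences is the Veracity $\times$ Stakes interaction discussed next.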
To make these important distinctions clearer in future research,
we suggest investigators adopt a reporting style that has rarely
been used in the deception literature: Mean levels of the cues
should be reported separately for truths and lies at each level of the
experimental manipulations and for each of the individual differ-
ence categories. Results could then be analyzed in the familiar
factorial design. That would clearly indicate, for example, whether people
seem more nervous when the stakes are high than when they are
low, regardless of whether they are lying or telling the truth;
whether they are more nervous when lying than when telling the
truth, regardless of the stakes; and whether the degree to which
they are more nervous when lying than when telling the truth is
greater when the stakes are high than when they are low.
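As a minimal sketch of this reporting format (ours, not part of the original review), the following uses invented data and the pandas library; the column names veracity, stakes, and nervousness are illustrative only:

```python
import pandas as pd

# Hypothetical ratings of how nervous each sender seemed, crossed by
# veracity (truth vs. lie) and stakes (low vs. high); values are invented.
data = pd.DataFrame({
    "veracity":    ["truth", "truth", "lie", "lie"] * 3,
    "stakes":      ["low", "high", "low", "high"] * 3,
    "nervousness": [3.1, 4.0, 3.4, 5.2, 2.9, 4.2, 3.6, 5.0, 3.2, 3.9, 3.3, 5.4],
})

# Mean level of the cue, reported separately for truths and lies at each
# level of the stakes manipulation, as recommended above.
cell_means = data.groupby(["stakes", "veracity"])["nervousness"].mean().unstack()
print(cell_means)

# The three questions in the text map onto the familiar factorial effects:
stakes_effect = cell_means.loc["high"].mean() - cell_means.loc["low"].mean()
veracity_effect = cell_means["lie"].mean() - cell_means["truth"].mean()
interaction = ((cell_means.loc["high", "lie"] - cell_means.loc["high", "truth"])
               - (cell_means.loc["low", "lie"] - cell_means.loc["low", "truth"]))
print(stakes_effect, veracity_effect, interaction)
```

Reporting the cell means themselves, rather than only the differences between lies and truths, makes it possible to see which of the three effects is doing the work.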
The implications for our understanding of individual differences
are also important. For example, we claimed above that liars make
an effort to seem credible whereas truth tellers take their credibility
for granted. This may seem readily countered by the familiar
finding from the social anxiety literature indicating that socially
anxious people rarely take anything positive about themselves for
granted (e.g., B. M. DePaulo, Kenny, Hoover, Webb, & Oliver,
1987; Leary & Kowalski, 1995; Schlenker & Leary, 1982), but that
is a main-effect finding about the ways in which socially anxious
people differ from socially secure people. If socially anxious
people do indeed feel insecure about the credibility of their truths,
but they feel even more insecure about the credibility of their lies,
then the predictions we outlined should apply to them as well as to
others.
When Confounded Designs Are of Practical Significance
All of the studies of transgressions were marred by a confound:
The people who lied were only those who committed a transgres-
sion, and the people who told truths were only those who did not.
It is not clear, then, whether any of the resulting cues were cues to
deception at all. They may have been cues to the transgression.
From a scientific stance, we have no unambiguous data from these
studies about the ways that lies differ from truths. However, when
considered from an applied perspective, these studies may tell
practitioners exactly what they want to know. We do not wish to
minimize the frequency or significance of false confessions (Kas-
sin, 1997), but ordinarily, credibility is not much at issue when
people admit to discrediting acts. Of greater interest are the ways
in which truthful denials can be distinguished from deceptive ones.
Blurring the Line Between Truths and Lies
In the studies we reviewed, the line between truths and lies was
drawn clearly. There were good methodological reasons for this.
To distinguish the characteristics of lies from those of truths, it is
of course necessary to first distinguish lies from truths. However,
outside of the lab, the line between them is often blurred.
The self-presentational perspective underscores the similarities
between truths and lies. Telling the whole truth and nothing but the
truth is rarely possible or desirable. All self-presentations are
edited. The question is one of whether the editing crosses the line
from the honest highlighting of aspects of identity that are most
relevant in the ongoing situation to a dishonest attempt to mislead.
This suggests that truthful and deceptive self-presentations may be
construed more aptly as aligned along a continuum rather than
sorted into clear and distinct categories. But there may be cate-
gorical aspects as well. For example, B. R. Schlenker (1982)
distinguished among self-presentations that fit within people's
private latitudes of acceptance, neutrality, or rejection (cf. Sherif &
Hovland, 1961). Self-presentations that are well within the bound-
aries of the latitude of acceptance are clearly truths. These presen-
tations capture attitudes, feelings, and personal qualities that peo-
ple unambiguously accept as their own. Self-presentations that are
at the cusp of the latitude of acceptance just barely pass as truths.
Self-presentations that are well within people's private latitudes of
rejection are clearly lies. The most elusive statements are those
falling in the latitude of neutrality; the editing of these self-
statements slips beyond the bounds of honesty but stops just short
of the brink of deceit. One implication of this conceptualization is
that the effect sizes we reported for cues to deception, though
generally small, may actually be overestimates of the true differ-
ences between truths and lies in everyday life (McCornack, 1997).
In the studies we reviewed, the truths and lies were typically well
within the bounds of acceptance and rejection. In many naturalistic
situations, they are not.
Definitional dilemmas also arise in situations in which neither
truths nor lies are entirely satisfying to people trying to decide
what to say. Bavelas, Black, Chovil, and Mullett (1990a, 1990b)
have described many of these intriguing predicaments in which
people may prefer not to lie but dislike the alternative of telling a
hurtful or costly truth. For example, what do people say when an
acquaintance asks their opinion of a class presentation that was
poorly organized and badly delivered? Bavelas et al.'s (1990a,
1990b) answer was that they equivocate: They make themselves
unclear; they refrain from answering the question directly and
avoid stating their true opinions. Yet, Bavelas et al. (1990a, 1990b)
argued that equivocal answers are truthful. When participants in
those studies read the responses to the classmate, they rated the
responses as closer to the truthful end of the scale (labeled as
"presentation was poorly organized and badly delivered") than
closer to the deceptive end (labeled as "well organized and well
delivered"). This criterion of truthfulness bypasses the question of
intentionality. The perceivers of self-presentations have the full
authority to make the judgments that determine what counts as
deceptive.
We are not yet ready to hand over that authority to the perceiv-
ers. Definitional issues aside, though, we think that studies of
social actors' responses to communicative dilemmas such as the
ones described by Bavelas et al. (1990a, 1990b) are important for
another reason. They point to some of the ways in which people's
self-presentational strategies can be more imaginative and their
goals more complex than much of the current literature on cues to
deception might suggest.
In a pair of studies, B. M. DePaulo and Bell (1996) created the
kind of dilemma that Bavelas et al. (1990a, 1990b) described.
Students chose their favorite and least favorite paintings from the
ones on display in the room, and then each interacted with an artist
who claimed that the student's least favorite painting was one of
her own. When students were asked what they thought of that
painting, they amassed misleading evidence (i.e., they mentioned
aspects of the painting they really did like while neglecting to note
all of the aspects they disliked), and they implied that they liked
the painting by emphasizing how much they disliked other paint-
ings in the room that were painted by other artists (without stating
directly that they liked the painting in question). B. M. DePaulo
and Bell (1996) posited a "defensibility postulate" to account for
these ploys. The students were trying to communicate in ways that
could be defended as truthful (e.g., they really did like the aspects
of the paintings they mentioned, and they really did dislike the
other artists' work) but that would also mislead the artist about
their true opinions. These strategies are not captured by any of the
objectively measured cues we reviewed. Yet, they provide hints
about what people are trying to accomplish in these challenging
situations that are perhaps more telling than what can be learned by
counting behaviors such as foot movements and speech
disturbances.
Laboratory Lies
The studies we reviewed included lies told by criminal suspects
and people in the news, but in most of the studies, college students
told truths and lies in laboratory experiments. One common cri-
tique of such studies (e.g., Miller & Stiff, 1993) is that the
participants typically are not highly motivated to get away with
their lies. In many of these studies, there were neither rewards for
successful lies nor sanctions for unsuccessful ones. Moreover, the
participants often told their truths and lies because they were
instructed to do so as part of the experimental procedures; they did
not freely choose to lie or to tell the truth. A related critique (Miller
& Stiff, 1993) is that in many studies, the degree of interaction
between the social actor and another person was minimal; some-
times participants told their truths and lies with little or no feed-
back or skepticism from any other person.
Although these critiques are often cast as attacks on the ecolog-
ical validity of studies of deception, as such they may be largely
wrong. The critiqued characteristics of studies of deception may in
fact aptly capture the nature of the vast majority of lies (B. M.
DePaulo, Ansfield, & Bell, 1996; B. M. DePaulo, Kashy, et al.,
1996). The everyday lies that people tell are rarely consequential.
In many instances, they are essentially obligatory. The guest who
is treated to an extensively prepared but unpalatable dinner rarely
feels free to say truthfully that the food was disgusting. The
students whose late-night partying has interfered with the timely
completion of their take-home exams tell lies to the course instruc-
tor just as readily as if they had been explicitly instructed to do so.
Furthermore, the little lies of everyday life rarely trigger an ex-
tended discourse. The host or hostess nods in appreciation, and the
course instructor waits for the students to depart before rolling his
or her eyes.
One way that truths and lies told in the laboratory really may fail
to reflect the dynamics of self-presentation outside of the lab is that
people may be more self-conscious about their truthful presenta-
tions than they are ordinarily. If this is so, then the feeling of
deliberateness that we have underscored in our analysis may
separate truths from lies less definitively in the lab. In this respect,
the effect sizes of the cues to deception we have reported may
underestimate the true magnitude of the effects.
Discriminating Cues to Deception From Cues
to Other Processes and States
We have combined the results of more than 1,300 estimates of
the relationship between behaviors and deceit; therefore, we can
name with some confidence some of the cues to deceit. But the
behaviors that are indicative of deception can be indicative of other
states and processes as well. In fact, we used a consideration of
such states and processes to generate predictions about the kinds of
behaviors we might expect to be indicative of deceit. However, the
issue of discriminant validity still looms large. For example, is it
possible to distinguish the anxiety that is sometimes associated
with lying from the fear of being unfairly accused of lying (e.g.,
Bond & Fahey, 1987) or even from anxiety that has no necessary
connection to deceit (e.g., nervousness about public speaking,
shyness, distress about a personal problem)? Lying sometimes
results in verbal and nonverbal inconsistencies, but so does genu-
ine ambivalence (B. M. DePaulo & Rosenthal, 1979a, 1979b). Can
the two be differentiated? Some attempts have been made to begin
to address these kinds of issues (e.g., deTurck & Miller, 1985), and
we expect to see some progress in the future. However, we also
expect most future reports to end with the same cautionary note we
issue here: Behavioral cues that are discernible by human perceiv-
ers are associated with deceit only probabilistically. To establish
definitively that someone is lying, further evidence is needed.
References
References marked with an asterisk indicate studies included in the
meta-analysis.
*Alonso-Quecuty, M. (1992). Deception detection and reality monitoring:
A new answer to an old question? In F. Losel, D. Bender, & T. Bliesener
(Eds.), Psychology and law: International perspectives (pp. 328332).
New York: de Gruyter.
Ambady, N., & Rosenthal, R. (1992). Thin slices of expressive behavior as
predictors of interpersonal consequences: A meta-analysis. Psychologi-
cal Bulletin, 111, 256274.
Anderson, D. E., DePaulo, B. M., & Ansfield, M. E. (2002). The devel-
opment of deception detection skill: A longitudinal study of same sex
friends. Personality and Social Psychology Bulletin, 28, 536545.
*Anolli, L., & Ciceri, R. (1997). The voice of deception: Vocal strategies
of naive and able liars. Journal of Nonverbal Behavior, 21, 259284.
Bagley, J., & Manelis, L. (1979). Effect of awareness on an indicator of
cognitive load. Perceptual and Motor Skills, 49, 591594.
Bargh, J. A. (1989). Conditional automaticity: Varieties of automatic
influence in social perception and cognition. In J. Uleman & J. A. Bargh
(Eds.), Unintended thought (pp. 351). New York: Guilford Press.
Baumeister, R. F. (1998). The self. In D. T. Gilbert, S. T. Fiske, & G.
Lindzey (Eds.), Handbook of social psychology (4th ed., Vol. 1, pp.
680740). Boston: McGraw-Hill.
Baumeister, R. F., Bratslavsky, E., Muraven, M., & Tice, D. M. (1998).
Ego depletion: Is the active self a limited resource? Journal of Person-
ality and Social Psychology, 74, 12521265.
Baumeister, R. F., Stillwell, A. M., & Heatherton, T. F. (1994). Guilt: An
interpersonal approach. Psychological Bulletin, 115, 243267.
Bavelas, J. B., Black, A., Chovil, N., & Mullett, J. (1990a). Equivocal
communication. Newbury Park, CA: Sage.
Bavelas, J. B., Black, A., Chovil, N., & Mullett, J. (1990b). Truths, lies,
and equivocations. The effects of conflicting goals on discourse. Journal
of Language and Social Psychology, 9, 135161.
Bell, K. L., & DePaulo, B. M. (1996). Liking and lying. Basic and Applied
Social Psychology, 18, 243266.
Ben-Shakhar, G., & Elaad, E. (in press). The validity of psychophysiolog-
ical detection of deception with the Guilty Knowledge Test: A meta-
analytic study. Journal of Applied Psychology.
Berger, C. R., Karol, S. H., & Jordan, J. M. (1989). When a lot of
knowledge is a dangerous thing: The debilitating effects of plan com-
plexity on verbal fluency. Human Communication Research, 16, 91
119.
*Berrien, F. K., & Huntington, G. H. (1943). An exploratory study of
pupillary responses during deception. Journal of Experimental Psychol-
ogy, 32, 443449.
Bok, S. (1978). Lying: Moral choice in public and private life. New York:
Pantheon.
Bond, C. F., Jr., & DePaulo, B. M. (2002). Accuracy and truth bias in
the detection of deception: A meta-analytic review. Manuscript in
preparation.
Bond, C. F., Jr., & Fahey, W. E. (1987). False suspicion and the misper-
ception of deceit. British Journal of Social Psychology, 26, 4146.
*Bond, C. F., Jr., Kahler, K. N., & Paolicelli, L. M. (1985). The miscom-
munication of deception: An adaptive perspective. Journal of Experi-
mental Social Psychology, 21, 331345.
*Bond, C. F., Jr., Omar, A., Mahmoud, A., & Bonser, R. N. (1990). Lie
detection across cultures. Journal of Nonverbal Behavior, 14, 189204.
*Bradley, M. T., & Janisse, M. P. (1979/1980). Pupil size and lie detection:
The effect of certainty on detection. Psychology: A Quarterly Journal of
Human Behavior, 16, 3339.
*Bradley, M. T., & Janisse, M. P. (1981). Accuracy demonstrations, threat,
and the detection of deception: Cardiovascular, electrodermal, and pu-
pillary measures. Psychophysiology, 18, 307315.
Brown, P., & Levinson, S. (1987). Politeness: Some universals in language
usage. New York: Cambridge University Press.
*Buller, D. B., & Aune, R. K. (1987). Nonverbal cues to deception among
intimates, friends, and strangers. Journal of Nonverbal Behavior, 11,
269290.
Buller, D. B., & Burgoon, J. K. (1996). Interpersonal deception theory.
Communication Theory, 3, 203242.
*Buller, D. B., Burgoon, J. K., Buslig, A., & Roiger, J. (1996). Testing
interpersonal deception theory: The language of interpersonal deception.
Communication Theory, 6, 268289.
*Buller, D. B., Comstock, J., Aune, R. K., & Strzyzewski, K. D. (1989).
The effect of probing on deceivers and truthtellers. Journal of Nonverbal
Behavior, 13, 155170.
*Burgoon, J. K., & Buller D. B. (1994). Interpersonal deception: III.
Effects of deceit on perceived communication and nonverbal behavior
dynamics. Journal of Nonverbal Behavior, 18, 155184.
*Burgoon, J. K., Buller, D. B., Afifi, W., White, C., & Buslig, A. (1996,
May). The role of immediacy in deceptive interpersonal interactions.
Paper presented at the annual meeting of the International Communica-
tion Association, Chicago, IL.
*Burgoon, J. K., Buller, D. B., Floyd, K., & Grandpre, J. (1996). Deceptive
realities: Sender, receiver, and observer perspectives in deceptive con-
versations. Communication Research, 23, 724748.
*Burgoon, J. K., Buller, D. B., Guerrero, L. K., Afifi, W. A., & Feldman,
C. M. (1996). Interpersonal deception: XII. Information management
dimensions underlying deceptive and truthful messages. Communication
Monographs, 63, 5069.
*Burns, J. A., & Kintz, B. L. (1976). Eye contact while lying during an
interview. Bulletin of the Psychonomic Society, 7, 8789.
Butterworth, B. (1978). Maxims for studying conversations. Semiotica, 24,
317339.
Butterworth, B., & Goldman-Eisler, F. (1979). Recent studies on cognitive
rhythm. In A. W. Siegman & S. Feldstein (Eds.), Of speech and time:
Temporal patterns in interpersonal contexts (pp. 211224). Hillsdale,
NJ: Erlbaum.
Cacioppo, J. T., Petty, R. E., & Tassinary, L. G. (1989). Social psycho-
physiology: A new look. In L. Berkowitz (Ed.), Advances in experimen-
tal social psychology (Vol. 22, pp. 3991). San Diego, CA: Academic
Press.
Camden, C., Motley, M. T., & Wilson, A. (1984). White lies in interper-
sonal communication: A taxonomy and preliminary investigation of
social motivations. Western Journal of Speech Communication, 48,
309325.
Carver, C. S., & Scheier, M. F. (1981). Attention and self-regulation: A
control-theory approach to human behavior. New York: Springer-
Verlag.
*Chiba, H. (1985). Analysis of controlling facial expression when experi-
encing negative affect on an anatomical basis. Journal of Human De-
velopment, 21, 2229.
Christenfeld, N. J. S. (1994). Options and ums. Journal of Language and
Social Psychology, 13, 192199.
*Christensen, D. (1980). Decoding of intended versus unintended nonver-
bal messages as a function of social skill and anxiety. Unpublished
doctoral dissertation, University of Connecticut, Storrs.
*Ciofu, I. (1974). Audiospectral analysis in lie detection. Archiv fur Psy-
chologie, 126, 170180.
*Cody, M. J., Lee, W. S., & Chao, E. Y. (1989). Telling lies: Correlates of
deception among Chinese. In J. P. Forgas & J. M. Innes (Eds.), Recent
advances in social psychology: An international perspective (pp. 359
368). Amsterdam: North-Holland.
*Cody, M. J., Marston, P. J., & Foster, M. (1984). Deception: Paralinguis-
tic and verbal leakage. In R. N. Bostrom & B. H. Westley (Eds.),
Communication yearbook 8 (pp. 464490). Beverly Hills, CA: Sage.
*Cody, M. J., & O'Hair, H. D. (1983). Nonverbal communication and
deception: Differences in deception cues due to gender and communi-
cator dominance. Communication Monographs, 50, 175192.
Cohen, J. (1988). Statistical power analysis for the behavioral sciences
(Rev. ed.). Hillsdale, NJ: Erlbaum.
Cooper, H. (1998). Synthesizing research: A guide for literature reviews
(3rd ed.). Beverly Hills, CA: Sage.
*Craig, K. D., Hyde, S. A., & Patrick, C. J. (1991). Genuine, suppressed
and faked facial behavior during exacerbation of chronic low back pain.
Pain, 46, 161172.
*Cutrow, R. J., Parks, A., Lucas, N., & Thomas, K. (1972). The objective
use of multiple physiological indices in the detection of deception.
Psychophysiology, 9, 578588.
DePaulo, B. M. (1992). Nonverbal behavior and self-presentation. Psycho-
logical Bulletin, 111, 203243.
DePaulo, B. M. (1994). Spotting lies: Can humans do better? Current
Directions in Psychological Science, 3, 8386.
DePaulo, B. M., Ansfield, M. E., & Bell, K. L. (1996). Theories about
deception and paradigms for studying it: A critical appraisal of Buller
and Burgoon's interpersonal deception theory and research. Communi-
cation Theory, 3, 297310.
DePaulo, B. M., Ansfield, M. E., Kirkendol, S. E., & Boden, J. M. (2002).
Serious lies. Manuscript submitted for publication.
DePaulo, B. M., & Bell, K. L. (1996). Truth and investment: Lies are told
to those who care. Journal of Personality and Social Psychology, 71,
703716.
*DePaulo, B. M., Blank, A. L., Swain, G. W., & Hairfield, J. G. (1992).
Expressiveness and expressive control. Personality and Social Psychol-
ogy Bulletin, 18, 276285.
*DePaulo, B. M., Epstein, J. A., & LeMay, C. S. (1990). Responses of the
socially anxious to the prospect of interpersonal evaluation. Journal of
Personality, 58, 623640.
DePaulo, B. M., & Friedman, H. S. (1998). Nonverbal communication. In
D. Gilbert, S. T. Fiske, & G. Lindzey (Eds.), Handbook of social
psychology (4th ed., Vol. 2, pp. 340). New York: Random House.
*DePaulo, B. M., Jordan, A., Irvine, A., & Laser, P. S. (1982). Age
changes in the detection of deception. Child Development, 53, 701709.
DePaulo, B. M., & Kashy, D. A. (1998). Everyday lies in close and casual
relationships. Journal of Personality and Social Psychology, 74, 6379.
DePaulo, B. M., Kashy, D. A., Kirkendol, S. E., Wyer, M. M., & Epstein,
J. A. (1996). Lying in everyday life. Journal of Personality and Social
Psychology, 70, 979995.
DePaulo, B. M., Kenny, D. A., Hoover, C., Webb, W., & Oliver, P. (1987).
Accuracy of person perception: Do people know what kinds of impres-
sions they convey? Journal of Personality and Social Psychology, 52,
303315.
DePaulo, B. M., & Kirkendol, S. E. (1989). The motivational impairment
effect in the communication of deception. In J. C. Yuille (Ed.), Credi-
bility assessment (pp. 5170). Dordrecht, the Netherlands: Kluwer
Academic.
DePaulo, B. M., Kirkendol, S. E., Tang, J., & O'Brien, T. P. (1988). The
motivational impairment effect in the communication of deception:
Replications and extensions. Journal of Nonverbal Behavior, 12, 177
202.
*DePaulo, B. M., Lanier, K., & Davis, T. (1983). Detecting the deceit of
the motivated liar. Journal of Personality and Social Psychology, 45,
10961103.
*DePaulo, B. M., LeMay, C. S., & Epstein, J. A. (1991). Effects of
importance of success and expectations for success on effectiveness at
deceiving. Personality and Social Psychology Bulletin, 17, 1424.
*DePaulo, B. M., & Rosenthal, R. (1979a). Ambivalence, discrepancy, and
deception in nonverbal communication. In R. Rosenthal (Ed.), Skill in
nonverbal communication (pp. 204248). Cambridge, MA: Oelge-
schlager, Gunn, & Hain.
DePaulo, B. M., & Rosenthal, R. (1979b). Telling lies. Journal of Person-
ality and Social Psychology, 37, 17131722.
*DePaulo, B. M., Rosenthal, R., Green, C. R., & Rosenkrantz, J. (1982).
Diagnosing deceptive and mixed messages from verbal and nonverbal
cues. Journal of Experimental Social Psychology, 18, 433446.
*DePaulo, B. M., Rosenthal, R., Rosenkrantz, J., & Green, C. R. (1982).
Actual and perceived cues to deception: A closer look at speech. Basic
and Applied Social Psychology, 3, 291312.
DePaulo, B. M., Stone, J. I., & Lassiter, G. D. (1985a). Deceiving and
detecting deceit. In B. R. Schlenker (Ed.), The self and social life (pp.
323370). New York: McGraw-Hill.
DePaulo, B. M., Stone, J. I., & Lassiter, G. D. (1985b). Telling ingratiating
lies: Effects of target sex and target attractiveness on verbal and non-
verbal deceptive success. Journal of Personality and Social Psychol-
ogy, 48, 11911203.
*DePaulo, P. J., & DePaulo, B. M. (1989). Can attempted deception by
salespersons and customers be detected through nonverbal behavioral
cues? Journal of Applied Social Psychology, 19, 15521577.
*deTurck, M. A., & Miller, G. R. (1985). Deception and arousal: Isolating
the behavioral correlates of deception. Human Communication Re-
search, 12, 181201.
di Battista, P., & Abrahams, M. (1995). The role of relational information
in the production of deceptive messages. Communication Reports, 8,
120127.
*Dulaney, E. F., Jr. (1982). Changes in language behavior as a function of
veracity. Human Communication Research, 9, 7582.
Ekman, P. (1992). Telling lies. New York: Norton. (Original work pub-
lished 1985)
Ekman, P., & Friesen, W. V. (1969). Nonverbal leakage and clues to
deception. Psychiatry, 32, 88106.
*Ekman, P., & Friesen, W. V. (1972). Hand movements. Journal of
Communication, 22, 353374.
Ekman, P., & Friesen, W. V. (1978). The facial action coding system. Palo
Alto, CA: Consulting Psychologists Press.
*Ekman, P., Friesen, W. V., & O'Sullivan, M. (1988). Smiles while lying.
Journal of Personality and Social Psychology, 54, 414420.
*Ekman, P., Friesen, W. V., & Scherer, K. R. (1976). Body movement and
voice pitch in deceptive interactions. Semiotica, 16, 2327.
*Ekman, P., Friesen, W. V., & Simons, R. C. (1985). Is the startle reaction
an emotion? Journal of Personality and Social Psychology, 49, 1416
1426.
Ekman, P., Levenson, R. W., & Friesen, W. V. (1983, September 16).
Autonomic nervous system activity distinguishes among emotions. Sci-
ence, 221, 12081210.
*Ekman, P., O'Sullivan, M., Friesen, W. V., & Scherer, K. R. (1991). Face,
voice, and body in detecting deceit. Journal of Nonverbal Behavior, 15,
125135.
*Elliott, G. L. (1979). Some effects of deception and level of self-
monitoring on planning and reacting to a self-presentation. Journal of
Personality and Social Psychology, 37, 12821292.
*Exline, R. V., Thibaut, J., Hickey, C. B., & Gumpert, P. (1970). Visual
interaction in relation to Machiavellianism and an unethical act. In R.
Christie & F. Geis (Eds.), Studies in Machiavellianism (pp. 5375). New
York: Academic Press.
*Feeley, T. H., & deTurck, M. A. (1998). The behavioral correlates of
sanctioned and unsanctioned deceptive communication. Journal of Non-
verbal Behavior, 22, 189204.
Fehr, B. J., & Exline, R. V. (1987). Social visual interaction: Conceptual
and literature review. In A. W. Siegman & S. Feldstein (Eds.), Nonver-
bal behavior and communication (2nd ed., pp. 225326). Hillsdale, NJ:
Erlbaum.
Feldman, R. S., Devin-Sheehan, L., & Allen, V. L. (1978). Nonverbal cues
as indicants of verbal dissembling. American Educational Research
Journal, 15, 217231.
Feldman, R. S., Forrest, J. A., & Happ, B. R. (2002). Self-presentation and
verbal deception: Do self-presenters lie more? Basic and Applied Social
Psychology, 24, 163170.
Field, A. P. (2001). Meta-analysis of correlation coefficients: A Monte
Carlo comparison of fixed- and random-effects methods. Psychological
Methods, 6, 161180.
*Fiedler, K. (1989). Suggestion and credibility: Lie detection based on
content-related cues. In V. A. Gheorghiu, P. Netter, H. J. Eysenck, & R.
Rosenthal (Eds.), Suggestion and suggestibility (pp. 323335). New
York: Springer-Verlag.
*Fiedler, K., Schmid, J., Kurzenhauser, S., & Schroter, V. (1997). Lie
detection as an attribution process: The anchoring effect revisited.
Unpublished manuscript.
*Fiedler, K., & Walka, I. (1993). Training lie detectors to use nonverbal
cues instead of global heuristics. Human Communication Research, 20,
199223.
*Finkelstein, S. (1978). The relationship between physical attractiveness
and nonverbal behaviors. Unpublished honors thesis, Hampshire Col-
lege, Amherst, MA.
Fleming, J. H. (1994). Multiple-audience problems, tactical communica-
tion, and social interaction: A relational-regulation perspective. Ad-
vances in Experimental Social Psychology, 26, 215292.
Fleming, J. H., & Rudman, L. A. (1993). Between a rock and a hard place:
Self-concept regulating and communicative properties of distancing
behaviors. Journal of Personality and Social Psychology, 64, 4459.
*Frank, M. G. (1989). Human lie detection ability as a function of the liar’s
motivation. Unpublished doctoral dissertation, Cornell University,
Ithaca, NY.
*Gagnon, L. R. (1975). The encoding and decoding of cues to deception.
Unpublished doctoral dissertation, Arizona State University, Tempe.
*Galin, K. E., & Thorn, B. E. (1993). Unmasking pain: Detection of
deception in facial expressions. Journal of Social and Clinical Psychol-
ogy, 12, 182197.
Gibbons, F. X. (1990). Self-attention and behavior: A review and theoret-
ical update. Advances in experimental social psychology, 23, 249303.
Gilbert, D. T., & Krull, D. S. (1988). Seeing less and knowing more: The
benefits of perceptual ignorance. Journal of Personality and Social
Psychology, 54, 193202.
Gilbert, D. T., Krull, D. S., & Pelham, B. W. (1988). Of thoughts unspo-
ken: Social inference and the self-regulation of behavior. Journal of
Personality and Social Psychology, 55, 685694.
Goldman-Eisler, F. (1968). Psycholinguistics: Experiments in spontaneous
speech. New York: Academic Press.
*Goldstein, E. R. (1923). Reaction times and the consciousness of decep-
tion. American Journal of Psychology, 34, 562581.
*Greene, J. O., OHair, H. D., Cody, M. J., & Yen, C. (1985). Planning and
control of behavior during deception. Human Communication Re-
search, 11, 335364.
Grice, P. (1989). Studies in the way of words. Cambridge, MA: Harvard
University Press.
Gross, J. J. (1998). Antecedent and response-focused emotion regulation:
Divergent consequences for experience, expression, and physiology.
Journal of Personality and Social Psychology, 74, 224237.
Gross, J. J., & Levenson, R. W. (1993). Emotional suppression: Physiol-
ogy, self-report, and expressive behavior. Journal of Personality and
Social Psychology, 64, 970986.
*Hadjistavropoulos, H. D., & Craig, K. D. (1994). Acute and chronic low
back pain: Cognitive, affective, and behavioral dimensions. Journal of
Consulting and Clinical Psychology, 62, 341349.
*Hadjistavropoulos, H. D., Craig, K. D., Hadjistavropoulos, T., & Poole,
G. D. (1996). Subjective judgments of deception in pain expression:
Accuracy and errors. Pain, 65, 251258.
*Hall, M. E. (1986). Detecting deception in the voice: An analysis of
fundamental frequency, syllabic duration, and amplitude of the human
voice. Unpublished doctoral dissertation, Michigan State University,
East Lansing.
Hample, D. (1980). Purposes and effects of lying. Southern Speech Com-
munication Journal, 46, 3347.
Harrigan, J. A., & O'Connell, D. M. (1996). Facial movements during
anxiety states. Personality and Individual Differences, 21, 205212.
*Harrison, A. A., Hwalek, M., Raney, D., & Fritz, J. G. (1978). Cues to
deception in an interview situation. Social Psychology, 41, 156161.
Hedges, L. V., & Becker, B. J. (1986). Statistical methods in the meta-
analysis of research on gender differences. In J. S. Hyde & M. C. Linn
(Eds.), The psychology of gender: Advances through meta-analysis (pp.
1450). Baltimore: Johns Hopkins University Press.
Hedges, L. V., & Olkin, I. (1985). Statistical methods for meta-analysis.
Orlando, FL: Academic Press.
*Heilveil, I. (1976). Deception and pupil size. Journal of Clinical Psychol-
ogy, 32, 675676.
*Heilveil, I., & Muehleman, J. T. (1981). Nonverbal clues to deception in
a psychotherapy analogue. Psychotherapy: Theory, Research, and Prac-
tice, 18, 329335.
*Heinrich, C. U., & Borkenau, P. (1998). Deception and deception detec-
tion: The role of cross-modal inconsistency. Journal of Personality, 66,
687712.
*Hemsley, G. D. (1977). Experimental studies in the behavioral indicants
of deception. Unpublished doctoral dissertation, University of Toronto,
Toronto, Ontario, Canada.
*Hernandez-Fernaud, E., & Alonso-Quecuty, M. (1997). The cognitive
interview and lie detection: A new magnifying glass for Sherlock
Holmes? Applied Cognitive Psychology, 11, 5568.
Hess, E. H., & Polt, J. M. (1963, March 13). Pupil size in relation to mental
activity during simple problem-solving. Science, 140, 11901192.
*Hess, U. (1989). On the dynamics of emotional facial expression. Un-
published doctoral dissertation, Dartmouth College, Hanover, NH.
*Hess, U., & Kleck, R. E. (1990). Differentiating emotion elicited and
deliberate emotional facial expressions. European Journal of Social
Psychology, 20, 369385.
*Hess, U., & Kleck, R. E. (1994). The cues decoders use in attempting to
differentiate emotion-elicited and posed facial expressions. European
Journal of Social Psychology, 24, 367381.
*Hocking, J. E., & Leathers, D. G. (1980). Nonverbal indicators of decep-
tion: A new theoretical perspective. Communication Monographs, 47,
119131.
Holland, M. K., & Tarlow, G. (1972). Blinking and mental load. Psycho-
logical Reports, 31, 119127.
Holland, M. K., & Tarlow, G. (1975). Blinking and thinking. Psycholog-
ical Reports, 41, 403406.
Holtgraves, T. (1986). Language structure in social interaction: Perceptions
of direct and indirect speech acts and interactants who use them. Journal
of Personality and Social Psychology, 51, 305314.
*Horvath, F. S. (1973). Verbal and nonverbal clues to truth and deception
during polygraph examinations. Journal of Police Science and Admin-
istration, 1, 138152.
*Horvath, F. (1978). An experimental comparison of the psychological
stress evaluator and the galvanic skin response in detection of deception.
Journal of Applied Psychology, 63, 338344.
*Horvath, F. (1979). Effect of different motivational instructions on de-
tection of deception with the psychological stress evaluator and the
galvanic skin response. Journal of Applied Psychology, 64, 323330.
*Horvath, F., Jayne, B., & Buckley, J. (1994). Differentiation of truthful
and deceptive criminal suspects in behavior analysis interviews. Journal
of Forensic Sciences, 39, 793807.
Jacobs, S., Brashers, D., & Dawson, E. J. (1996). Truth and deception.
Communication Monographs, 63, 98103.
*Janisse, M. P., & Bradley, M. T. (1980). Deception, information, and the
pupillary response. Perceptual and Motor Skills, 50, 748750.
Johnson, M. K., & Raye, C. L. (1981). Reality monitoring. Psychological
Bulletin, 88, 6785.
Jones, W. H., & Burdette, M. P. (1993). Betrayal in close relationships. In
A. L. Weber & J. Harvey (Eds.), Perspectives on close relationships (pp.
114). New York: Allyn & Bacon.
Kahneman, D. (1973). Attention and effort. Englewood Cliffs, NJ:
Prentice-Hall.
Kahneman, D., & Beatty, J. (1967). Pupillary responses in a pitch-
discrimination task. Perception and Psychophysics, 2, 101105.
Kahneman, D., Tursky, B., Shapiro, D., & Crider, A. (1969). Pupillary,
heart rate, and skin resistance changes during a mental task. Journal of
Experimental Psychology, 79, 164167.
Kappas, A., Hess, U., & Scherer, K. R. (1991). Voice and emotion. In R. S.
Feldman & B. Rime (Eds.), Fundamentals of nonverbal behavior (pp.
200238). Cambridge, England: Cambridge University Press.
Kashy, D. A., & DePaulo, B. M. (1996). Who lies? Journal of Personality
and Social Psychology, 70, 10371051.
Kasl, S. V., & Mahl, G. F. (1965). The relationship of disturbances and
hesitations in spontaneous speech to anxiety. Journal of Personality and
Social Psychology, 1, 425433.
Kassin, S. M. (1997). The psychology of confession evidence. American
Psychologist, 52, 221233.
Keltner, D., & Buswell, B. N. (1996). Evidence for the distinctness of
embarrassment, shame, and guilt: A study of recalled antecedents and
facial expressions of emotions. Cognition & Emotion, 10, 155171.
Keltner, D., & Harker, L. A. (1998). Forms and functions of the nonverbal
signal of shame. In P. Gilbert & B. Andrews (Eds.), Interpersonal
approaches to shame (pp. 7898). Oxford, England: Oxford University
Press.
Keltner, D., Young, R. C., Heerey, E. A., Oemig, C., & Monarch, N. D.
(1998). Teasing in hierarchical and intimate relations. Journal of Per-
sonality and Social Psychology, 75, 12311247.
*Kennedy, J., & Coe, W. C. (1994). Nonverbal signs of deception during
posthypnotic amnesia: A brief communication. International Journal of
Clinical and Experimental Hypnosis, 42, 1319.
Kimble, C. E., & Seidel, S. D. (1991). Vocal signs of confidence. Journal
of Nonverbal Behavior, 15, 99105.
Kimble, G. A., & Perlmuter, L. C. (1970). The problem of volition.
Psychological Review, 77, 361384.
*Knapp, M. L., Hart, R. P., & Dennis, H. S. (1974). An exploration of
deception as a communication construct. Human Communication Re-
search, 1, 1529.
*Kohnken, G., Schimossek, E., Aschermann, E., & Hofer, E. (1995). The
cognitive interview and the assessment of the credibility of adults'
statements. Journal of Applied Psychology, 80, 671684.
*Koper, R. J., & Sahlman, J. M. (2001). The behavioral correlates of
naturally occurring, high motivation deceptive communication. Manu-
script submitted for publication.
*Krauss, R. M. (1981). Impression formation, impression management,
and nonverbal behaviors. In E. T. Higgins, C. P. Herman, & M. P. Zanna
(Eds.), Social cognition: Vol. 1. The Ontario Symposium (pp. 323341).
Hillsdale, NJ: Erlbaum.
*Kraut, R. E. (1978). Verbal and nonverbal cues in the perception of lying.
Journal of Personality and Social Psychology, 36, 380391.
*Kraut, R. E., & Poe, D. (1980). Behavioral roots of person perception:
The deception judgments of customs inspectors and laymen. Journal of
Personality and Social Psychology, 39, 784798.
*Kuiken, D. (1981). Nonimmediate language style and inconsistency be-
tween private and expressed evaluations. Journal of Experimental Social
Psychology, 17, 183196.
*Kurasawa, T. (1988). Effects of contextual expectations on deception-
detection. Japanese Psychological Research, 30, 114121.
*Landry, K. L., & Brigham, J. C. (1992). The effect of training in
Criteria-Based Content Analysis on the ability to detect deception in
adults. Law and Human Behavior, 16, 663676.
Leary, M. R., & Kowalski, R. (1995). Social anxiety. New York: Guilford
Press.
Levenson, R. W., Ekman, P., & Friesen, W. V. (1990). Voluntary facial
action generates emotion-specific autonomic nervous system activity.
Psychophysiology, 27, 363384.
Levine, T. R., & McCornack, S. A. (1992). Linking love and lies: A formal
test of the McCornack and Parks model of deception detection. Journal
of Social and Personal Relationships, 9, 143154.
Levine, T. R., & McCornack, S. A. (1996). A critical analysis of the
behavioral adaptation explanation of the probing effect. Human Com-
munication Research, 22, 575588.
Levine, T. R., & McCornack, S. A. (2002). Behavioral adaptation, confi-
dence, and heuristic-based explanations of the probing effect. Human
Communication Research, 27, 471502.
Lewis, B. P., & Linder, D. E. (1997). Thinking about choking? Attentional
processes and paradoxical performance. Personality and Social Psychol-
ogy Bulletin, 23, 937944.
Lewis, M., Stanger, C., & Sullivan, M. W. (1989). Deception in 3-year-
olds. Developmental Psychology, 25, 439443.
Lippard, P. V. (1988). Ask me no questions, I'll tell you no lies:
Situational exigencies for interpersonal deception. Western Journal of
Speech Communication, 52, 91103.
Macdonald, J., & Michaud, D. (1987). The confession: Interrogation and
criminal profiles for police officers. Denver: Apache Press.
Mahl, G. F. (1987). Explorations in nonverbal and vocal behavior. Hills-
dale, NJ: Erlbaum.
Malone, B. E., Adams, R. B., Anderson, D. E., Ansfield, M. E., & DePaulo,
B. M. (1997, May). Strategies of deception and their correlates over the
course of friendship. Poster presented at the annual meeting of the
American Psychological Society, Washington, DC.
Malone, B. E., & DePaulo, B. M. (2001). Measuring sensitivity to decep-
tion. In J. A. Hall & F. Bernieri (Eds.), Interpersonal sensitivity: Theory,
measurement, and application (pp. 103124). Mahwah, NJ: Erlbaum.
Malone, B. E., DePaulo, B. M., Adams, R. B., & Cooper, H. (2002).
Perceived cues to deception: A meta-analytic review. Unpublished
manuscript, University of Virginia, Charlottesville.
*Manaugh, T. S., Wiens, A. N., & Matarazzo, J. D. (1970). Content
saliency and interviewee speech behavior. Journal of Clinical Psychol-
ogy, 26, 1724.
Markus, H. (1977). Self-schemata and processing information about the
self. Journal of Personality and Social Psychology, 35, 6378.
*Marston, W. M. (1920). Reaction-time symptoms of deception. Journal of
Experimental Psychology, 3, 7287.
*Matarazzo, J. D., Wiens, A. N., Jackson, R. H., & Manaugh, T. S. (1970).
Interviewee speech behavior under conditions of endogenously-present
and exogenously-induced motivational states. Journal of Clinical Psy-
chology, 26, 141148.
*McClintock, C. C., & Hunt, R. G. (1975). Nonverbal indicators of affect
and deception in an interview setting. Journal of Applied Social Psy-
chology, 5, 5467.
McCornack, S. A. (1992). Information manipulation theory. Communica-
tion Monographs, 59, 116.
McCornack, S. A. (1997). The generation of deceptive messages. In J. O.
Greene (Ed.), Message production (pp. 91126). Mahwah, NJ: Erlbaum.
McCornack, S. A., & Levine, T. R. (1990). When lies are uncovered:
Emotional and relational outcomes of deception. Communication Mono-
graphs, 57, 119138.
*Mehrabian, A. (1971). Nonverbal betrayal of feeling. Journal of Exper-
imental Research in Personality, 5, 6473.
Mehrabian, A. (1972). Nonverbal communication. Chicago: Aldine
Atherton.
Metts, S. (1989). An exploratory investigation of deception in close rela-
tionships. Journal of Social and Personal Relationships, 6, 159179.
Metts, S. (1994). Relational transgressions. In W. R. Cupach & B. H.
Spitzberg (Eds.), The dark side of interpersonal communication (pp.
217239). Hillsdale, NJ: Erlbaum.
*Miller, G. R., deTurck, M. A., & Kalbfleisch, P. J. (1983). Self-
monitoring, rehearsal, and deceptive communication. Human Commu-
nication Research, 10, 97117.
Miller, G. R., & Stiff, J. B. (1993). Deceptive communication. Newbury
Park, CA: Sage.
*Motley, M. T. (1974). Acoustic correlates of lies. Western Speech, 38,
8187.
Muraven, M., Tice, D. M., & Baumeister, R. F. (1998). Self-control as a
limited resource: Regulatory depletion patterns. Journal of Personality
and Social Psychology, 74, 774789.
Neiss, R. (1988). Reconceptualizing arousal: Psychobiological states in
motor performance. Psychological Bulletin, 103, 345366.
*O'Hair, D., & Cody, M. J. (1987). Gender and vocal stress differences
during truthful and deceptive information sequences. Human Rela-
tions, 40, 113.
*O'Hair, D., Cody, M. J., Wang, X., & Chen, E. (1990). Vocal stress and
deception detection among Chinese. Communication Quarterly, 38,
158169.
*O'Hair, H. D., Cody, M. J., & McLaughlin, M. L. (1981). Prepared lies,
spontaneous lies, Machiavellianism, and nonverbal communication. Hu-
man Communication Research, 7, 325339.
*Pennebaker, J. W., & Chew, C. H. (1985). Behavioral inhibition and
electrodermal activity during deception. Journal of Personality and
Social Psychology, 49, 14271433.
Pontari, B. A., & Schlenker, B. R. (2000). The influence of cognitive load
on self-presentation: Can cognitive busyness help as well as harm social
performance? Journal of Personality and Social Psychology, 78, 1092
1108.
*Porter, S., & Yuille, J. C. (1996). The language of deceit: An investigation
of the verbal clues to deception in the interrogation context. Law and
Human Behavior, 20, 443458.
*Potamkin, G. G. (1982). Heroin addicts and nonaddicts: The use and
detection of nonverbal deception cues. Unpublished doctoral disserta-
tion, California School of Professional Psychology, Los Angeles.
Reid, J. E., & Arthur, R. O. (1953). Behavior symptoms of lie detector
subjects. Journal of Criminal Law, Criminology, and Police Science, 44,
104108.
Richards, K. M., & Gross, J. J. (1999). Composure at any cost? The
cognitive consequences of emotion suppression. Personality and Social
Psychology Bulletin, 25, 10331044.
*Riggio, R. E., & Friedman, H. S. (1983). Individual differences and cues
to deception. Journal of Personality and Social Psychology, 45, 899
915.
Roney, C. J., Higgins, E. T., & Shah, J. (1995). Goals and framing: How
outcome focus influences motivation and emotion. Personality and
Social Psychology Bulletin, 21, 11511160.
Rosenthal, R. (1991). Meta-analytic procedures for social research (Rev.
ed.). Newbury Park, CA: Sage.
*Ruby, C. L., & Brigham, J. C. (1998). Can Criteria-Based Content
Analysis distinguish between true and false statements of African-
American speakers? Law and Human Behavior, 22, 369388.
*Rybold, V. S. (1994). Paralinguistic cue leakage during deception: A
comparison between Asians and Euroamericans. Unpublished master's
thesis, California State University, Fullerton.
SAS Institute. (1985). SAS user's guide: Statistics (Version 5). Cary, NC:
Author.
*Sayenga, E. R. (1983). Linguistic and paralinguistic indices of deception.
Unpublished doctoral dissertation, University of Michigan, Ann Arbor.
Schachter, S., Christenfeld, N. J. S., Ravina, B., & Bilous, F. (1991).
Speech disfluency and the structure of knowledge. Journal of Person-
ality and Social Psychology, 20, 362367.
Scheff, T. J. (2001). Emotions, the social bond and human reality: Part/
whole analysis. Cambridge, England: Cambridge University Press.
Scherer, K. R. (1982). Methods of research on vocal communication:
Paradigms and parameters. In K. R. Scherer & P. Ekman (Eds.), Hand-
book of methods in nonverbal behavior research (pp. 136198). Cam-
bridge, England: Cambridge University Press.
Scherer, K. R. (1986). Vocal affect expression: A review and a model for
future research. Psychological Bulletin, 99, 143165.
*Scherer, K. R., Feldstein, S., Bond, R. N., & Rosenthal, R. (1985). Vocal
cues to deception: A comparative channel approach. Journal of Psycho-
linguistic Research, 14, 409425.
Schlenker, B. R. (1980). Impression management: The self-concept, social
identity, and interpersonal relations. Monterey, CA: Brooks/Cole.
Schlenker, B. R. (1982). Translating actions into attitudes: An identity-
analytic approach to the explanation of social conduct. Advances in
Experimental Social Psychology, 15, 193247.
Schlenker, B. R. (1985). Identity and self-identification. In B. R. Schlenker
(Ed.), The self and social life (pp. 6599). New York: McGraw-Hill.
Schlenker, B. R. (2002). Self-presentation. In M. R. Leary & J. P. Tangney
(Eds.), Handbook of self and identity (pp. 492518). New York: Guil-
ford Press.
Schlenker, B. R., Britt, T. W., Pennington, J., Murphy, R., & Doherty, K.
(1994). The triangle model of responsibility. Psychological Review, 101,
632652.
Schlenker, B. R., & Leary, M. R. (1982). Social anxiety and self-presen-
tation: A conceptualization and model. Psychological Bulletin, 92, 641
669.
Schlenker, B. R., & Pontari, B. A. (2000). The strategic control of infor-
mation: Impression management and self-presentation in daily life. In A.
Tesser, R. Felson, & J. Suls (Eds.), Perspectives on self and identity (pp.
199232). Washington, DC: American Psychological Association.
Schlenker, B. R., Pontari, B. A., & Christopher, A. N. (2001). Excuses and
character: Personal and social implications of excuses. Personality and
Social Psychology Review, 5, 1532.
*Schneider, S. M., & Kintz, B. L. (1977). The effect of lying upon foot and
leg movements. Bulletin of the Psychonomic Society, 10, 451453.
Scott, T. R., Wells, W. H., Wood, D. Z., & Morgan, D. I. (1967). Pupillary
response and sexual interest reexamined. Journal of Clinical Psychol-
ogy, 23, 433438.
Searle, J. R. (1975). Indirect speech acts. In P. Cole & J. L. Morgan (Eds.),
Syntax and semantics: Vol. 3. Speech acts (pp. 5982). New York:
Academic Press.
Shennum, W. A., & Bugental, D. B. (1982). The development of control
over affective expression in nonverbal behavior. In R. S. Feldman (Ed.),
Development of nonverbal behavior in children (pp. 101121). New
York: Springer-Verlag.
Sherif, M., & Hovland, C. I. (1961). Social judgment. New Haven, CT:
Yale University Press.
Siegman, A. W. (1987). The telltale voice: Nonverbal messages of verbal
communication. In A. W. Siegman & S. Feldstein (Eds.), Nonverbal
behavior and communication (2nd ed., pp. 351434). Hillsdale, NJ:
Erlbaum.
Siegman, A. W., & Reynolds, M. A. (1983). Self-monitoring and speech in
feigned and unfeigned lying. Journal of Personality and Social Psychol-
ogy, 45, 13251333.
Simpson, H. M., & Molloy, F. M. (1971). Effects of audience anxiety on
pupil size. Psychophysiology, 8, 491496.
*Sitton, S. C., & Griffin, S. T. (1981). Detection of deception from clients'
eye contact patterns. Journal of Counseling Psychology, 28, 269271.
Slivken, K. E., & Buss, A. H. (1984). Misattribution and speech anxiety.
Journal of Personality and Social Psychology, 47, 396402.
Smith, E. R. (1998). Mental representation and memory. In D. T. Gilbert,
S. T. Fiske, & G. Lindzey (Eds.), Handbook of social psychology (4th
ed., Vol. 1, pp. 391445). Boston: McGraw-Hill.
Sparks, G. G., & Greene, J. O. (1992). On the validity of nonverbal
indicators as measures of physiological arousal. Human Communication
Research, 18, 445471.
*Sporer, S. L. (1997). The less traveled road to truth: Verbal cues in
deception detection in accounts of fabricated and self-experienced
events. Applied Cognitive Psychology, 11, 373397.
Stanners, R. F., Coulter, M., Sweet, A. W., & Murphy, P. (1979). The
pupillary response as an indicator of arousal and cognition. Motivation
and Emotion, 3, 319340.
Steller, M., & Kohnken, G. (1989). Criteria-Based Content Analysis. In
D. C. Raskin (Ed.), Psychological methods in criminal investigation and
evidence (pp. 217245). New York: Springer-Verlag.
Stern, J. A., & Dunham, D. N. (1990). The ocular system. In J. T. Cacioppo
& L. G. Tassinary (Eds.), Principles of psychophysiology: Physical,
social, and inferential elements (pp. 513553). Cambridge, England:
Cambridge University Press.
Stiff, J. B., Kim, H. J., & Ramesh, C. N. (1992). Truth biases and aroused
suspicion in relational deception. Communication Research, 19, 326
345.
*Stiff, J. B., & Miller, G. R. (1986). Come to think of it...: Interrog-
ative probes, deceptive communication, and deception detection. Human
Communication Research, 12, 339357.
*Streeter, L. A., Krauss, R. M., Geller, V., Olsen, C., & Apple, W. (1977).
Pitch changes during attempted deception. Journal of Personality and
Social Psychology, 35, 345350.
Tangney, J. P., Miller, R. S., Flicker, L., & Barlow, D. H. (1996). Are
shame, guilt, and embarrassment distinct emotions? Journal of Person-
ality and Social Psychology, 70, 12561269.
*Todd-Mancillas, W. R., & Kibler, R. J. (1979). A test of concurrent
validity for linguistic indices of deception. Western Journal of Speech
Communication, 43, 108122.
Trovillo, P. V. (1939). A history of lie detection. Journal of Criminal Law
and Criminology, 29, 848881.
Turner, R. E., Edgley, C., & Olmstead, G. (1975). Information control in
conversations: Honesty is not always the best policy. Kansas Journal of
Sociology, 11, 6989.
Undeutsch, U. (1989). The development of Statement Reality Analysis. In
J. C. Yuille (Ed.), Credibility assessment (pp. 101119). Dordrecht, the
Netherlands: Kluwer Academic.
Vallacher, R. R., & Wegner, D. M. (1987). What do people think they're
doing? Action identification and human behavior. Psychological Re-
view, 94, 315.
Vallacher, R. R., Wegner, D. M., McMahan, S. C., Cotter, J., & Larsen,
K. A. (1992). On winning friends and influencing people: Action iden-
tification and self-presentation success. Social Cognition, 10, 335355.
*Vrij, A. (1993). Credibility judgments of detectives: The impact of
nonverbal behavior, social skills, and physical characteristics on impres-
sion formation. Journal of Social Psychology, 133, 601610.
*Vrij, A. (1995). Behavioral correlates of deception in a simulated police
interview. Journal of Psychology, 129, 1528.
Vrij, A. (2000). Detecting lies and deceit. Chichester, England: Wiley.
*Vrij, A., Akehurst, L., & Morris, P. (1997). Individual differences in hand
movements during deception. Journal of Nonverbal Behavior, 21, 87
101.
Vrij, A., Edward, K., & Bull, R. (2001). Stereotypical verbal and nonverbal
responses while deceiving others. Personality and Social Psychology
Bulletin, 27, 899909.
Vrij, A., Edward, K., Roberts, K. P., & Bull, R. (2000). Detecting deceit via
analysis of verbal and nonverbal behavior. Journal of Nonverbal Behav-
ior, 24, 239264.
*Vrij, A., & Heaven, S. (1999). Vocal and verbal indicators of deception
as a function of lie complexity. Psychology, Crime, & Law, 5, 203215.
Vrij, A., & Mann, S. (2001). Telling and detecting lies in a high-stake
situation: The case of a convicted murderer. Applied Cognitive Psychol-
ogy, 15, 187203.
*Vrij, A., Semin, G. R., & Bull, R. (1996). Insight into behavior displayed
during deception. Human Communication Research, 22, 544562.
*Vrij, A., & Winkel, F. W. (1990/1991). The frequency and scope of
differences in nonverbal behavioral patterns: An observational study of
Dutch and Surinamese. In N. Bleichrodt & P. J. D. Drenth (Eds.),
Contemporary issues in cross-cultural psychology (pp. 120136). Am-
sterdam: Swets & Zeitlinger.
*Vrij, A., & Winkel, F. W. (1993). Objective and subjective indicators of
deception. In N. K. Clark & G. M. Stephenson (Eds.), Children, evi-
dence and procedure: Issues in criminological and legal psychology, 20,
5157.
*Wagner, H., & Pease, K. (1976). The verbal communication of inconsis-
tency between attitudes held and attitudes expressed. Journal of Person-
ality, 44, 115.
Walczyk, J. J., Roper, K., Seeman, E., & Humphrey, A. M. (in press).
Cognitive mechanisms underlying lying. Criminal Justice and Behavior.
Wallbott, H. G., & Scherer, K. R. (1991). Stress specificities: Differential effects of coping style, gender, and type of stressor on autonomic arousal, facial expression, and subjective feeling. Journal of Personality and Social Psychology, 61, 147–156.
Wegner, D. M. (1994). Ironic processes of mental control. Psychological Review, 101, 34–52.
Wegner, D. M., Erber, R., & Zanakos, S. (1993). Ironic processes in the mental control of mood and mood-related thought. Journal of Personality and Social Psychology, 65, 1093–1104.
*Weiler, J., & Weinstein, E. (1972). Honesty, fabrication, and the enhancement of credibility. Sociometry, 35, 316–331.
Wicklund, R. A. (1982). How society uses self-awareness. In J. Suls (Ed.), Psychological perspectives on the self (Vol. 1, pp. 209–230). Hillsdale, NJ: Erlbaum.
Wiener, M., & Mehrabian, A. (1968). Language within language: Immediacy, a channel in verbal communication. New York: Appleton-Century-Crofts.
Wine, J. D. (1971). Test anxiety and direction of attention. Psychological Bulletin, 76, 92–104.
Yerkes, R. M., & Berry, C. S. (1909). The association reaction method of mental diagnosis. American Journal of Psychology, 20, 22–37.
Yuille, J. C. (Ed.). (1989). Credibility assessment. Dordrecht, the Netherlands: Kluwer Academic.
Yuille, J. C., & Cutshall, J. (1989). Analysis of statements of victims, witnesses, and suspects. In J. C. Yuille (Ed.), Credibility assessment (pp. 175–191). Dordrecht, the Netherlands: Kluwer Academic.
*Zaparniuk, J., Yuille, J. C., & Taylor, S. (1995). Assessing the credibility of true and false statements. International Journal of Law and Psychiatry, 18, 343–352.
Zivin, G. (1982). Watching the sands shift: Conceptualizing the development of nonverbal mastery. In R. S. Feldman (Ed.), Development of nonverbal behavior in children (pp. 63–98). New York: Springer-Verlag.
*Zuckerman, M., DeFrank, R. S., Hall, J. A., Larrance, D. T., & Rosenthal, R. (1979). Facial and vocal cues of deception and honesty. Journal of Experimental Social Psychology, 15, 378–396.
Zuckerman, M., DePaulo, B. M., & Rosenthal, R. (1981). Verbal and nonverbal communication of deception. In L. Berkowitz (Ed.), Advances in experimental social psychology (Vol. 14, pp. 1–59). New York: Academic Press.
Zuckerman, M., DePaulo, B. M., & Rosenthal, R. (1986). Humans as deceivers and lie detectors. In P. D. Blanck, R. Buck, & R. Rosenthal (Eds.), Nonverbal communication in the clinical context (pp. 13–35). University Park: Pennsylvania State University Press.
Zuckerman, M., & Driver, R. E. (1985). Telling lies: Verbal and nonverbal correlates of deception. In A. W. Siegman & S. Feldstein (Eds.), Multichannel integrations of nonverbal behavior (pp. 129–147). Hillsdale, NJ: Erlbaum.
*Zuckerman, M., Driver, R., & Koestner, R. (1982). Discrepancy as a cue to actual and perceived deception. Journal of Nonverbal Behavior, 7, 95–100.
*Zuckerman, M., Kernis, M. R., Driver, R., & Koestner, R. (1984). Segmentation of behavior: Effects of actual deception and expected deception. Journal of Personality and Social Psychology, 46, 1173–1182.
Appendix A
Definitions of Cues to Deception
Cue Definition
Are Liars Less Forthcoming Than Truth Tellers?
001 Response length Length or duration of the speaker's message
002 Talking time Proportion of the total time of the interaction that the speaker spends
talking or seems talkative
003 Length of interaction Total duration of the interaction between the speaker and the other person
004 Details Degree to which the message includes details such as descriptions of
people, places, actions, objects, events, and the timing of events;
degree to which the message seemed complete, concrete, striking, or
rich in details
005 Sensory information (RM) Speakers describe sensory attributes such as sounds and colors
006 Cognitive complexity Use of longer sentences (as indexed by mean length of the sentences),
more syntactically complex sentences (those with more subordinate
clauses, prepositional phrases, etc.), or sentences that include more
words that precede the verb (mean preverb length); use of the words
but or yet; use of descriptions of people that are differentiating and
dispositional
007 Unique words Type–token ratio; total number of different or unique words (see the sketch following this section)
008 Blocks access to information Attempts by the communicator to block access to information, including,
for example, refusals to discuss certain topics or the use of unnecessary
connectors (then, next, etc.) to pass over information (The volunteering
of information beyond the specific information that was requested was
also included, after being reversed.)
009 Response latency Time between the end of a question and the beginning of the speaker's
answer
010 Rate of speaking Number of words or syllables per unit of time
011 Presses lips (AU 23, 24) Lips are pressed together
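Several cues in the block above are defined as simple counts or rates: response length (cue 001), the type–token ratio (cue 007), and rate of speaking (cue 010). The sketch below shows one plausible word-based operationalization of these indices for a single transcribed message; the tokenizer, the timing convention, and the function name are illustrative assumptions rather than the coding procedures used in the primary studies.

```python
# Illustrative sketch only: crude word-level proxies for cues 001 (response
# length), 007 (type-token ratio), and 010 (rate of speaking). The primary
# studies used their own transcription and timing rules.
import re

def verbal_quantity_measures(transcript: str, duration_seconds: float) -> dict:
    """Return simple verbal-quantity indices for one spoken message."""
    words = re.findall(r"[a-z']+", transcript.lower())
    n_words = len(words)
    n_unique = len(set(words))
    return {
        "response_length_words": n_words,                              # cue 001
        "type_token_ratio": (n_unique / n_words) if n_words else 0.0,  # cue 007
        "words_per_minute": 60.0 * n_words / duration_seconds,         # cue 010
    }

# Example: a 12-second answer
print(verbal_quantity_measures(
    "Well, I was at home all evening, I think, just watching TV.", 12.0))
```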
Do Liars Tell Less Compelling Tales Than Truth Tellers?
012 Plausibility Degree to which the message seems plausible, likely, or believable
013 Logical structure (CBCA) Consistency and coherence of statements; collection of different and
independent details that form a coherent account of a sequence of
events (Zaparniuk, Yuille, & Taylor, 1995, p. 344)
014 Discrepant, ambivalent Speakers' communications seem internally inconsistent or discrepant;
information from different sources (e.g., face vs. voice) seems
contradictory; speaker seems to be ambivalent
015 Involved, expressive (overall) Speaker seems involved, expressive, interested
016 Verbal and vocal involvement Speakers describe personal experiences, or they describe events in a
personal and revealing way; speakers seem vocally expressive and
involved
017 Facial expressiveness Speaker's face appears animated or expressive
018 Illustrators Hand movements that accompany speech and illustrate it
019 Verbal immediacy Linguistic variations called verbal nonimmediacy devices, described by
Wiener and Mehrabian (1968) as indicative of speakers' efforts to
distance themselves from their listener, the content of their
communications, or the act of conveying those communications.
Wiener and Mehrabian (1968) described 19 categories and
subcategories, such as spatial nonimmediacy (e.g., "There's Johnny" is
more nonimmediate than "Here's Johnny"), temporal nonimmediacy
(the present tense is more immediate than other tenses), and passivity
(the passive voice is more nonimmediate than the active voice).
020 Verbal immediacy, temporal A subcategory of verbal immediacy in which speakers use the present
tense instead of past or future tenses
021 Generalizing terms Generalizing terms (sometimes called levelers) such as everyone, no one,
all, none, and every; statements implying that unspecified others agree
with the speaker
022 Self-references Speakers' references to themselves or their experiences, usually indexed
by the use of personal pronouns such as I, me, mine, and myself
023 Mutual and group references Speakers' references to themselves and others, usually indexed by the use
of first-person plural pronouns such as we, us, and ours
024 Other references Speakers' references to others or their experiences, usually indexed by the
use of third-person pronouns such as he, she, they, or them
025 Verbal and vocal immediacy
(impressions) Speakers respond in ways that seem direct, relevant, clear, and personal
rather than indirect, distancing, evasive, irrelevant, unclear, or
impersonal
026 Nonverbal immediacy Speakers are nonimmediate when they maintain a greater distance from
the other person, lean away, face away, or gaze away, or when their
body movements appear to be nonimmediate.
027 Eye contact Speaker looks toward the other person's eyes, uses direct gaze
028 Gaze aversion Speakers look away or avert their gaze
029 Eye shifts Eye movements or shifts in the direction of focus of the speaker's eyes
030 Tentative constructions Verbal hedges such as "may," "might," "could," "I think," "I guess," and
"it seems to me" (Absolute verbs, which include all forms of the verb
to be, were included after being reversed.)
031 Verbal and vocal uncertainty
(impressions) Speakers seem uncertain, insecure, or not very dominant, assertive, or
emphatic; speakers seem to have difficulty answering the question
032 Amplitude, loudness Intensity, amplitude, or loudness of the voice
033 Chin raise (AU 17) Chin is raised; chin and lower lip are pushed up
034 Shrugs Up and down movement of the shoulders; or, the palms of the hand are
open and the hands are moving up and down
035 Non-ah speech disturbances Speech disturbances other than "ums," "ers," and "ahs," as described by
Kasl and Mahl (1965); categories include grammatical errors,
stuttering, false starts, incomplete sentences, slips of the tongue, and
incoherent sounds
036 Word and phrase repetitions Subcategory of non-ah speech disturbances in which words or phrases are
repeated with no intervening pauses or speech errors
037 Silent pauses Unfilled pauses; periods of silence
038 Filled pauses Pauses filled with utterances such as "ah," "um," "er," "uh," and
"hmmm" (see the sketch following this section)
039 Mixed pauses Silent and filled pauses (undifferentiated)
040 Mixed disturbances (ah plus
non-ah) Non-ah speech disturbances and filled pauses (undifferentiated)
041 Ritualized speech Vague terms and clichés such as "you know," "well," "really," and
"I mean"
042 Miscellaneous dysfluencies Miscellaneous speech disturbances; speech seems dysfluent
043 Body animation, activity Movements of the head, arms, legs, feet, and/or postural shifts or leans
044 Postural shifts Postural adjustments, trunk movements, or repositionings of the body
045 Head movements
(undifferentiated) Head movements (undifferentiated)
046 Hand movements
(undifferentiated) Hand movements or gestures (undifferentiated)
047 Arm movements Movements of the arms
048 Foot or leg movements Movements of the legs and/or feet
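Two of the cues above, tentative constructions (cue 030) and filled pauses (cue 038), are defined by example word lists, so a simple counting routine can make them concrete. The sketch below is illustrative only: the word lists are copied from the definitions and the regular-expression tokenization is an assumption, not the original coding scheme.

```python
# Illustrative sketch only: rough counts of filled pauses (cue 038) and
# tentative constructions / hedges (cue 030) in a transcript. Word lists
# come from the definitions above and are not exhaustive.
import re

FILLED_PAUSES = {"ah", "um", "er", "uh", "hmmm"}
HEDGE_PATTERNS = [r"\bmay\b", r"\bmight\b", r"\bcould\b",
                  r"\bi think\b", r"\bi guess\b", r"\bit seems to me\b"]

def count_filled_pauses(transcript: str) -> int:
    tokens = re.findall(r"[a-z']+", transcript.lower())
    return sum(token in FILLED_PAUSES for token in tokens)

def count_hedges(transcript: str) -> int:
    text = transcript.lower()
    return sum(len(re.findall(pattern, text)) for pattern in HEDGE_PATTERNS)

statement = "Um, I think it was, uh, maybe around nine, it seems to me."
print(count_filled_pauses(statement), count_hedges(statement))  # 2 filled pauses, 2 hedges
```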
Are Liars Less Positive and Pleasant Than Truth Tellers?
049 Friendly, pleasant (overall) Speaker seems friendly, pleasant, likable (Impressions of negative affect
were also included after being reversed.)
050 Cooperative Speaker seems cooperative, helpful, positive, and secure
051 Attractive Speaker seems physically attractive
052 Negative statements and
complaints Degree to which the message seems negative or includes negative
comments or complaints (Measures of positive comments were
included after being reversed.)
053 Vocal pleasantness Voice seems pleasant (e.g., positive, friendly, likable)
054 Facial pleasantness Speaker's face appears pleasant; speakers show more positive facial
expressions (such as smiles) than negative expressions (such as frowns
or sneers)
055 Head nods Affirmative head nods; vertical head movements
056 Brow lowering (AU 4) Eyebrows are lowered
057 Sneers (AU 9, 10) Upper lip is raised
058 Smiling (undifferentiated) Smiling as perceived by the coders, who were given no specific definition
or were given a definition not involving specific AUs (e.g., corners of
the mouth are pulled up); laughing is sometimes included too
059 Lip corner pull (AU 12) Corners of the lips are pulled up and back
Are Liars More Tense Than Truth Tellers?
060 Eye muscles (AU 6), not
during positive emotions Movement of the orbicularis oculi, or muscles around the eye, during
emotions that are not positive
061 Nervous, tense (overall) Speaker seems nervous, tense; speaker makes body movements that seem
nervous
062 Vocal tension Voice sounds tense, not relaxed; or, vocal stress as assessed by the
Psychological Stress Evaluator, which measures vocal micro-tremors,
or by the Mark II voice analyzer
063 Frequency, pitch Voice pitch sounds high; or, fundamental frequency of the voice (see the sketch following this section)
064 Relaxed posture Posture seems comfortable, relaxed; speaker is leaning forward or
sideways
065 Pupil dilation Pupil size, usually measured by a pupillometer
066 Blinking (AU 45) Eyes open and close quickly
067 Object fidgeting Speakers are touching or manipulating objects
068 Self-fidgeting Speakers are touching, rubbing, or scratching their body or face
069 Facial fidgeting Speakers are touching or rubbing their faces or playing with their hair
070 Fidgeting (undifferentiated) Object fidgeting and/or self-fidgeting and/or facial fidgeting
(undifferentiated)
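Cue 063 refers to the fundamental frequency (pitch) of the voice. As a concrete illustration, the sketch below estimates fundamental frequency for one short audio frame by picking the peak of the autocorrelation function; the primary studies used dedicated pitch-analysis equipment and software, and the search range, frame handling, and function name here are assumptions.

```python
# Illustrative sketch only: a crude autocorrelation-based estimate of
# fundamental frequency (cue 063) for a single mono speech frame.
import numpy as np

def estimate_f0(frame: np.ndarray, sample_rate: int,
                fmin: float = 75.0, fmax: float = 300.0) -> float:
    """Return an estimate of the fundamental frequency in Hz."""
    frame = frame - frame.mean()
    # Autocorrelation for non-negative lags
    corr = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lag_min = int(sample_rate / fmax)   # smallest plausible pitch period
    lag_max = int(sample_rate / fmin)   # largest plausible pitch period
    lag = lag_min + int(np.argmax(corr[lag_min:lag_max]))
    return sample_rate / lag

# Example: a synthetic 150 Hz tone sampled at 16 kHz
sr = 16000
t = np.arange(0, 0.05, 1 / sr)
print(round(estimate_f0(np.sin(2 * np.pi * 150 * t), sr), 1))  # close to 150 Hz
```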
Do Lies Include Fewer Ordinary Imperfections and Unusual Contents Than Do Truths?
071 Unstructured production
(CBCA) Narratives are presented in an unstructured fashion, free from an
underlying pattern or structure (Zaparniuk et al., 1995, p. 344)
072 Spontaneous corrections
(CBCA) Spontaneous correction of one's statements (Zaparniuk et al., 1995, p.
344)
073 Admitted lack of memory,
unspecified (CBCA) Admission of lack of memory
074 Self-doubt (CBCA) Raising doubts about one's own testimony; raising objections to the
accuracy of recalled information (Zaparniuk et al., 1995, p. 344)
075 Self-deprecation (CBCA) Inclusion of unfavorable, self-incriminating details (Zaparniuk et al.,
1995, p. 344)
076 Contextual embedding
(CBCA) Statements that place the event within its spatial and temporal context
(Zaparniuk et al., 1995, p. 344)
077 Verbal and nonverbal
interactions (CBCA) Verbatim reproduction of dialogue and descriptions of interrelated
actions and reactions (Zaparniuk et al., 1995, p. 344)
078 Unexpected complications
(CBCA) The reporting of either an unforeseen interruption or difficulty, or
spontaneous termination of the event (Zaparniuk et al., 1995, p. 344)
079 Unusual details (CBCA) Inclusion of detail that is not unrealistic, but has a low probability of
occurrence (Zaparniuk et al., 1995, p. 344)
080 Superfluous details (CBCA) Vivid and concrete descriptions of superfluous details (Zaparniuk et al.,
1995, p. 344)
081 Related external associations
(CBCA) Reference to events or relationships that are external to the event of
immediate focus (Zaparniuk et al., 1995, p. 344)
082 Another's mental state
(CBCA) Statements inferring the cognitive and emotional state of others involved
in the event (Zaparniuk et al., 1995, p. 344)
083 Subjective mental state
(CBCA) Accounts of the witness's own cognitive and emotional state at the time
of the event (Zaparniuk et al., 1995, p. 344)
Cues Listed in Appendix B (a)
084 Number of segments Perceived number of behavioral units
085 Idiosyncratic information
(RM) Speakers mention idiosyncratic information
086 Facial shielding Speakers appear to be shielding their face
087 Realism (RM) The story is realistic and makes sense
088 Intensity of facial expression Speaker's facial expression appears to be intense; rated intensity of AUs
089 Face changes Changes in facial expressions; onset, offset, and apex phases; face seems
mobile
090 Indifferent, unconcerned Speaker seems indifferent, unconcerned
091 Seems planned, not
spontaneous Message seems planned or rehearsed
092 Cognitively busy Speaker seems to be making mental calculations
093 Serious Speaker seems serious, formal
094 Pitch variety Variation in fundamental frequency
095 Pitch changes Frequency of changes in the pitch of the voice
096 Rate change Rate of speaking in the second half of the message minus rate of
speaking in the first half
097 Loudness variety Standard deviation of amplitude
098 Clarity (RM) Clarity and vividness of the statement (Vrij, 2000, p. 160)
099 Reconstructability (RM) The event can be reconstructed with the information given
100 Cognitive processes (RM) Descriptions of inferences made by the participant at the time of the
event (Vrij, 2000, p. 160)
101 Modifiers A subcategory of verbal nonimmediacy in which speakers qualify their
responses (e.g., sometimes) or objectify them (e.g., it is obvious)
102 Verbally distal versus proximal Ratio of distal (nonimmediacy) indices to proximal (immediacy) indices
103 Pronoun and tense deletion Deviations from the use of the first person and the past tense
104 Facial immediacy (eye
contact, head orientation) Speaker is facing the other person and gazing at that person; speaker's
face seems direct and intense
105 Direct orientation Degree to which the body and head were directly oriented to the other
person
106 Proximity Speaker seems to be in close physical proximity to the other person
107 Sentence changes Subcategory of non-ah speech disturbances in which the flow of the
sentence is interrupted by a correction in the form or content (e.g.,
"Well she's . . . already she's lonesome"; "That was . . . it will be 2
years ago in the fall"; Mahl, 1987, p. 167)
108 Stutters Subcategory of non-ah speech disturbances in which the speaker stutters
109 Intruding sounds Subcategory of non-ah speech disturbances in which the speaker makes
intruding sounds that are totally incoherent and are not stutters
110 Subset of non-ah Subset of non-ah speech disturbances (interrupted words and repeated
words)
111 Interruptions Interruptions; simultaneous talk that results in a change in turns
112 Filled pause length Duration of filled pauses
113 Unfilled pause length Duration of unfilled pauses
114 Specific hand and arm
movements Hand movements that do not include arm movements and finger
movements that do not include hand movements
115 Competent Speaker's performance seems successful; speaker manages the
conversation smoothly; speaker makes a good impression
116 Ingratiation Speakers' use of tactics of ingratiation, such as agreeing with others'
opinions or values, expressing approval of others, or revealing their
own values that are relevant to the conversational context
117 Genuine smile (AU 6) Movement of the muscles around the eye, orbicularis oculi, as well as the
zygomatic major, during positive emotions
118 Feigned smile Masking smiles involving the action of the zygomatic major and muscle
movements associated with emotions that are not positive ones;
incomplete smiling that appears masked or unnatural
119 Head shakes Negative head shakes; side-to-side head movements
120 Mouth asymmetry Mouth is asymmetrical
121 Relaxed face Speakers appear to show nervous facial movements (reversed)
122 Hand, arm, and leg relaxation Hands or legs are asymmetrical; hands are relaxed
123 Admitted uncertainties Qualifying descriptions by expressions of uncertainty such as "I'm not
sure but" or "at least I believe it was like that"
124 Details misunderstood
(CBCA) Inclusion of actions and details that are not understood by the witness
but may be understood by the interviewer (Zaparniuk et al., 1995, p.
344)
125 Pardoning the perpetrator
(CBCA) Providing explanations or rationalizations for the offender's actions
(Zaparniuk et al., 1995, p. 344)
126 Self-interest statements Speakers' references to benefits to themselves (References to benefits to
others were also included, after being reversed.)
127 Issue-related reporting style Speakers' description stays on topic
128 Reasons for lack of memory Speakers describe reasons for inability to provide a complete description
129 Brow raise (AU 1, 2) Inner (AU 1) or outer (AU 2) corner of the brow is raised
130 Lip stretch (AU 20) Lips are stretched sideways
131 Eyes closed (AU 43) Eyes are closed
132 Lips apart (AU 25) Lips are relaxed, parted slightly, as jaws remain closed
133 Jaw drop (AU 26) Jaw is dropped open
134 Mentions responsibility All mentions of responsibility for behavior, including accepting
responsibility, blaming others, offering excuses or justifications, or
denying participation in the behavior
135 Claims qualifications and
truthfulness Speakers' explicit claims that they have the necessary qualifications or
that they are telling the truth
136 Extreme descriptions Speakers' use of extreme descriptions of others (e.g., "the most
aggressive person I know," "extremely intelligent")
137 Neutral descriptions Speakers' use of evaluatively neutral descriptions
138 Hypothetical statements Speakers' references to conditions that did not currently exist but might
exist in the future
139 Nonsensory-based words Words referring to concepts not verifiable by the senses, such as love,
accidentally, interesting, and dishonesty
140 Provides standard description Speaker provides a description in a standard way (as instructed)
(Modifications of the standard description were included after being
reversed.)
141 Ratio of conclusion to
introduction Ratio of the number of words in the conclusion of a story to the number
of words in the introduction
142 Repetition of story elements Aspects of the story that were previously described are repeated without
elaboration
143 Comments and interpretations Speakers comment on others involved in an event or interpret the event
144 Eye blink latency Time until the first eye blink
145 Eye flutters "A barely discernible movement of the eyes in which, without fully
breaking eye contact, the eyes 'jiggle'" (Hocking & Leathers, 1980, p.
127)
146 Eyelids tight (AU 7) Eyelids are tightened
147 Eyelids droop (AU 41) Eyelids are drooping
148 Lip pucker (AU 18) Mouth is pushed forward in such a way that the lips pucker
149 Tongue out (AU 19) Speaker's tongue is out
150 Duration of facial expression Total duration of a facial expression
151 Hands together Speakers' hands are clasped, folded, or otherwise touching or resting on
their lap
152 Hands apart Each hand rests separately on a different part of the body
153 Emblems Hand movements with direct verbal translations
154 Changes in foot movements Changes in the number of foot or leg movements over time (absolute
value)
155 Pupillary changes Changes in pupil size
156 Biting lips Speakers are biting their lips
157 Facial reaction time Time until the first facial movement
158 Neck muscles tightened Neck muscles (typically the platysma muscle) are tightened
Note. RM = reality monitoring; AU = facial action unit (as categorized by Ekman & Friesen, 1978); CBCA = Criteria-Based Content Analysis.
(a) Any given cue is included in Tables 3–7 only if there are at least three independent estimates of it, at least two of which could be calculated precisely. All other cues are reported in Appendix B.
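The footnote above states the rule used to decide whether a cue appears in Tables 3–7 or only in Appendix B (at least three independent estimates, at least two of them computable precisely). A minimal sketch of that rule applied to a per-cue summary table follows; the column names and example rows are hypothetical and are not taken from the article's data.

```python
# Illustrative sketch only: the Tables 3-7 inclusion rule from the footnote
# (k1 >= 3 total estimates and k2 >= 2 precise estimates) applied to a
# made-up summary table. Column names and rows are hypothetical.
import pandas as pd

cue_summary = pd.DataFrame({
    "cue": ["hypothetical cue A", "hypothetical cue B", "hypothetical cue C"],
    "k1_total_estimates": [7, 2, 3],
    "k2_precise_estimates": [5, 1, 1],
})

meets_rule = (cue_summary["k1_total_estimates"] >= 3) & (
    cue_summary["k2_precise_estimates"] >= 2)

print(cue_summary.loc[meets_rule, "cue"].tolist())    # would appear in Tables 3-7
print(cue_summary.loc[~meets_rule, "cue"].tolist())   # would be reported in Appendix B
```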
Received June 16, 2000
Revision received June 4, 2002
Accepted June 4, 2002
Appendix B
Cues Based on a Small Number of Estimates (Organized by Category of the
Self-Presentational Perspective) and Miscellaneous Cues
Cue k1 k2 d CI
Are Liars Less Forthcoming Than Truth Tellers?
084 Number of segments 1 1 0.47* 0.73, 0.20
085 Idiosyncratic information 2 0 0.01 0.41, 0.43
086 Facial shielding 4 0 0.00 0.35, 0.35
Do Liars Tell Less Compelling Tales Than Truth Tellers?
087 Realism 1 1 0.42* 0.74, 0.10
088 Intensity of facial expression 2 2 0.32* 0.52, 0.12
089 Face changes 7 1 0.06 0.24, 0.11
090 Indifferent, unconcerned 2 2 0.59* 0.31, 0.87
091 Seems planned, not
spontaneous 2 1 0.35* 0.05, 0.65
092 Cognitively busy 1 1 0.61 0.14, 1.36
093 Serious 4 0 0.00 0.35, 0.35
094 Pitch variety 2 1 0.12 0.15, 0.39
095 Pitch changes 1 1 0.42 0.16, 0.68
096 Rate change 1 1 0.12 0.19, 0.43
097 Loudness variety 1 0 0.00 0.35, 0.35
098 Clarity 1 0 0.01 0.32, 0.30
099 Reconstructability 1 0 0.01 0.32, 0.30
100 Cognitive processes 1 0 0.01 0.30, 0.32
101 Modifiers 1 1 0.52* 1.03, 0.01
102 Verbally distal versus proximal 1 1 0.10 0.63, 0.43
103 Pronoun and tense deletion 1 1 0.24 0.27, 0.75
104 Facial immediacy (eye
contact, head orientation) 2 1 0.13 0.04, 0.29
105 Direct orientation 2 1 0.20* 0.38, 0.01
106 Proximity 1 0 0.00 0.25, 0.25
107 Sentence changes 1 1 0.35 0.15, 0.85
108 Stutters 1 1 0.22 0.28, 0.72
109 Intruding sounds 1 1 0.16 0.34, 0.65
110 Subset of non-ah 1 1 0.38* 0.01, 0.74
111 Interruptions 3 0 0.01 0.24, 0.25
112 Filled pause length 1 0 0.01 0.36, 0.34
113 Silent pause length 1 0 0.00 0.35, 0.35
114 Specific hand and arm movements 2 2 0.36* 0.54, 0.17
Are Liars Less Positive and Pleasant Than Truth Tellers?
115 Competent 3 1 0.08 0.39, 0.22
116 Ingratiation 1 0 0.00 0.49, 0.49
117 Genuine smile 2 2 0.70* 0.97, 0.43
118 Feigned smile 2 1 0.31 0.00, 0.63
119 Head shakes 5 1 0.12 0.27, 0.03
120 Mouth asymmetry 1 1 0.14 0.79, 1.07
Are Liars More Tense Than Truth Tellers?
121 Relaxed face 1 1 0.29 0.68, 0.10
122 Hand, arm, and leg relaxation 1 1 0.26 1.19, 0.68
Note. Cue numbers are of the cues described in the current article as indexed in Appendix A. All independent effect sizes (ds) were statistically significant. k1 = total number of ds; k2 = number of ds that could be estimated precisely; CI = 95% confidence interval.
* p < .05.
Do Lies Include Fewer Ordinary Imperfections and
Unusual Contents Than Do Truths?
123 Admitted uncertainties 2 1 0.63* 1.00, 0.25
124 Details misunderstood 2 2 0.22 0.62, 0.18
125 Pardoning the perpetrator 1 1 0.00 0.62, 0.62
126 Self-interest statements 1 1 0.32 0.64, 0.01
127 Issue-related reporting style 1 1 0.87* 1.40, 0.34
128 Reasons for lack of memory 1 1 0.75* 1.28, 0.22
Miscellaneous Cues
129 Brow raise 5 5 0.01 0.10, 0.13
130 Lip stretch 4 4 0.04 0.15, 0.08
131 Eyes closed 3 3 0.06 0.19, 0.07
132 Lips apart 5 4 0.08 0.19, 0.03
133 Jaw drop 3 2 0.00 0.14, 0.14
134 Mentions responsibility 2 2 0.34* 0.13, 0.55
135 Claims qualifications and
truthfulness 1 1 0.00 0.50, 0.50
136 Extreme descriptions 1 1 0.16 0.47, 0.15
137 Neutral descriptions 1 1 0.26 0.06, 0.58
138 Hypothetical statements 1 1 0.08 0.24, 0.40
139 Non-sensory based words 1 0 0.00 0.44, 0.44
140 Provides standard description 1 1 0.18 0.19, 0.55
141 Ratio of conclusion to
introduction 1 1 0.12 0.39, 0.63
142 Repetition of story elements 1 1 0.65* 1.17, 0.13
143 Comments and interpretations 1 1 0.14 0.65, 0.37
144 Eye blink latency 2 2 0.21 0.01, 0.44
145 Eye flutters 1 1 0.08 0.57, 0.42
146 Eyelids tight 2 1 0.02 0.19, 0.15
147 Eyelids droop 2 1 0.09 0.14, 0.32
148 Lip pucker 2 1 0.08 0.25, 0.09
149 Tongue out 2 1 0.16 0.40, 0.07
150 Duration of facial expression 2 0 0.00 0.22, 0.21
151 Hands together 2 2 0.21 0.66, 0.24
152 Hands apart 2 2 0.15 0.59, 0.29
153 Emblems 1 1 0.01 0.21, 0.23
154 Changes in foot movements 2 2 1.05* 0.60, 1.49
155 Pupillary changes 1 1 0.90* 0.17, 1.63
156 Biting lips 1 0 0.00 0.52, 0.52
157 Facial reaction time 1 1 0.49 0.09, 1.06
158 Neck muscles tightened 1 1 0.10 0.63, 0.43
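As a reminder of what the d and CI columns represent, the sketch below computes a standardized mean difference and an approximate 95% confidence interval for two independent groups, using the common large-sample standard error for d. The meta-analysis combined estimates with more elaborate weighting, and many primary studies used within-subject designs, so this illustrates the statistic rather than reproducing any tabled value; all numbers in the example are invented.

```python
# Illustrative sketch only: a standardized mean difference (d) and an
# approximate 95% CI for two independent groups (e.g., lie vs. truth
# conditions), with the lie condition entered first. All inputs are invented.
from math import sqrt

def cohens_d_with_ci(mean_lie, mean_truth, sd_pooled, n_lie, n_truth):
    d = (mean_lie - mean_truth) / sd_pooled
    # Common large-sample standard error of d for two independent groups
    se = sqrt((n_lie + n_truth) / (n_lie * n_truth)
              + d ** 2 / (2 * (n_lie + n_truth)))
    return d, (d - 1.96 * se, d + 1.96 * se)

d, ci = cohens_d_with_ci(mean_lie=4.1, mean_truth=3.6, sd_pooled=1.2,
                         n_lie=30, n_truth=30)
print(round(d, 2), tuple(round(x, 2) for x in ci))  # e.g., 0.42 (-0.09, 0.93)
```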