Why Do Lie-Catchers Fail?
A Lens Model Meta-Analysis of Human Lie Judgments
Maria Hartwig
John Jay College of Criminal Justice, City University of
New York
Charles F. Bond Jr.
Texas Christian University
Decades of research have shown that people are poor at detecting lies. Two explanations for this finding have been proposed. First, it has been suggested that lie detection is inaccurate because people rely on invalid cues when judging deception. Second, it has been suggested that lack of valid cues to deception limits accuracy. A series of 4 meta-analyses tested these hypotheses within the framework of Brunswik's (1952) lens model. Meta-Analysis 1 investigated perceived cues to deception by correlating 66 behavioral cues in 153 samples with deception judgments. People strongly associate deception with impressions of incompetence (r = .59) and ambivalence (r = .49). Contrary to self-reports, eye contact is only weakly correlated with deception judgments (r = −.15). Cues to perceived deception were then compared with cues to actual deception. The results show a substantial covariation between the 2 sets of cues (r = .59 in Meta-Analysis 2, r = .72 in Meta-Analysis 3). Finally, in Meta-Analysis 4, a lens model analysis revealed a very strong matching between behaviorally based predictions of deception and behaviorally based predictions of perceived deception. In conclusion, contrary to previous assumptions, people rarely rely on the wrong cues. Instead, limitations in lie detection accuracy are mainly attributable to weaknesses in behavioral cues to deception. The results suggest that intuitive notions about deception are more accurate than explicit knowledge and that lie detection is more readily improved by increasing behavioral differences between liars and truth tellers than by informing lie-catchers of valid cues to deception.
Keywords: deception judgments, subjective cues to deception, Brunswik’s lens model
Supplemental materials: http://dx.doi.org/10.1037/a0023589.supp
Human deception and its detection have long been of interest to
psychologists. Social psychological research has established that
lying is a common feature of everyday social interactions (Cole,
2001; Jensen, Arnett, Feldman, & Cauffman, 2004; see Serota,
Levine, & Boster, 2010, for a qualification of this finding): People
tell both self-oriented lies (e.g., to enhance socially desirable traits
and to escape punishment for transgressions) and other-oriented
lies (e.g., to protect others’ feelings from being hurt and to protect
social relationships; DePaulo & Kashy, 1998; DePaulo, Kashy,
Kirkendol, Wyer, & Epstein, 1996). Lying is thus an important
interpersonal phenomenon that serves the purpose of regulating
social life (Vrij, 2008). Deception has also attracted the attention
of applied psychologists because interpersonal judgments of cred-
ibility play an important role in several domains, including the
legal system (Granhag & Strömwall, 2004; Vrij, 2008).
One of the major findings from this research is that people are
poor at detecting lies: A meta-analysis of 206 studies showed an
average hit rate of 54%, which is hardly impressive given that
chance performance is 50% (Bond & DePaulo, 2006). Why is lie
detection prone to error? In the literature, two explanations have
been proposed (e.g., Vrij, 2008). First, it has been suggested that naïve lie detection is inaccurate because people have a false
stereotype about the characteristics of deceptive behavior and
therefore base their judgments on cues that are invalid. This
hypothesis (which we may call the wrong subjective cue hypoth-
esis) implies that errors in lie judgments are attributable to limi-
tations in social perception and impression formation and that lie
detection would be improved if perceivers relied on a different set
of cues. Second, meta-analyses of cues to deception show that
behavioral differences between truth tellers and liars are minute at
best (DePaulo et al., 2003; see also Sporer & Schwandt, 2006,
2007). In other words, there is no Pinocchio’s nose—no behavioral
sign that always accompanies deception (Vrij, 2008). Because the
behavioral differences between liars and truth tellers are small,
perceivers have little diagnostic material to rely on when attempt-
ing to establish veracity. This view (which we may call the weak
objective cue hypothesis) suggests that the limitations of lie detec-
tion reside in the judgment task itself. In this article, we employ
Brunswik’s lens model to understand judgments of veracity
(Brunswik, 1952; Hursch, Hammond, & Hursch, 1964).
Brunswik’s (1952) lens model is a conceptual framework for
studying human predictions of criteria that are probabilistically
Maria Hartwig, Department of Psychology, John Jay College of Crim-
inal Justice, City University of New York; Charles F. Bond Jr., Department
of Psychology, Texas Christian University.
Thanks are due to Bella DePaulo, Joshua Freilich, Larry Heuer, Timothy
Luke, Jaume Masip, Steve Penrod, Sigi Sporer, Annelies Vredeveldt, and
Brian Wallace for comments on a draft of this article.
Correspondence concerning this article should be addressed to Maria Hartwig, Department of Psychology, John Jay College of Criminal Justice, 445 West 59th Street, New York, NY 10019. E-mail: mhartwig@jjay.cuny.edu
Psychological Bulletin, 2011, Vol. 137, No. 4, 643–659. © 2011 American Psychological Association. 0033-2909/11/$12.00. DOI: 10.1037/a0023589
related to cues (e.g., a physician making an assessment of the
likelihood that a patient has cancer on the basis of the patient’s
symptoms, a teacher’s assessment of a student’s scholastic abilities
based on the student’s performances in class, or a manager’s
judgment of job candidates on the basis of their behavior; Karelaia
& Hogarth, 2008). We draw on the available empirical data to
conduct a series of meta-analyses of judgment achievement as
defined by the lens model equation (Kaufmann & Athanasou,
2009). As we shall see, the lens model offers an analytic frame-
work that lets us put the two hypothesized explanations to a
quantitative test, by allowing for a statistical decomposition of
inaccuracy in lie detection into two components reflecting (a) limitations in the naïve use of cues to deception and (b) lack of validity
of objective cues to deception. In order to fully develop the
rationale for the current study, we provide an overview of the main
features of research on deception, after which we turn to the
application of the lens model to deception judgments.
Major Findings in Deception Research
Most research on deception is laboratory-based. In this research,
participants, typically college students, provide truthful or delib-
erately false statements (e.g., by purposefully distorting their atti-
tudes or events that they have witnessed or participated in). The
statements are subjected to various analyses including coding of
verbal and nonverbal characteristics. This allows for the mapping
of objective cues to deception—behavioral characteristics that
differ as a function of veracity. Also, the videotaped statements are
typically shown to other participants serving as lie-catchers who
are asked to make judgments about the veracity of the statements
they have seen. Across hundreds of such studies, people average
54% correct judgments, when guessing would yield 50% correct.
Meta-analyses show that accuracy rates do not vary greatly from
one setting to another (Bond & DePaulo, 2006) and that individ-
uals barely differ from one another in the ability to detect deceit
(Bond & DePaulo, 2008). Contrary to common expectations (Gar-
rido, Masip, & Herrero, 2004), presumed lie experts who routinely
assess credibility in their professional life do not perform better
than lay judges do (Bond & DePaulo, 2006). In sum, that lie
detection is a near-chance enterprise is a robust finding emerging
from decades of systematic research.
Subjective Versus Objective Cues to Deception
What is the reason for the near-chance performance of human
lie detection? To explain lack of accuracy, researchers have at-
tempted to map the decision making of lie-catchers by studying
subjective cues to deception (Strömwall, Granhag, & Hartwig,
2004). These are behaviors that are perceived by observers as signs
of deception. The most commonly employed method to study
subjective cues to deception is the survey approach, in which
people are asked to self-report on their beliefs about deceptive
behavior (Akehurst, Köhnken, Vrij, & Bull, 1996; Strömwall &
Granhag, 2003; Vrij & Semin, 1996; for a different approach, see
Zuckerman, Koestner, & Driver, 1981). In most of these studies,
respondents were provided with a list of verbal and nonverbal
behaviors and asked how, if at all, these behaviors are related to
deception (e.g., L. H. Colwell, Miller, Miller, & Lyons, 2006;
Lakhani & Taylor, 2003; Taylor & Hick, 2007). In most studies,
people are provided with a list of common subjective and objective
cues to deception, to investigate whether people express support
for subjective cues and whether they reject objective cues. In
addition to this closed-ended approach, some studies have em-
ployed an open-ended approach in which respondents are asked
what behavioral cues they associate with deception. Another way
of mapping subjective cues to deception is to ask lie-catchers in
laboratory studies to self-report the basis for their veracity judg-
ment (e.g., “I thought the person was lying because she was
stuttering”; see Strömwall et al., 2004).
The results from self-report studies on subjective cues to decep-
tion are remarkably consistent. Most commonly, people report the
belief that gaze aversion is indicative of deception. A worldwide
study surveyed beliefs about cues to deception in 58 countries and
found that in 51 of these, the belief in a link between gaze behavior
and deception was the most frequently reported (Global Deception
Research Team, 2006). People also report that increased body
movements, fidgeting, and posture changes are associated with
deceit, as well as a higher pitched voice and speech errors. This
pattern suggests that people expect liars to experience nervousness
and discomfort and that this nervousness is evident in behavior
(Vrij & Semin, 1996). However, there is a methodological limita-
tion to these studies that prevents us from concluding that people
make lie judgments based on these criteria: We cannot be certain
that the behaviors people report explicitly are those that best
capture their actual decision-making strategies (Nisbett & Wilson,
1977). As impression formation is partly automatic and implicit
(Bargh & Chartrand, 1999; Fiske & Taylor, 2008), it is quite
possible that people are unaware of the basis for their veracity
assessments and that self-reports reflect an explicit, conscious
stereotype of deceptive behavior that has little impact on actual
decision making. As we shall see, applying the lens model to
deception judgments allows us to go beyond self-reports to assess
the actual behavioral criteria that predict judgments of deception.
Do liars behave consistently with people’s notions of deceptive
behavior? Expressed differently, is there an overlap between sub-
jective beliefs about deceptive behavior and actual objective cues
to deception? Analyses of verbal and nonverbal behavior of liars
and truth tellers show that cues to deception are scarce and that
many subjective cues are unrelated to deception. A meta-analysis
covering 120 studies and 158 cues to deception showed that most
behaviors are only weakly related to deception, if at all (DePaulo
et al., 2003; see also DePaulo & Morris, 2004). Gaze aversion is
not a valid indicator of deception. The simple heuristic that liars
are more nervous is not supported by the meta-analysis because
many indicators of nervousness, such as fidgeting, blushing or
speech disturbances, are not systematically linked to deception.
The meta-analysis does suggest that liars might be more tense,
possibly as a function of operating under a heavier self-regulatory
burden: Their pupils are more dilated and their pitch of voice is
higher (DePaulo et al., 2003). The results also suggest that there
might be some verbal differences between liars and truth tellers:
Liars talk for a shorter time and include fewer details, compared
with truth tellers. Also, liars’ stories make less sense in that their
stories are somewhat less plausible and less logically structured.
It is not our intention to provide a comprehensive overview of
the available research on deception and its detection. For such
overviews, we direct the reader to recent meta-analyses by
DePaulo et al. (2003) and Bond and DePaulo (2006) and the
comprehensive review by Vrij (2008). The important point is that
research suggests two plausible explanations for why lie-catching
often fails. First, self-reports suggest a mismatch between subjec-
tive and objective cues to deception, meaning that people consis-
tently report relying on behaviors that are unrelated to deception.
Second, behavioral coding of lies and truths in laboratory research
suggests that there is a scarcity of objective cues to deception,
making the judgment task intrinsically error prone. How do we
know which of these explanations fits the data best? The fact that
there is no answer to this question in the available literature
suggests that despite the vast body of empirical research, judg-
ments of deception are poorly understood. We aim to enhance
understanding by employing the lens model originally outlined by
Brunswik (1952), a method of analysis that has proven fruitful for
understanding human judgments in a wide range of areas (Hogarth
& Karelaia, 2007; Juslin, 2000; Karelaia & Hogarth, 2008; Kauf-
mann & Athanasou, 2009). In contrast to previous research in
which researchers have studied either the characteristics of decep-
tive and truthful behavior or the characteristics of judgments of
deception, employing the lens model allows us to study the inter-
play between the characteristics of the judgment task and perceiver
performance (e.g., Juslin, 2000). A few previous studies have
employed the lens model to study judgments of deception (Fiedler
& Walka, 1993; Sporer, 2007; Sporer & Kupper, 1995). We build
on and extend this work by offering a synthesis of the available
literature on deception judgments using the framework of the lens
model. We aim to address three main questions. First, what cues do
people use when judging deception (Meta-Analysis 1)? Second, is
there a lack of overlap between subjective and objective cues to
deception (Meta-analyses 2 and 3)? Third, is inaccuracy mainly
due to incorrect decision-making strategies or lack of valid cues to
deception (Meta-Analysis 4)?
Brunswik’s Lens Model
Within the theoretical framework of probabilistic functionalism,
Egon Brunswik (Brunswik, 1952; Petrinovich, 1979) proposed a
model to understand processes of human perception. The basic
assumption of probabilistic functionalism is that people exist in an
uncertain environment and that judgments and inferences about the
environment are therefore made on the basis of probabilistic data
(Brunswik, 1943, 1952; Hammond, 1996). Judgments of a crite-
rion are made on the basis of cues with different ecological
validities, where ecological validity is the correlation between the
cue and the distal variable to be predicted (Hursch, Hammond, &
Hursch, 1964). Also, cues differ in their use by a perceiver, where
cue utilization can be represented by the correlation between the
cue and the inference drawn by the perceiver. A person’s achieve-
ment or accuracy can be captured by the correlation between the
inference drawn and the distal variable. Since the lens model was
proposed, it has been expanded to capture not only perceptual
judgments but also a variety of cognitive processes including
learning (Summers & Hammond, 1966), clinical inference (Ham-
mond, Hursch, & Todd, 1964), interpersonal perception (Ham-
mond, Wilkins, & Todd, 1966), and personality attributions
(DeGroot & Gooty, 2009).
A main advantage of the lens model is its ability to model
judgment accuracy by taking into account both the decision maker
and the decision-making task. In the words of Karelaia and Hog-
arth (2008, p. 404), “The simple beauty of Brunswik’s lens model
lies in recognizing that the person’s judgment and the criterion
being predicted can be thought of as two separate functions of cues
available in the environment of the decision.” From this, it follows
that the accuracy of a person’s judgment will be a function of the
extent to which the criterion can be predicted from a set of cues,
as well as to what extent the cues used by a perceiver overlap with
the cues that predict the criterion. To illustrate this, consider a
musician who plays the same tune repeatedly but attempts to
convey different emotions (e.g., anger, sadness, happiness) each
time the song is played (see Juslin, 2000). How well can a listener
judge what emotion the musician is attempting to convey? The
judgment achievement of the listener (i.e., the correlation between
the performer’s intention and the listener’s judgment) will, accord-
ing to the lens model, be a function of the following: First, to what
extent are there valid cues to the performer’s intended emotion in
the tune being played? Second, to what extent can the perceiver’s
judgment be reliably predicted from cues? Third, to what extent
does the set of cues utilized by a perceiver to judge emotional
expression match those actually indicative of the performer’s emo-
tion? The lens model thus decomposes judgment inaccuracy into
components reflecting (a) lack of validity in objective cues to
emotions in the tune being played and (b) lack of overlap between
objective cues to emotion and subjective use of cues to predict
emotion on the basis of the tune being played. The lens model can
therefore provide both descriptive information to understand judg-
ment accuracy, and prescriptive information about how judgment
accuracy can be improved (Hogarth & Karelaia, 2007). For a
thorough discussion of the lens model, see Cooksey (1996).
A Lens Model of Deception Judgments
Let us now employ the reasoning outlined above to understand
accuracy in judgments of deception. In the current article, we do
not measure lie detection accuracy as percentage correct. Instead,
we measure accuracy in terms of a Pearson product–moment
correlation coefficient—the correlation between actual deception
and judgments of deception. For present purposes, this correla-
tional metric is superior to percentage correct. Unlike percentage
correct, it can accommodate results from the many studies of
deception in which participants render their judgments of truthful
and deceptive messages on Likert scales. The correlational metric
is also necessary for the implementation of a lens model of
deception judgments, as is now explained.
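To illustrate the metric, here is a minimal sketch in Python (all numbers invented, not taken from any study) showing that the same accuracy correlation can be computed whether receivers give binary lie–truth classifications or Likert-type deceptiveness ratings:

```python
# Illustrative sketch only: invented judgments of eight statements.
import numpy as np

actual = np.array([0, 0, 0, 0, 1, 1, 1, 1])       # 0 = truth, 1 = lie
ratings = np.array([2, 3, 4, 3, 5, 4, 6, 5])      # 1-7 deceptiveness ratings
binary = np.array([0, 0, 1, 0, 1, 0, 1, 1])       # lie/truth classifications

# Accuracy as a Pearson correlation works for either response format;
# for two binary variables it equals the phi coefficient.
print(np.corrcoef(actual, ratings)[0, 1])
print(np.corrcoef(actual, binary)[0, 1])
```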
A lens model of judgments of deception incorporates a commu-
nicator, behavioral cues, and a judge (see Figure 1). The commu-
nicator appears at the left of the figure, and cues appear in the
middle. The communicator will either lie or tell the truth, and the
communicator’s behaviors may function as cues indicating his or
her deceptiveness. Atop the line going from the communicator to
each cue, we would hope to place a validity coefficient—a statis-
tical measure of the extent (and direction) of the relation between
the communicator’s deceptiveness and that cue. Suppose, for ex-
ample, that the cue at the top of the figure is the amount of detail
in the communicator’s message. We have some idea of the validity
of that measure as a cue to deceptiveness. An earlier meta-analysis
by DePaulo et al. (2003) reveals a correlation between deceptive-
ness and number of details of −.20, with truthful messages being
more detailed than deceptive ones. The DePaulo et al. (2003)
meta-analysis provides correlation coefficients for 158 potential
cues to deception. We draw on these earlier meta-analytic data to
implement the left-hand side of our lens model.
The naïve detection of deception involves not just the communicator; it also involves a judge. In attempting to uncover deceit,
judges attend to cues. From certain of those cues, they infer
deception; from others, veracity. This process of decoding com-
municator behavior appears on the right-hand side of Figure 1.
There we have a judge, as well as lines emanating from cues
toward that judge. Our goal is to place atop each line a utilization
coefficient—that is, a measure of the extent and direction of the
relation between a cue and a judge's tendency to infer that the communicator is being deceptive. Again, suppose that the cue at the top of the figure is the number of details in a communicator's message. As reported below, perceivers tend to infer truthfulness from detailed communications; in fact, the relevant r with perceived deceptiveness is −.37. The similarity between this decoding coefficient (of −.37) and the corresponding encoding coefficient (of −.20) would suggest that perceivers enhance their accuracy in detecting deception insofar as they rely on message details as a judgment cue. More generally, accuracies (and inaccuracies) in naïve lie detection reflect the correspondence (and
noncorrespondence) between the validity of particular deception
cues and their utilization by judges.
Within this lens model framework, accuracies in human lie
detection can be statistically decomposed. To explain the decom-
position, we must introduce some notation. Suppose that we have
data on a number of potential deception cues. Suppose we enter
those cue variables into a multiple regression equation and use
them to predict communicator deceptiveness. Call our measure of
deceptiveness D. The resulting regression equation would yield a
statistical prediction of deceptiveness for each communicator (call
the predictions D′), and these predictions would be more (or less)
accurate. One measure of their accuracy is the Pearson product–
moment correlation between actual deceptiveness and statistical
predictions of deceptiveness (that is, between D and D′). Call this correlation coefficient R_Dec. It indicates the overall predictability of deception from our set of behavioral cues.
Figure 1. The communicator (C) is displayed to the left, and the judge (J) is displayed to the right. Behavioral cues (X) appear in the middle of the figure. Each cue is related to deception by a validity coefficient (r_v) and to deception judgments by a utilization coefficient (r_u), each represented by a Pearson's r. For example, assume that the cue at the top of the figure, X_1, is the number of details in a communicator's message. A previous meta-analysis by B. M. DePaulo et al. (2003) revealed a correlation between deceptiveness and number of details (r_v) of −.20, with truthful messages being more detailed than deceptive ones. In Meta-Analysis 1, we find that the number of details is associated with deception judgments with r_u = −.37, suggesting that judges (correctly) infer deception from a lack of details. Generally, the accuracy of the judge (i.e., the correlation between the judgment of deception and actual deception, represented in the figure by r_acc) will, according to the lens model, be a function of the following: First, to what extent are there valid cues to deception (the left side of the figure)? Second, to what extent can the perceiver's judgment be reliably predicted from cues (the right side of the figure)? Third, to what extent does the set of cues utilized by a perceiver to judge deception match those actually indicative of deception (the matching between the left and right side of the figure)?

Given appropriate data, we could set up a multiple regression equation for predicting judgments of communicator deceptiveness from the same behavioral cues. Let us call our measure of perceived deceptiveness P. Our regression equation would yield a prediction of deception judgment for each communicator. Let us call these predictions P′ and measure their accuracy by their correlation with actual judgments. The resulting correlation, which we denote R_Per, reflects the predictability of deception judgments
from a set of behavioral cues. Finally, it is of interest to compare
statistical predictions of deception with the corresponding predic-
tions of perceived deception. If behaviorally based predictions of
deception perfectly matched behaviorally based predictions of
deception judgment, the two sets of predictions would correlate
+1. If there was a perfect mismatch between the two sets of predictions, they would correlate −1. More generally, a quantifi-
cation of accuracy in the lens model depends on the so-called
matching index—the correlation coefficient between cue-based
predictions of deception and cue-based predictions of deception
judgment. Call this matching index G.
For purposes of the lens model, we measure the accuracy of deception judgments by a Pearson product–moment correlation coefficient—the r between judgments of deception and actual deception. Call this accuracy correlation r_acc. If we can assume that errors in predicting deception are uncorrelated with errors in predicting deception judgment, lie detection accuracy can be expressed as the product of three factors (Tucker, 1964):

r_acc = R_Dec × R_Per × G.    (1)
Thus, the accuracy of lie detection is the product of (a) the
predictability of a communicator’s deceptiveness from behavioral
cues, (b) the predictability of a communicator’s perceived decep-
tiveness from behavior cues, and (c) the matching of cue-based
predictions of deception with cue-based predictions of apparent
deception. To implement this lens model, we began by collecting
meta-analytic data on cues to deception judgment. These data are
of interest in their own right because there is no comprehensive
up-to-date synthesis of behavioral correlates of lie judgments in
the accumulated literature.
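To make the decomposition in Equation 1 concrete, the following is a minimal simulation sketch; it is not based on the article's data, and the sample size, cue weights, and noise levels are invented for illustration. It regresses a simulated deception measure (D) and simulated judgments (P) on the same cues and compares the observed accuracy correlation with the product R_Dec × R_Per × G:

```python
# A minimal simulation sketch (not from the article) of Equation 1.
import numpy as np

rng = np.random.default_rng(0)
n_senders, n_cues = 500, 5

X = rng.normal(size=(n_senders, n_cues))             # behavioral cues
w_true = rng.normal(size=n_cues)                     # how cues relate to deception
w_used = w_true + 0.5 * rng.normal(size=n_cues)      # how judges weight the cues

D = 0.3 * (X @ w_true) + rng.normal(size=n_senders)  # deception (weak cues)
P = 0.8 * (X @ w_used) + rng.normal(size=n_senders)  # perceived deception

def fitted(y, cues):
    """Least-squares prediction of y from the cues (with an intercept)."""
    design = np.column_stack([np.ones(len(cues)), cues])
    beta, *_ = np.linalg.lstsq(design, y, rcond=None)
    return design @ beta

corr = lambda a, b: np.corrcoef(a, b)[0, 1]
D_hat, P_hat = fitted(D, X), fitted(P, X)

R_Dec = corr(D, D_hat)      # predictability of deception from the cues
R_Per = corr(P, P_hat)      # predictability of judgments from the same cues
G = corr(D_hat, P_hat)      # matching index
r_acc = corr(D, P)          # observed judgment accuracy

# Because the two regressions' errors are independent here, r_acc is
# approximately the product R_Dec * R_Per * G.
print(round(r_acc, 3), round(R_Dec * R_Per * G, 3))
```

In this toy setup, the judges' cue weights only partly overlap with the valid weights, so G falls below 1 and accuracy is limited both by weak objective cues (small R_Dec) and by imperfect matching, which is exactly the distinction the four meta-analyses are designed to test.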
Meta-Analysis 1: Cues to Perceived Deception
The purpose of Meta-Analysis 1 is to identify behaviors that
covary with the degree to which a communicator is perceived as
deceptive. We do not assume that participants can accurately
report on the bases of their deception judgments—rather, the
accuracy of this reporting is a question to be empirically addressed.
For the identification of objective correlates of perceived decep-
tiveness, we consider studies in which people make judgments of
the veracity of a set of communicators and correlate a communi-
cator’s perceived deceptiveness with various aspects of the com-
municator's demeanor, speech, or behavior. To date, a number
of such reviews have been conducted. Here we consider several of
those reviews.
Zuckerman, DePaulo, and Rosenthal (1981) examined 13 stud-
ies on behaviors associated with perceived deception. These stud-
ies yielded data on the relation between deception judgments and
10 distinct behaviors that might be used to form those judgments.
Eight of the 10 behaviors in the studies were significantly related
to deception judgments. Deception was most strongly inferred
from high vocal pitch and from slow speech, each relation yielding
r = .32. Along with a companion meta-analysis, this review
indicated that behaviors are more strongly associated with per-
ceived deception than actual deception.
In an unpublished master’s thesis, Malone (2001) assessed re-
sults on cues to perceived deception from 69 independent samples.
These yielded data on the relation between deception judgments
and 136 potential judgment cues. Meta-analysis revealed that
many of the cues were in fact significantly related to deception
judgments. The strongest results indicated that judges attribute
deception to communicators who appear indifferent and unintelli-
gent, each relation yielding r = .56. More generally, hesitant,
fidgety communicators are judged to be deceptive; positive, con-
sistent, forthcoming communicators are judged to be truthful (Mal-
one, 2001). From a nonquantitative analysis, Malone concluded
that there is some overlap and some divergence between these cues
to deception judgment and cues to actual deception.
From a tabulation of significant and nonsignificant correlations
in 48 studies, Vrij (2008) drew conclusions about 26 behavioral
cues to perceived deception. Vrij (2008) concluded that people
infer deception from signs of nervousness, like speech errors,
pauses, and gaze aversion. They also infer deception from odd
behaviors, like excessive eye contact and abnormal response la-
tencies.
Although these earlier reviews have been informative, they do
not reflect all of the evidence on cues to perceived deception.
Malone’s (2001) thesis offers the most comprehensive literature
review to date. Unfortunately, his effort is unpublished, and it
draws conclusions from only 69 samples of senders. Here, we
identify cues to perceived deception from a larger database.
Method
Literature search procedures. To locate relevant studies,
we conducted computer-based searches of Psychological Abstracts, PsycInfo, PsycLit, Communication Abstracts, Dissertation Abstracts International, WorldCat, and Google using the keywords deception, deceit, and lie detection. We searched the Social
Sciences Citation Index for articles that cited key references (e.g.,
B. M. DePaulo & Rosenthal, 1979), examined reference lists from
previous reviews (Bond & DePaulo, 2006; DePaulo et al., 2003;
Malone, 2001; Vrij, 2008), and reviewed the references cited in
every article we found.
Criteria for inclusion of studies. Our goal was to summarize
all English-language reports of original research on cues to judg-
ment of deception available prior to January 2011. To be included
in this review, a document had to report the relation between
judgments of deception and at least one cue. For purposes of
implementing this criterion, we construed judgments of deception
broadly, to include the percentage of receivers who inferred that a
sender was lying (rather than telling the truth), the rating of a
sender on a multipoint scale of deceptiveness, and the ratings of
the sender’s honesty, trustworthiness, and believability. However,
we did not include in this review judgments of affect, even if the
affect being judged had been falsified. Although we included
studies in which children served as senders of truthful and decep-
tive messages, we did not include studies in which people under 16
years old served as receivers—leaving to developmental psychol-
ogists the task of understanding children’s deception judgments.
As possible cues to deception judgment, we included any be-
havior of the person being judged, any impression of the person
conveyed, and any aspect of the person’s demeanor or physical
appearance. We did not consider situational factors as cues to
deception judgment—the impact of situational factors on decep-
tion judgments having recently been summarized by Bond and
DePaulo (2006). We uncovered 128 documents that satisfied our
inclusion criteria.
Several features of this literature deserve comment. First, a
number of these documents reported more than one study of cues
to deception judgment. Second, there were a number of cases in
which a given sample of senders was judged by more than one
sample of receivers. For purposes of the current meta-analysis, the
unit of aggregation is the sender sample. Our analyses extract one
set of cue–judgment correlations from each independent sample of
senders—aggregating across multiple groups of receivers, when
necessary. From this literature, we extracted 153 independent
sender samples.
In these studies, researchers reported results on the relation of
deception judgments to 81 different cues. Fifteen of the cues were
examined in only one sample of senders (information about these is available from the first author). These were excluded from the
present study. The remaining 66 cues appear at the left of Table 1.
Seventy-five of the 81 cues appeared in an earlier meta-analysis of
cues to deception by DePaulo et al. (2003) and are more fully
described there. The six additional cues appear at the bottom of
Table 1.
Variables coded from each report. From each report, we
coded as many of the following variables as possible: (a) number
of senders, (b) number of receivers, (c) an accuracy correlation, (d)
at least one cue–judgment correlation, (e) an N for the cue–
judgment correlation, (f) the number of cue–judgment correlations,
and (g) a multiple-cue correlation for judgments. We coded the
number of senders and number of receivers from each document.
From each document that allowed it, we computed an accuracy
correlation—that is, a Pearson product–moment correlation coef-
ficient between deception and judgments of deception. We also
computed at least one cue–judgment correlation—that is, a Pear-
son product–moment correlation between deception judgments
and scores on a potential cue to deception judgment. Often, the
unit for the cue–judgment correlation was a statement. In this case,
a positive correlation implies that the more of the cue that was
exhibited during a statement, the more likely the statement was to
be judged deceptive. In other cases, each sender made multiple
statements, and sender was the unit of analysis for the cue–
judgment correlation. In this case, a positive correlation would
imply that the more of a cue the sender exhibited, the more
deceptive she or he was judged to be. We noted the number of
cases on which the judgment–cue correlation was based. This was
either the number of statements or the number of senders.
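As a small illustration of this coding (all values invented), a cue–judgment correlation with the statement as the unit of analysis can be computed as follows:

```python
# Hypothetical example: eight statements, with the amount of a cue shown in
# each statement and the proportion of receivers who judged it a lie.
import numpy as np

cue_amount = np.array([1, 4, 2, 5, 3, 0, 6, 2])                 # e.g., pauses
prop_judged_lie = np.array([.20, .55, .35, .60, .45, .15, .70, .30])

# A positive r means that statements showing more of the cue were more
# likely to be judged deceptive.
print(np.corrcoef(cue_amount, prop_judged_lie)[0, 1])
```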
Results
Characteristics of the literature. We found 128 documents
that satisfied our criteria. Of these documents, 107 were published
and 21 were unpublished. The earliest document was dated 1964,
and the latest was dated 2010. Searching through these documents,
we found 153 independent sender samples. These documents in-
cluded a total of 4,638 senders and 18,837 receivers. In the median
study, 88 receivers judged the veracity of 16 senders. Researchers
reported 531 cue–judgment correlations—that is, Pearson product–
moment correlations between deception judgments and a cue to
those judgments. In 43 cases (that is, 8.1% of the 531), a researcher
stated that the relation between perceived deception and a cue was
not significant, without reporting anything more. We treated these
as r = 0. In all other cases, we analyzed the reported correlation
coefficients.
In 57 of the 153 sender samples, receivers classified partici-
pants’ statements as either lies or truths; in 36 samples, receivers
rated veracity on multipoint scales; in 35 samples, participants
rated senders’ honesty; and in 25 samples, they rated senders on an
honesty-related attribute (e.g., trustworthiness). Senders were
treated in one of three ways. In deception experiments, senders
were required to lie or tell the truth on an experimenter-specified
topic. In cue experiments, senders were required to exhibit (or not
exhibit) a particular behavior. In observational studies, senders
received no experimental instructions before having their veracity
judged. Deception experiments, cue experiments, and observa-
tional studies contributed 72, 56, and 25 sender samples to the
current database, respectively.
Judgment cues. From these data, we abstracted 81 distinct judgment cues, aggregated data for each cue within sender sample, converted the 531 Pearson product–moment correlations to Fisher's Z transforms, and cumulated the Fisher's Zs for each cue with random-effects techniques¹ (Lipsey & Wilson, 2001). We coded each judgment–cue correlation as positive if perceivers inferred deception from more of the cue and coded it as negative if perceivers inferred deception from less of the cue. Table 1 displays relevant results for the 66 cues that had been studied in more than one sample. Appearing on each line of the table are an identification number for the cue from Appendix A in an earlier review by DePaulo and colleagues (2003), the name of the cue, the number of samples in which that cue was studied, a Pearson r corresponding to the mean weighted Fisher's Z for the relation of that cue to perceived deception, a 95% confidence interval (CI) for that mean relation (expressed in terms of r), and a between-samples true standard deviation in the population correlation coefficient for the
relation between the cue and perceived deception.
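To spell out the cumulation step, here is a sketch of the r to Fisher's Z to random-effects to r pipeline with invented sample values. It uses the DerSimonian–Laird estimator of the between-sample variance, which is one standard random-effects approach; the article's computations follow Lipsey and Wilson (2001) and may differ in detail.

```python
# Illustrative sketch only: four invented cue-judgment correlations.
import numpy as np

r = np.array([0.30, 0.10, 0.45, 0.22])   # cue-judgment rs from four samples
n = np.array([40, 25, 60, 32])           # statements (or senders) per sample

z = np.arctanh(r)                        # Fisher's Z transform
v = 1.0 / (n - 3)                        # within-sample variance of each Z

# DerSimonian-Laird estimate of the between-sample variance (tau^2)
w_fixed = 1.0 / v
z_fixed = np.sum(w_fixed * z) / np.sum(w_fixed)
q = np.sum(w_fixed * (z - z_fixed) ** 2)
c = np.sum(w_fixed) - np.sum(w_fixed ** 2) / np.sum(w_fixed)
tau2 = max(0.0, (q - (len(z) - 1)) / c)

w = 1.0 / (v + tau2)                     # random-effects weights
z_mean = np.sum(w * z) / np.sum(w)
print(np.tanh(z_mean))                   # back-transformed mean weighted r
```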
As is indicated in the table, 41 of 66 cues (that is, 62.12%) have
a statistically significant relation to perceived deception, at a
per-cue alpha-level of .05. In light of the large number of cues
being assessed, it should also be mentioned that 27 of 66 cues have
a statistically significant relation to perceived deception at a more
stringent per-cue alpha level of .001. Of the 66 cues, 21 have
relations with perceived deception that vary significantly across
samples at p < .001.
Deception judgments are more strongly related to some cues than to others. Of the 66 cues in Table 1, two have a Pearson product–moment correlation with perceived deception that
equals or exceeds .50, in absolute value. As these strongest cor-
relations indicate, people who appear incompetent are judged to be
deceptive, as are people whose statements do not place events
within their context. Eleven other cues have relations with per-
ceived deception that yield absolute rs between .40 and .50. These
indicate that people are judged to be deceptive if they fidget with
objects, sound uncertain, and appear ambivalent or indifferent.
They are judged to be truthful if they sound immediate, if their face
¹ We also conducted a fixed-effects meta-analysis on these data and obtained similar results.
Table 1
Cues to Perceived Deception and Actual Deception

ID | Cue name | k_Per | N_Per | r_Per | 95% CI | SD | k_Dec | N_Dec | r_Dec
1 | Response length | 19 | 1,299 | −.12* | [−.22, −.02] | .18 | 49 | 1812 | −.04
4 | Details | 5 | 676 | −.37** | [−.51, −.21] | .16 | 24 | 883 | −.20
5 | Sensory information | 3 | 340 | −.35 | [−.67, .08] | .38 | 4 | 135 | −.24
8 | Block access to information (e.g., refusal to discuss certain topics) | 11 | 1,840 | .33** | [.18, .47] | .25 | 5 | 219 | .10
9 | Response latency | 16 | 1,002 | .18* | [.06, .29] | .20 | 32 | 1330 | −.02
10 | Speech rate | 15 | 745 | −.21* | [−.35, −.06] | .24 | 23 | 806 | .07
12 | Plausibility | 11 | 1,103 | −.47** | [−.57, −.35] | .21 | 9 | 395 | −.11
13 | Logical structure | 7 | 563 | −.34** | [−.45, −.22] | .12 | 6 | 223 | −.16
14 | Ambivalent (communication seems internally inconsistent or discrepant) | 7 | 502 | .49** | [.23, .69] | .41 | 7 | 243 | .19
15 | Involved | 5 | 622 | −.42** | [−.59, −.21] | .23 | 6 | 214 | .05
16 | Verbal and vocal involvement | 5 | 362 | −.33** | [−.47, −.18] | .14 | 7 | 384 | −.09
17 | Expressive face | 6 | 701 | −.18* | [−.33, −.02] | .17 | 3 | 251 | .04
18 | Illustrators | 9 | 430 | .03 | [−.23, .29] | .36 | 16 | 834 | −.05
19 | Verbal immediacy (e.g., the use of active voice and present tense) | 2 | 104 | .11 | [−.09, .30] | .00 | 3 | 117 | −.16
22 | Self references (e.g., use of personal pronouns) | 11 | 648 | −.18* | [−.33, −.03] | .23 | 12 | 595 | −.01
23 | Mutual references (references to themselves and others) | 2 | 120 | .22 | [−.12, .51] | .20 | 5 | 275 | −.11
24 | Other references (references to others, e.g., use of third person pronouns) | 3 | 168 | .24* | [.00, .45] | .16 | 6 | 264 | .09
25 | Vocal immediacy (impressions of directness) | 13 | 2,224 | −.44** | [−.54, −.33] | .22 | 7 | 373 | −.30
27 | Eye contact | 19 | 1,178 | −.15** | [−.21, −.08] | .05 | 32 | 1491 | .00
28 | Gaze aversion | 5 | 202 | .28** | [.13, .41] | .06 | 6 | 411 | .05
31 | Vocal uncertainty (impressions of uncertainty and insecurity, lack of assertiveness) | 10 | 826 | .43** | [.28, .56] | .25 | 10 | 329 | .14
34 | Shrugging | 6 | 382 | −.16* | [−.27, −.04] | .07 | 6 | 321 | .02
35 | Non-ah disturbances (e.g., stutters, grammatical errors, false starts) | 8 | 376 | .09 | [−.05, .22] | .11 | 17 | 751 | .01
37 | Unfilled pauses (periods of silence) | 13 | 718 | .27** | [.12, .40] | .22 | 15 | 655 | .01
38 | Ah disturbances | 14 | 692 | .22** | [.15, .29] | .00 | 16 | 805 | .03
40 | Total disturbances (ah and non-ah speech disturbances) | 11 | 832 | .09* | [.02, .16] | .02 | 7 | 283 | −.05
42 | Non fluent (miscellaneous speech disturbances) | 9 | 845 | .25** | [.10, .37] | .19 | 8 | 144 | .19
43 | Active body | 3 | 58 | −.10 | [−.36, .18] | .00 | 4 | 214 | .02
44 | Postural shifts | 12 | 574 | .09* | [.00, .18] | .04 | 29 | 1214 | .02
45 | Head movements | 9 | 417 | −.08 | [−.23, .07] | .16 | 14 | 536 | −.02
46 | Hand gestures | 10 | 452 | −.18** | [−.28, −.07] | .07 | 29 | 951 | −.01
47 | Arm movements | 2 | 232 | .37** | [.26, .48] | .00 | 3 | 52 | −.19
48 | Foot/leg movements | 5 | 138 | .14 | [−.04, .30] | .00 | 28 | 857 | −.07
49 | Friendly | 13 | 987 | −.35** | [−.46, −.23] | .00 | 6 | 216 | −.18
50 | Cooperative | 14 | 1,018 | −.41** | [−.54, −.25] | .29 | 3 | 222 | −.32
51 | Attractive | 20 | 1,528 | −.25** | [−.33, −.16] | .16 | 6 | 84 | −.02
52 | Negative statements | 9 | 496 | .05 | [−.16, .26] | .29 | 9 | 397 | .10
53 | Pleasant voice | 2 | 175 | −.31** | [−.44, −.17] | .00 | 4 | 325 | −.04
54 | Pleasant face | 6 | 370 | −.44** | [−.60, −.25] | .22 | 13 | 635 | −.05
55 | Head nodding | 5 | 291 | −.01 | [−.13, .10] | .00 | 16 | 752 | .01
58 | Smiling | 21 | 1,422 | −.02 | [−.15, .10] | .26 | 27 | 1313 | .00
61 | Nervous | 15 | 1,208 | .30** | [.17, .42] | .24 | 16 | 571 | .12
63 | Pitch | 5 | 298 | .07 | [−.16, .29] | .22 | 12 | 294 | .14
64 | Relaxed posture | 2 | 109 | −.22 | [−.69, .38] | .43 | 13 | 488 | −.15
66 | Blinking | 8 | 372 | .14* | [.03, .25] | .01 | 17 | 850 | .03
67 | Object fidgeting | 2 | 130 | .49 | [−.24, .86] | .52 | 5 | 420 | −.02
68 | Self-fidgeting | 11 | 630 | .01 | [−.13, .15] | .15 | 18 | 991 | .00
69 | Facial fidgeting | 3 | 164 | .18 | [−.19, .50] | .27 | 7 | 444 | .04
70 | Fidgeting | 9 | 489 | .03 | [−.12, .17] | .16 | 14 | 495 | .04
75 | Self-deprecating (e.g., unfavorable, self-incriminating details) | 4 | 335 | −.08 | [−.36, .22] | .27 | 3 | 64 | .07
76 | Embedding (placing events within their spatial and temporal context) | 2 | 292 | −.50 | [−.84, .12] | .47 | 6 | 159 | −.23
84 | Behavior segments (perceived number of behavioral units) | 3 | 294 | .00 | [−.22, .21] | .16 | 1 | 54 | −.23
87 | Realistic | 2 | 388 | −.47** | [−.69, −.18] | .23 | 1 | 40 | −.21
90 | Indifferent (speaker seems unconcerned) | 3 | 127 | .42* | [.26, .56] | .00 | 2 | 100 | .45
91 | Not spontaneous (statement seems planned or rehearsed) | 2 | 175 | .48* | [.11, .74] | .29 | 2 | 46 | .19
92 | Thinking hard | 4 | 257 | .31** | [.19, .42] | .00 | 1 | 8 | .29
(table continues)
appears pleasant, if they are cooperative and involved, and if their
statements seem plausible, realistic, and spontaneous.
For purposes of establishing benchmarks for stronger and
weaker cues to deception judgment, we noted the absolute value of
the r corresponding to each judgment–cue mean weighted Fisher's Z. Across all the cues in Table 1, the median absolute r is .25; the absolute rs at the first and third quartiles are .11 and .39.
Let us compare certain cues to deception judgment with peo-
ple’s self-reported beliefs about deception. As mentioned earlier,
the most commonly self-reported cue is gaze aversion.
Table 1 displays cue–judgment correlations for two variables
related to this belief. Consistent with the belief that liars “can’t
look you in the eye,” people are likely to be judged deceptive if
they avoid eye contact and avert gaze (for the relation of these two
variables to perceived deception, rs = −.15 and .28, respectively). The modest size of these correlations is, however, noteworthy. Eye contact has a weaker relation to deception judgments than most of the cues in Table 1—the median cue yielding an absolute r = .25. Although gaze aversion is a somewhat stronger cue to
deception judgments, it is still weaker than 30 of the judgment
cues in Table 1.
Meta-Analysis 2: Cues to Perceived and Actual
Deception
In Meta-Analysis 2, we sought to test the wrong subjective cue
hypothesis. From Meta-Analysis 1, we had data on a large number
of cues to perceived deception; in a second meta-analysis, we
sought to compare them with cues to actual deception. For data on
the latter, we turned to work by DePaulo et al. (2003). The wrong
subjective cue hypothesis would be discredited if we obtained a
strong positive correlation between the two sets of cues.
Method
For comparison with cues to perceived deception, we sought
actual cues to deception. Hereafter, we call the former judgment
cues and the latter deception cues. We were interested in any
variable that had been studied as a judgment cue in more than one
sample and that had also been studied as an actual deception cue
in more than one sample. We found 57 such cues. For purposes of
comparing judgment cues with deception cues, it was necessary
that the strength of the two types of cues be expressed in the same
statistical metric. In Meta-Analysis 1, we expressed the strength of
judgment cues in terms of Pearson product–moment correlations,
whereas in their earlier meta-analysis DePaulo et al. (2003) ex-
pressed the strength of deception cues in terms of a standardized
mean difference. DePaulo et al. (2003) graciously supplied us with
their study-by-study data. For the present work, we transformed
each standardized mean difference in the DePaulo et al. (2003)
database to a Pearson product–moment correlation coefficient. We
then transformed each rto a Fisher’s Z, cumulated the Zs with
standard methods, then back-transformed the weighted mean Fisher's Z to an r—precisely as we had for judgment cues in Meta-
Analysis 1. For the resulting actual deception cue correlations, see
the rightmost column of Table 1. Again, these data were collected
by DePaulo et al. (2003). Positive correlations imply that people
display more of the cue when lying than when telling the truth.²
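The d-to-r step can be sketched as follows, using one common conversion formula and invented group sizes; the article does not spell out the exact formula it used, so this is illustrative only:

```python
# Hypothetical example of converting a standardized mean difference to r.
import numpy as np

def d_to_r(d, n_lie, n_truth):
    """Common d-to-r conversion; a corrects for unequal group sizes."""
    a = (n_lie + n_truth) ** 2 / (n_lie * n_truth)
    return d / np.sqrt(d ** 2 + a)

d = -0.30                                 # e.g., fewer details when lying
r = d_to_r(d, n_lie=20, n_truth=20)
z = np.arctanh(r)                         # Fisher's Z, ready for cumulation
print(r, z)
```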
² The entries in Table 1 are simple correlation coefficients, not standardized multiple regression coefficients. In examining the table, readers may properly regard each r_Per and each r_Dec as utilization and validity coefficients for a lens model that predicts deception from a single cue. Thus, for judging deception from response length, r_Per = −.12 and r_Dec = −.04. These do not represent utilization and validity coefficients for response length in a lens model that predicts deception from all 66 cues in Table 1. As meta-analysts, we cannot determine the latter multicue utilization and validity coefficients because the required multiple regression results are not reported in this literature. For some results on multicue lens models of deception, see Meta-Analysis 4.
Table 1 (continued)

ID | Cue name | k_Per | N_Per | r_Per | 95% CI | SD | k_Dec | N_Dec | r_Dec
93 | Serious (speaker seems formal) | 4 | 175 | −.30 | [−.55, .01] | .27 | 4 | 23 | .00
115 | Competent | 6 | 536 | −.59** | [−.75, −.36] | .32 | 3 | 90 | −.02
116 | Ingratiating | 2 | 133 | −.26* | [−.41, −.09] | .00 | 4 | 64 | .00
134 | Admit responsibility | 2 | 96 | .09 | [−.21, .37] | .15 | 2 | 123 | .16
— | Nonverbal deception pose (communicator enacts typical nonverbal deception cues) | 12 | 2,139 | .38** | [.34, .42] | .04 | — | — | —
— | Verbal deception pose (communicator enacts typical verbal deception cues) | 10 | 1,885 | .35** | [.22, .46] | .21 | — | — | —
— | Messy clothes | 2 | 102 | .37 | [−.11, .71] | .32 | — | — | —
— | Weird behaviors | 7 | 293 | .29* | [.05, .50] | .25 | — | — | —
— | Foreign language | 5 | 184 | −.19 | [−.57, .26] | .45 | — | — | —
— | Baby face | 14 | 1,441 | −.37* | [−.49, −.24] | .23 | — | — | —

Note. ID refers to the identification number in Appendix A of B. M. DePaulo et al. (2003). For a further description of the cues, see that appendix. Positive entries imply that more of the cue is associated with deception or perceived deception. The six last cues in the table were not included in the meta-analysis by B. M. DePaulo et al. (2003), hence the missing data on ID number, k_Dec, N_Dec, and r_Dec. k_Per = number of studies that examined the association between perceived deception and the cue; N_Per = number of lie-/truth-tellers in those studies; r_Per = r corresponding to the mean Fisher's Zr for the association between perceived deception and the cue; 95% CI = a 95% confidence interval for the population correlation coefficient between perceived deception and the cue; SD = the square root of the true variance of the population correlation coefficient between perceived deception and the cue; k_Dec = number of studies that examined the association between actual deception and the cue; N_Dec = number of lie-/truth-tellers in those studies; r_Dec = r corresponding to the mean Fisher's Zr for the association between actual deception and the cue.
* p < .05 (relation differs significantly from 0). ** p < .001 (relation differs significantly from 0).
Results
We were especially interested in those attributes that had been
examined in more than one sample as a cue to perceived deception
and in more than one sample as a cue to actual deception. For each
of those 57 cues, we compared a mean weighted Fisher's Z for the relation of the cue to perceived deception with a mean weighted Fisher's Z for the relation of that cue to deception. Correlating these Fisher's Zs across cues, it is evident that the relation of a cue to deception is positively associated with its relation to perceived deception (r = .59). Although the correlation between deception
cues and judgment cues is not perfect, it is positive and substantial
in size. The correlation between actual deception cues and judg-
ment cues is much larger, for example, than the correlation be-
tween deception judgments and deception itself. Again, the latter
typically yields r = .21. The wrong subjective cue hypothesis
would not have predicted such a close correspondence between
deception cues and judgment cues.
Kraut (1980) was the first to suggest that behaviors are more
strongly related to perceived deception than actual deception. To
assess this claim, we compared two absolute values for each of 57
cues—the absolute mean weighted Fisher’s Zr for the association
of that cue to deception and the absolute mean weighted Fisher’s
Zr for the cue’s association to perceived deception. Averaging
across the cues, the mean absolute Fisher’s Zr for the association
of cues to deception is .09, and the mean absolute Fisher’s Zr for
the association of cues to judgments of deception is .25. By an
unweighted test with cue as the unit of analysis, these means differ significantly, t(56) = 8.22, p < .001. Thus, it is true that
behaviors are more strongly related to judgments of deception than
to actual deception.
We compared the relation of each cue to deception with its
relation to perceived deception—comparing, in particular, the two
relevant weighted mean Fisher’s Zrs. We set a per-cue two-tailed
alpha-level of .05. Results show that for 22 of the 57 cues (that is,
38.59%), the two relations are significantly different. Inspecting the means, it is apparent that all 22 significant differences are ones
in which the judgment cue is stronger than the deception cue.
We examined data from the 22 cues that have a significantly
different relation to actual deception than to perceived deception.
Examination showed that 14 of those cues had the same directional
relation to actual deception and perceived deception. None of the
remaining eight cues had a statistically significant relation to
deception at p < .05. There is no evidence here that perceivers
infer deception from truth cues or infer truthfulness from deception
cues.
As noted above, there is a general tendency for cues to be more
strongly related to perceived deception than actual deception. In
fact, the results above indicate that the mean absolute Zr for
perceived deception is 2.77 times as large as the mean absolute Zr
for actual deception (those values being .25 and .09). We won-
dered whether this general size difference could explain the 22
statistically significant differences between cues to actual decep-
tion and to perceived deception. To assess this possibility, we
noted the mean weighted Fisher’s Zrs for the relevant 22
judgment–cue correlations and divided each of these values by
2.77 (the ratio of the mean absolute relation between cues to
perceived deception and cues to actual deception). We then tested
for differences between the relation of each cue to deception and this deflated measure of the relation between that cue and perceived deception—the deflation offsetting a general tendency
for judges’ cue utilization coefficients to exceed validity coeffi-
cients. Although we found 22 significant differences between cues
to actual deception and cues to perceived deception in the raw
analyses above, this second analysis revealed only one significant
difference at p < .05, for the cue arm movements. With this one
exception, differences between cues to deception and to perceived
deception are not cue-specific. Rather, they reflect a general ten-
dency for judges’ utilization coefficients to be larger than validity
coefficients.
Meta-Analysis 3: Within-Study Evidence
Meta-Analysis 2 revealed a strong positive correlation between
cues to actual deception and cues to perceived deception. This
correlation seems to discredit the wrong subjective cue hypothesis.
Before rejecting that hypothesis, however, we must acknowledge
one of the features of Meta-Analysis 2. It incorporated data from
all studies of actual deception cues and judgment cues. Many of
the studies of deception cues did not provide data on deception
judgments. Thus, our data on judgment cues came from one set of
studies, and our data on deception cues came from another set of
studies. The two sets of studies differ in unknown ways, and these
differences complicate any interpretation of the meta-analytic re-
sults we have reported. For a controlled comparison of actual
deception cues and judgment cues, we sought within-study evi-
dence—hoping to review all results to date from researchers who
had assessed both actual deception cues and judgment cues in the
very same study.
Method
We sought studies in which researchers had measured both cues
to actual deception and cues to deception judgment. Planning to
correlate the two sets of cues within each study, we restricted
attention to instances in which a researcher had reported correla-
tions among deception and three or more cues, as well as corre-
lations among perceived deception and those same cues on the
same set of senders. From the studies uncovered for Meta-Analysis
1, we found 25 such sender samples. They included a total of 1,422
senders and judgments of those senders made by a total of 2,250
individuals. From each of these samples, we converted each Pear-
son's r for a deception cue or judgment cue to a Fisher's Zr. Then
we correlated the Zrs for actual deception cues with the Zrs for
judgment cues. This resulted in a cross-cue Pearson’s r. It assesses
the relation between actual deception cues and judgment cues
within a particular sample.
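The within-sample computation can be sketched as follows (all correlations invented): the Fisher-transformed cue–deception correlations are correlated with the Fisher-transformed cue–judgment correlations for the same cues, giving one cross-cue r per sender sample.

```python
# Hypothetical sender sample with five cues.
import numpy as np

r_dec = np.array([-0.10, 0.05, -0.20, 0.15, 0.02])   # cue-deception rs
r_per = np.array([-0.35, 0.10, -0.30, 0.40, 0.05])   # cue-judgment rs

z_dec, z_per = np.arctanh(r_dec), np.arctanh(r_per)
cross_cue_r = np.corrcoef(z_dec, z_per)[0, 1]         # one r per sample
print(cross_cue_r)

# Across samples, these cross-cue rs would themselves be Fisher-transformed
# and cumulated with random-effects methods, as in Meta-Analysis 1.
```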
Results
Over the 25 sender samples, correlations for the relation be-
tween actual deception cues and judgment cues varied widely. The
maximum cross-cue r was .97, and the minimum was −.68.
Twenty-two of the 25 cross-cue correlations were positive. The
median correlation was .54. To combine these cross-cue correla-
tions, we began by converting each rto a Fisher’s Zr, then applied
standard random-effects meta-analytic methods. In these aggre-
gated within-study results, the more strongly a cue is associated
with deception, the more strongly it is associated with perceived
deception (mean weighted Zr = .90). The corresponding Pearson's r is .72, 95% CI [.70, .74].
For 22 of these samples, we also had a measure of judges’
accuracy in discriminating lies from truths. We expressed judge
accuracy as a correlation coefficient and then converted this r to a
Fisher’s Zr. The stronger the positive relation between deception
cues and judgment cues in a study, the greater is judges’ accuracy
in that study (for the correlation between the two sets of Zrs, r = .60). As the lens model shows, the accuracy of a deception judg-
ment will increase if perceivers use cues that in fact reflect deceit.
Within each of 25 sender samples, we also noted the means of
two sets of absolute Fisher’s Zrs—one set indexing the relations of
various cues to actual deception and the other set indexing the
relations of those cues to perceived deception. Averaging across
the 25 sender samples, we find that the mean absolute Fisher’s Zr
for the relation of a behavior to deception is .17, and the mean
absolute Fisher’s Zr for the relation of a behavior to perceived
deception is .27. As was evident in the cross-study comparison,
these within-study means indicate that perceivers’ coefficients for
utilizing cues to deception are larger than the validity coefficients
for the cues, t(24) = 3.89, p < .005.
Perhaps our averaging of all cues to deception is misguided.
Perhaps perceivers intuit the behavior that is most strongly related
to deception in a particular situation and base their judgments in
that situation on this optimal cue. Averaging across 25 sender
samples, the mean of the maximum absolute Fisher’s Zr between
any cue and deception is .39, whereas the corresponding mean of
the maximum absolute Fisher’s Zr between any cue and perceived
deception is .61. By standard unweighted methods, this is a sig-
nificant difference, t(24) = 3.72, p < .005. Thus, the validity of
the optimal cue is lower than the largest utilization coefficient
of any cue. As usual, cues are more strongly related to judg-
ments of deception than to deception itself.
Meta-Analysis 4: Multiple Cues
The purpose of Meta-Analysis 4 is to investigate whether inac-
curacy in lie judgments is mainly due to incorrect decision-making
strategies or due to a lack of valid cues to deception and to
establish the matching of cue-based predictions of deception with
predictions of deception judgments. In the analyses discussed so
far, it is assumed that perceivers judge deception from a single cue.
Single-cue lens analyses are implicit in the correlation coefficients
of Table 1. Perhaps perceivers do not judge deception from a
single cue. Perhaps they judge it from multiple cues, and deception
gives rise to multiple cues. In that case, the correlation coefficients
of Table 1 would not be appropriate validity and utilization coef-
ficients.
As mentioned above, lens model analysis reveals that the cor-
relation coefficient between actual deception and perceived decep-
tion is the sum of two terms, one of which involves the correlation
between errors in predicting a sender’s deceptiveness and that
same sender’s perceived deceptiveness. Assuming that these error
terms are uncorrelated, the correlation between actual deception
and perceived deception is the product of three factors: R_Dec, R_Per, and G, where R_Dec is the multiple R for predicting deception from cues, R_Per is the multiple R for predicting perceived deception from those same cues, and G is the correlation between predictions of senders' deception from cues and predictions of their perceived deception from those same cues. In order to use the multiple-cue lens model for deception, it is necessary to estimate these three factors: R_Dec, R_Per, and G. Let us do so.
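For reference, the standard single-equation form of the lens model (Tucker, 1964) from which this reasoning proceeds can be written, in the notation of the text, as

```latex
r_{\mathrm{acc}} \;=\; G \, R_{\mathrm{Dec}} \, R_{\mathrm{Per}}
  \;+\; C \, \sqrt{1 - R_{\mathrm{Dec}}^{2}} \, \sqrt{1 - R_{\mathrm{Per}}^{2}}
```

where C is the correlation between the residuals of the two cue-based regressions. Setting C = 0 yields the three-factor product r_acc = G × R_Dec × R_Per used in the analyses that follow.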
Method
We sought studies in which deception had been predicted from
two or more cues. We searched for statistical analyses that made
these predictions and reported a statistic correlating predicted
deception with actual deception. Some authors reported discrimi-
nant analyses; others reported logistic regressions, and still others
reported ordinary multiple regressions. We sought results from all
three kinds of analyses, as long as two conditions were met. First,
we required that the variables used to predict deception be chosen
on a priori nonstatistical grounds. We did not use results from
stepwise analyses or analyses that chose as predictors of deception
only those variables that had shown a significant univariate rela-
tion to deception. Such analyses would overstate the relation
between deception and deception cues, as Thompson (1995) ex-
plains. Second, we required that the researcher report (or that we
could determine) an adjusted (or shrunken) multiple correlation
coefficient for the predictability of deception from cues. We used
these same criteria in searching for analyses that predicted per-
ceived deception from two or more cues.
We found 59 multiple-cue predictions of deception that satisfied
our criteria. These represented predictions of deception by 3,428
senders. From each of the 59 sender samples, we coded a multiple
correlation coefficient (an R) for predicting deception from two or
more cues, defining each R_Dec as the square root of an adjusted (or shrunken) R².
We found 30 multiple-cue predictions of perceived deception that satisfied the criteria. These represented data from 1,178 senders and 3,497 judges. From each of the 30 sender samples, we again coded R_Per as the square root of an adjusted (or shrunken) R².
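As an illustration of this coding rule, a shrunken multiple correlation can be recovered from a reported R², the number of senders, and the number of predictor cues with the standard adjusted-R² formula. The helper name adjusted_R, the choice of that particular shrinkage formula, and the numbers are assumptions for illustration; the meta-analysis relied on the adjusted values reported in, or determined from, the primary studies.

```python
import math

def adjusted_R(r_squared, n_senders, n_cues):
    """Shrunken multiple correlation: the square root of the standard
    adjusted R^2, truncated at zero if the adjustment turns negative."""
    adj_r2 = 1.0 - (1.0 - r_squared) * (n_senders - 1) / (n_senders - n_cues - 1)
    return math.sqrt(max(adj_r2, 0.0))

# Hypothetical study: R^2 = .30 for predicting deception from 5 cues in 80 senders.
print(f"R_Dec = {adjusted_R(0.30, n_senders=80, n_cues=5):.2f}")
```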
Results and Discussion
For predicting actual deception from multiple cues, the median
R_Dec is .46. The interquartile range is .24 to .65. The number of cues entering into these Rs ranges from 2 to 38. Across the 59 multiple correlation coefficients, there is no relation between the magnitude of an R_Dec and the number of cues entering into it (r = .03, ns).
For a meta-analytic approach to combining multiple correlation
coefficients, we used methods suggested by Konishi (1981). We
began by applying a Fisher's Zr transformation to each R_Dec and weighting it by N − p − 1, where N is the number of senders and p is the number of cues from which deception was predicted. We then computed a mean inverse-variance weighted Fisher's Zr and back-transformed it to an R. For predicting deception from two or more cues, the R corresponding to the mean of 59 weighted Fisher's Zrs is .36, 95% CI [.33, .38].
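A minimal sketch of this aggregation, following the weighting described in the text (each R Fisher-transformed and weighted by N − p − 1) with invented inputs and a helper name of our own:

```python
import numpy as np

def combine_multiple_Rs(Rs, Ns, ps):
    """Fisher-transform each multiple R, weight it by N - p - 1,
    take the weighted mean Zr, and back-transform to an R."""
    Rs, Ns, ps = (np.asarray(x, float) for x in (Rs, Ns, ps))
    z = np.arctanh(Rs)
    w = Ns - ps - 1.0
    return np.tanh(np.sum(w * z) / np.sum(w))

# Illustrative R_Dec values with their sample sizes and numbers of predictor cues.
R_combined = combine_multiple_Rs(Rs=[0.46, 0.24, 0.65, 0.50],
                                 Ns=[60, 120, 40, 90],
                                 ps=[5, 3, 12, 8])
print(f"Combined R = {R_combined:.2f}")
```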
We were also interested in the prediction of perceived deception
from multiple cues. The median R_Per is .61. The interquartile range is .46 to .67. The number of cues entering into these multiple Rs ranges from 2 to 16 and is uncorrelated with the magnitude of the Rs (r = −.03, ns).
As in the analysis above, we converted each R_Per to a Fisher's Zr and weighted it by N − p − 1. For predicting perceived deception from two or more cues, the R corresponding to the mean of 30 weighted Fisher's Zrs is .63, 95% CI [.60, .65]. From multiple cues, it is
easier to predict perceived deception than deception. This is ap-
parent in the multiple correlation coefficients we have reported.
Moreover, this difference in multiple correlation coefficients is
consistent with some results reported above, in which individual
behaviors correlate more strongly with perceived deception than
actual deception.
The typical relation between actual deception and perceived deception yields an accuracy of r = .21 (Bond & DePaulo, 2006). As shown by the meta-analytic estimates above, deception can be predicted from two or more cues to a degree that typically yields R_Dec = .36. Multiple-cue predictions of deception judgment typically yield R_Per = .63. We cannot calculate G from individual studies. However, by manipulating Equation 1, above, in the manner suggested by Stenson (1974), we infer that

G = r_acc / (R_Dec × R_Per) = .21 / (.36 × .63) = .93. (2)

Thus, behaviorally based predictions of deception are very strongly correlated with behaviorally based predictions of perceived deception (r = .93; see Footnote 3), and the accuracy of deception judgments can be quantitatively decomposed as

r_acc = R_Dec × R_Per × G (from above), (3)
.21 = .36 × .63 × .93.

As we can see in this equation, the accuracy of deception judgments is most constrained by the lack of valid behavioral cues to deception, less constrained by judges' unreliability in using those cues, and unconstrained by the matching of behaviorally based predictions of deception with predictions of deception judgment.
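The decomposition in Equations 2 and 3 reduces to a few lines of arithmetic; this sketch simply plugs in the meta-analytic estimates reported above.

```python
r_acc = 0.21  # accuracy of deception judgments (Bond & DePaulo, 2006)
R_dec = 0.36  # multiple R for predicting deception from cues
R_per = 0.63  # multiple R for predicting perceived deception from the same cues

# Equation 2: matching index, assuming uncorrelated error terms.
G = r_acc / (R_dec * R_per)
print(f"G = {G:.2f}")                      # ~.93

# Equation 3: accuracy recovered as the product of the three factors.
print(f"r_acc = {R_dec * R_per * G:.2f}")  # ~.21
```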
For purposes of comparison, it may be useful to describe the
results of lens model analyses in other domains. Karelaia and
Hogarth (2008) summarized lens model analyses of human judg-
ments of many attributes other than deception. Across 249 studies,
they found an average accuracy coefficient of .56, much higher
than the .21 accuracy correlation in judgments of deception. They
found that environmental criteria could be predicted by cues with
an average multiple R of .80 and that human judgments of the criterion could be predicted by those cues with an average multiple R of .81. The first is much higher than the .36 predictability of
deception, and the second is somewhat higher than the .63 pre-
dictability of perceived deception. Finally, Karelaia and Hogarth
(2008) found that statistical predictions of environmental criteria
correlated .80 with statistical predictions of judgments of those
criteria. Thus, the matching of deception with deception judgments
(r = .93) is higher than the matching of other criteria with human
judgments of those criteria.
General Discussion
The purpose of this work was to shed new light on deception
and its detection by analyzing judgments of veracity using
Brunswik's (1952) lens model. In particular, we tested the validity of the hypotheses that (a) lie judgments are often inaccurate due to incorrect cue reliance by lie-catchers and (b) lie judgments are often inaccurate due to a lack of valid cues to deception. Our goal was to generate new knowledge about naïve lie detection in two
ways. First, by analyzing judgments of deception using the lens
model, we offer new descriptive information about the character-
istics of lie judgments and why they often fail. This is a question
of basic importance for the theoretical understanding of interper-
sonal perception in general and deception judgments in particular.
Second, by quantifying the constraints on accuracy imposed by the
strategies of the perceiver and by the difficulty of the judgment
task, we can offer some prescriptive information about how accu-
racy in deception detection can be increased. This is a question of
importance for applied psychology because veracity assessments
are critical in a number of settings.
Cues to Deception Judgments
As discussed earlier, the available research typically employs
self-reports to tap the decision-making strategies people employ
when attempting to establish veracity (Global Deception Research
Team, 2006). Deception scholars have noted that self-reports
might not offer entirely valid information about actual decision
making because people may have limited insight into their own
cognitive processes (e.g., Strömwall, Granhag, & Hartwig, 2004).
Still, only a few studies have attempted to go beyond self-reports
to establish the actual correlates of deception judgments (Bond et
al., 1992; Desforges & Lee, 1995; Ruback & Hopper, 1986; Vrij,
1993), and to date there is no quantitative overview of these
studies. The prevailing notion in the literature is that false stereo-
types about deceptive behavior are among the main contributors to the failure
of lie judgments to reach hit rates substantially above chance levels
(Park, Levine, McCornack, Morrison, & Ferrara, 2002).
The results of this meta-analysis contrast with past research in
important ways. In general, the analysis shows that actual corre-
lates of deception judgments differ from those that people report.
In particular, the robust finding from surveys that people associate
deception with lack of eye contact receives little support. Eye
contact is a weaker judgment cue than most of the 66 cues in
Meta-Analysis 1, and gaze aversion is weaker than 30 cues in the
same meta-analysis. Other common self-reported cues to deception
are body movements and fidgeting (Akehurst et al., 1996; Ström-
wall & Granhag, 2003). Similar to the findings on eye behavior, it
seems that the link between these behaviors and deception judg-
ments is weaker than previously thought. Although the relation between arm movements and deception judgments is moderate, the relation to postural shifts is weak, as is the link to head, hand, and foot/leg movements and fidgeting. Among the strongest correlates of deception judgments emerging from this meta-analysis are that people are judged as deceptive when they appear incompetent and ambivalent, and when the statement is implausible and lacks spontaneity. These cues are not commonly reported in studies employing the self-report method (Strömwall, Granhag, & Hartwig, 2004). These results suggest that the behaviors people actually rely on when judging veracity differ markedly from the stereotype previously thought to influence much of lie-catchers' decision making.

Footnote 3: With this estimate, it is assumed that there is no correlation between two error terms: the error from a model predicting deception from cues and the error from a model predicting deception judgments from those same cues. It is possible that these errors are correlated. To our knowledge, no correlation between these error terms has ever been reported in the literature on deception judgment. In the absence of any information about the correlation between error terms in judgments of deception, let us draw on a meta-analysis of 204 lens model studies by Karelaia and Hogarth (2008). There, a 95% CI for the mean estimate of the correlation among lens-model error terms was .02–.06. Plugging these values into the relevant lens model equation (along with the values we computed from the deception detection literature), one would infer that cue-based predictions of deception correlate positively with cue-based predictions of deception judgment, with a correlation coefficient between .73 and .86. We urge future researchers to fit lens models to their data on deception judgments and to report correlations between cue-based predictions of deception and cue-based predictions of deception judgment.
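The sensitivity analysis described in Footnote 3 can be reproduced by allowing a nonzero error-term correlation C in the full lens model equation and solving for G. The sketch below uses the endpoints of the Karelaia and Hogarth (2008) interval together with the Meta-Analysis 4 estimates; the function name is ours.

```python
import math

r_acc, R_dec, R_per = 0.21, 0.36, 0.63

def G_given_error_correlation(C):
    """Solve r_acc = G * R_dec * R_per + C * sqrt(1 - R_dec^2) * sqrt(1 - R_per^2)
    for the matching index G, given an assumed error-term correlation C."""
    residual_term = C * math.sqrt(1 - R_dec**2) * math.sqrt(1 - R_per**2)
    return (r_acc - residual_term) / (R_dec * R_per)

for C in (0.02, 0.06):
    print(f"C = {C:.2f} -> G = {G_given_error_correlation(C):.2f}")
# Yields G of roughly .86 and .73, matching the range reported in Footnote 3.
```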
Even though these results might be surprising, they are consis-
tent with research on cognitive processes demonstrating that self-
knowledge about beliefs, motives, and judgments is often inaccu-
rate (Fiske & Taylor, 2008; G. A. Miller, 1962; Neisser, 1967;
Nisbett & Ross, 1980). This research suggests that when asked to
account for their attributions and judgments, people may rely on a
priori theories rather than actual insight into their thought pro-
cesses simply because these processes might be inaccessible (Nis-
bett & Wilson, 1977). Such an a priori theory could be the
common sense notion that liars experience guilt, shame, and ner-
vousness and that these emotions are evident in nonverbal behav-
iors such as gaze aversion and fidgeting (Bond & DePaulo, 2006).
It is plausible that this naïve theory is a product of deliberate
reasoning produced in response to a question about one’s beliefs
but that actual decision making is driven by intuitive, implicit
cognitive processes that lie partly outside the realm of conscious
awareness (Gigerenzer, 2007).
If people do not base their judgments on the explicit stereotype
of liars as nervous and guilt-stricken, what are the implicit theories
that actually lie behind judgments of deception? Inspecting the
meta-analytic pattern on cues to deception judgments, it seems that
people are likely to judge communicators as deceptive if they
provide implausible, illogical accounts with few details, particu-
larly few sensory details. This is similar to predictions from
theoretical frameworks on self-experienced versus imagined
events that have been employed to study verbal differences be-
tween fabricated and truthful accounts (Johnson & Raye, 1981;
Sporer, 2004). One way to interpret our finding is that people
might be intuitively in tune with what these frameworks call
“reality criteria.” Speculatively, a lifetime of exposure to state-
ments (most of them likely to be truthful) might serve to create an
intuitive feeling for the characteristics of self-experienced events.
The results suggest that people seem deceptive if they sound
uncertain and appear indifferent and ambivalent. This fits with one
of the main predictions from the self-presentational perspective (De-
Paulo, 1992; DePaulo et al., 2003), which states that deceptive ac-
counts might be less embraced by communicators than are truthful
ones for several reasons: Liars might lack familiarity with the domain
they are describing, and they might have less emotional investment in
the claims they are making. Also, awareness of the risk of being
disproven might give rise to ambiguous and vague statements
(Vrij, 2008). In sum, with regard to both verbal content and
nonverbal behavior, people’s cue reliance seems reasonably in line
with what actually characterizes deception.
The overall pattern, that implicit notions about deceptive behav-
ior are more accurate than explicit ones, is supported by research
on what is referred to as indirect deception detection (Vrij, Edw-
ard, & Bull, 2001). This research shows that people might perceive
and process deceptive and truthful statements differently in ways
that explicit measurements may not pick up on (Anderson, 1999;
DePaulo & Morris, 2004; Anderson, DePaulo, Ansfield, Tickle, &
Green, 1999; DePaulo, Jordan, Irvine, & Laser, 1982; DePaulo,
Rosenthal, Green, & Rosenkrantz, 1982; Hurd & Noller, 1988).
For example, a meta-analysis on the relation between accuracy and
confidence in deception judgments showed that people were more
confident when they saw a truthful statement than when they saw
a deceptive one, regardless of what explicit veracity judgment they
made (DePaulo et al., 1997). A study on deception detection in
close relations showed that even though explicit veracity judg-
ments were not accurate, judges reported feeling more suspicious
when watching a deceptive statement than when watching a truth-
ful statement (Anderson, DePaulo, & Ansfield, 2002). It might be
fruitful in the future for researchers to further explore the relation
between explicit and implicit processes in deception judgments.
For example, research could investigate the effects of increasing
perceivers’ reliance on intuitive impressions of veracity (e.g., by
asking judges to make decisions under cognitive load), with the
expectation that this might increase judgment accuracy (see Al-
brechtsen, Meissner, & Susa, 2009). However, it is not likely that
the improvement would be substantial, given our finding that
weaknesses in the validity of cues to deception constrain accuracy.
It is worth noting that the strongest cues to deception judgments
are not single behaviors but global impressions, such as ambiva-
lence. It is plausible that such impressions consist of a variety of
more minute behavioral changes, possibly on both verbal and
nonverbal levels. However, to our knowledge, no study has at-
tempted to examine the components of these impressions. Given
the finding from DePaulo et al. (2003) that global impressions are
stronger cues to deception than individual cues, and our finding
that such impressions are the best predictors of deception judg-
ment, we encourage future research to explore the behavioral
components of these impressions further.
It is possible that cue reliance is not uniform across all groups
and settings. For example, legal professionals (e.g., customs and
police officers) and lay people might differ in the cues on which
they base their judgments. On the basis of previous research (e.g.,
Bond & DePaulo, 2006), we do not expect that legal professionals
will outperform lay judges. Speculatively, due to the salience of
deception judgments in the life of legal professionals, the con-
scious stereotype of liars as fidgety and gaze aversive may become
chronically accessible (Fiske & Taylor, 2008). If this is true, legal
professionals’ cue reliance should be more similar to the pattern
obtained using the self-report method (and less accurate). As we
did not investigate moderator variables in the current study, we
encourage future research to explore this and other variables
possibly moderating cue reliance.
Judgment Cues Versus Deception Cues: Testing the
Wrong Subjective Cue Hypothesis
The first meta-analysis does not provide support for the wrong
subjective cue hypothesis. To subject the wrong subjective cue
hypothesis to a quantitative test, we conducted a second meta-
analysis in which we compared cues to deception judgments
(judgment cues) with behavioral cues to deception (deception
cues). The wrong subjective cue hypothesis predicts a discrepancy
between judgment cues and deception cues, implying that people
rely on cues that are either unrelated to deception or related to
deception in the opposite direction of their expectations. Meta-
Analysis 2 does not provide support for this prediction. In fact, at
least four interrelated pieces of evidence running counter to the wrong
subjective cue hypothesis emerged from our analyses. First, we
found a strong positive correlation between judgment cues and
deception cues. The more strongly a cue is related to deception, the
more likely people are to rely on that cue when judging veracity.
The relation is not perfect but is much stronger than would be
predicted from the premise of the wrong subjective cue hypothesis.
Second, of the more than 50 cues investigated, only slightly more
than a third showed significant discrepancies between judgment
cues and deception cues. Third, when judgment cues did differ
from deception cues, it was typically the case that the judgment
cue matched the deception cue in its directional relation to decep-
tion but that the magnitude of the judgment cue was larger than
that of the deception cue. For only eight out of 57 cues did people
rely on a behavior that was unrelated to deception, and for no
behavior did we observe a directional error (i.e., that judges
associated more of a particular behavior with deception when communicators in fact displayed less of it when lying, or the
other way around). This suggests that people are rarely inaccurate
about the relation between a given behavior and veracity—in cases
in which judgment cues differed from deception cues, the error
was most frequently due to judges’ overestimation of the magni-
tude of the relation rather than outright misconceptions about the
relation between a behavior and deception. Fourth, a lens model
analysis revealed a very strong matching (r = .93) between be-
haviorally based predictions of deception and behaviorally based
predictions of perceived deception.
In light of these findings, we believe the argument that people
are misinformed about cues to deception ought to be revised. The
claim is true in the sense that people’s explicit notions about
deception are largely inaccurate and reflect a stereotype not
supported by research on objective behavioral differences. How-
ever, when mapping the behaviors that actually covary with judg-
ments of deception, a different picture is revealed. People seem
intuitively in tune with the characteristics of deceptive behavior.
Rarely do people overestimate the extent of an individual behav-
ioral link to deception and even more rarely do people rely on cues
that are unrelated to veracity. In conclusion, it seems people’s
intuitive notions about cues to deception are far less flawed than
previously thought.
Implications
Our analysis provides new information about why lie-catchers
often fail. In explaining the lack of accuracy, deception scholars
have operated on the assumption that reliance on incorrect heuris-
tics about deceptive processes limits judgment accuracy. In line
with this assumption, a common recommendation on how to
improve judgment accuracy has been that observers ought to shift
their reliance from invalid cues such as gaze aversion, fidgeting, and posture shifts to cues that have been found to be more valid based
on the scientific literature (Vrij, 2008). Our results suggest that
both the descriptive and prescriptive conclusions about judgment
performance ought to be qualified. Starting with the descriptive
aspect, the analyses of cues to deception judgments in Meta-
Analyses 1–3 show that observers do not in general rely on the
wrong cues to deception. The discrepancy between our results on subjective cues to deception and those obtained with the self-report method is interesting for two reasons. First,
the results indicate that self-reports do not offer valid information
about the true nature of lie-catchers’ decision making. This implies
that if researchers wish to map lie-catchers’ judgments, they ought
to study actual performance, not self-reports about performance.
Second, the discrepancy between self-reported judgment cues and
actual judgment cues informs our basic understanding of processes
of deception detection by suggesting that deception judgments are
largely driven by intuitions that may be inaccessible to the con-
scious mind. People do not seem to know what behaviors they use
when judging veracity. The behaviors they claim to use are largely
inaccurate, but the behaviors they actually rely on show a substan-
tial overlap with objective cues to deception. Simply put, intuition
outperforms explicit notions about deception.
With regard to prescriptive implications, the results provide new
information about how judgment achievement can be improved.
The results from Meta-Analysis 4 suggest that lack of validity in
cues to deception degrades judgment performance more strongly
than improper cue reliance. This suggests that the best way to
improve judgment achievement is to increase behavioral differ-
ences between liars and truth tellers rather than to educate per-
ceivers about actual objective cues to deception. To be fair, in
explaining deception detection inaccuracy, scholars have consis-
tently highlighted the stable finding that cues to deception are
scarce and weak. Nevertheless, attempts to improve deception
detection performance have until recently almost exclusively fo-
cused on altering the strategies used by perceivers to detect de-
ception. A number of studies have been conducted with the pur-
pose of training observers to make more accurate judgments by
informing them of actual cues to deception, by providing
outcome feedback about their performance, or both (Frank &
Feeley, 2003). Such attempts to improve judgment accuracy have
shown either no effects of training or only minor improvement.
Our results provide a new explanation for why such training
programs have largely failed: Informing lie-catchers of objective
cues to deception might be ineffective not because judges are
immune to education but because their use of cues to decep-
tion already largely overlaps with actual cues to deception. Feed-
back could be a way to improve intuitive cue reliance further, but
our results indicate that in order to substantially improve perfor-
mance, it might be more fruitful to manipulate the decision-making
task than to manipulate the decision-making strategies of lie-
catchers. In line with our claim that training observers to rely on
different cues might not be the optimal way to increase perfor-
mance, one study showed that perceivers’ performance was
slightly enhanced by both bogus training (in cues that are not
actually related to deception) and training in actual cues to decep-
tion (Levine, Feeley, McCornack, Hughes, & Harms, 2005). This
suggests that to the extent that training in valid cues to deception
is effective at all, it might be the act of training itself rather than
its content that is responsible for improvements in performance,
possibly by creating more motivated lie-catchers. Future research
aiming to improve judgments through cue information should first
establish empirically that judges indeed rely on the wrong cues. On
the basis of our results, such incorrect cue reliance seems quite
unlikely.
Eliciting Valid Cues to Deception
If lie detection accuracy is more strongly degraded by limitations in behavioral indicators of deception than by improper cue reliance, it makes sense to
attempt to increase behavioral differences between liars and truth
tellers in order to improve the chances of accurate deception
judgments. Recently, several lines of research have taken on this
task with promising results. This research has largely focused on
how to elicit valid cues to deception in interactional settings, most
typically interviews (Levine, Shaw, & Shulman, 2010; Vrij et al.,
2009). First, researchers drawing on the theoretical notion that
lying might be more cognitively demanding than truth telling have
attempted to increase cues to deception by imposing further cog-
nitive load on targets. The idea is that liars would be more
hampered by such cognitively demanding tasks because their
resources are already preoccupied with the cognitive challenge of
lying. In one study, liars and truth tellers were asked to tell the
story in reverse order. Cues to deception were more pronounced
when the story was told in reverse order, and lie-catchers were
more accurate when judging these statements, compared with the
control condition, in which the statement was told in chronological order (Vrij et al., 2008).
Based on similar premises postulating cognitive differences
between liars and truth tellers, a second line of research has
focused on the possibility of eliciting cues to deception by using
the available case information strategically. This research capital-
izes on the fact that liars and truth tellers have different verbal
strategies, in particular when they are unaware of the information
held by the lie-catcher (Hartwig, Granhag, & Strömwall, 2007).
Using a mock theft paradigm in which both liars and truth tellers
touched a briefcase (from which liars then stole a wallet), Hartwig,
Granhag, Strömwall, and Vrij (2005) found that when the infor-
mation that their fingerprints had been found on the briefcase was
disclosed in the beginning of the interview, liars and truth tellers
both gave plausible explanations for being in contact with the
briefcase (e.g., that they just moved it while looking for some-
thing). Lie-catchers could not distinguish between these true and
false denials. In contrast, when the information was strategically
withheld and the interviewer posed questions about it (e.g., ‘Did
you see or touch a briefcase?’), verbal cues such as implausibility
appeared, as liars often offered denials that contradicted the known
information (e.g., by saying they were not close to the briefcase).
A third approach drawing on cognitive differences between liars
and truth tellers is the ACID approach, outlined and studied by K.
Colwell, Hiscock-Anisman, Memon, Taylor, and Prewett (2007).
In this approach, targets are questioned using an interview style
inspired by the Cognitive Interview (CI). The CI was developed to
improve the accuracy and completeness of memory reports and
uses mnemonic techniques based on cognitive psychology. The
ACID approach uses such mnemonics with the assumption that
they will enhance verbal differences between liars and truth tellers.
Mnemonics may provide richer details from truth tellers (for
whom they serve as cues to recall) by probing for specific details
while increasing the challenges for liars who might have difficul-
ties fabricating (or lack the willingness to fabricate) information in
response to these probes. In line with expectations, research has
shown that cues to deception become more pronounced when liars
and truth tellers are questioned with this approach.
Our results have implications for lie detection in the real world.
The findings suggest that people who wish to make accurate
judgments of credibility in everyday life or as part of their pro-
fessions (e.g., customs officers, police officers and other legal
professionals) may benefit from learning about methods to in-
crease cues to deception through interviewing methods such as
those outlined above. That is, rather than learning about the char-
acteristics of deceptive behavior, real world lie-catchers may want
to educate themselves about methods to actively elicit cues to
deception.
Concluding Remarks
We believe this study offers new and intriguing information
about deception of both theoretical and practical importance. First,
our results suggest the novel conclusion that lie-catchers’ intuitive
notions about cues to deception are reasonably accurate. People’s
explicit theories about deceptive behavior seem to exert little
influence over actual decision making, suggesting that implicit
processes not only play a role but might even be dominant in
forming impressions about veracity. This finding fits with a wave
of social cognition research showing that processing of social
information is often driven by automatic rather than controlled
processes (Bargh, 1997; Bargh & Chartrand, 1999). Such research
shows that automaticity in processing might emerge as a function
of practice, also referred to as proceduralization (Fiske & Taylor,
2008). Given the frequency of honesty judgments in everyday life
(DePaulo & Kashy, 1998), automaticity in veracity assessments
makes theoretical sense.
How is it possible that intuitive notions about cues to deception
overlap to such a large extent with actual behavioral cues to
deception? Previous research has suggested that lack of feedback
about the actual veracity of communicators might prevent learning
proper rules from experience (Granhag, Andersson, Strömwall, &
Hartwig, 2004; Hartwig, Granhag, Strömwall, & Andersson, 2004;
Hogarth, 2001; Vrij & Semin, 1996). Given that the current study
shows that actual decision making is less flawed than previously
thought, we might have to reinterpret the role of feedback in
deception judgments. Speculatively, people might receive feed-
back about veracity often enough (perhaps through sources of
information not often captured by the typical laboratory paradigm;
see Park et al., 2002) to shape proper intuitions about deceptive
demeanor. Whether such feedback is indeed the explanation for
judges’ cue reliance is a question for future research.
We do not challenge the robust conclusion that deception de-
tection is often inaccurate. However, we challenge the explanation
proposed in previous research by demonstrating that the case for
the wrong subjective cue hypothesis in the accumulated literature
is quite weak. Lens model analyses show that the strongest con-
straint on performance is the lack of valid behavioral indicators of
deception rather than incorrect cue reliance. That behavioral dif-
ferences between liars and truth tellers are minute is not surprising
given two factors: First, people lie frequently in everyday life and
are therefore likely to be skilled as a function of practice (Vrij,
2008). Second, as emphasized by the self-presentational perspec-
tive on deception, convincing another that one is telling the truth
entails similar tasks for deceptive and truthful communicators.
Both share the motivation to create a credible impression and both
will engage in deliberate efforts to create such an impression
(DePaulo et al., 2003). However, that cues to deception are scarce
is not necessarily a universal fact. Perhaps liars in the majority of
the laboratory research conducted so far are not facing enough
of a challenge to give rise to valid behavioral differences. In most
of these studies, people are asked to provide a statement with no
risk of being challenged about particular details and no risk of
being disproven by external information. New research has shown
that it is possible to increase cues to deception by interventions
based on the theoretical assumption that under certain circum-
stances, deceptive statements might be more cognitively demand-
ing to produce. Our results support these efforts by suggesting that
creating stronger behavioral cues to deception is the key to improving the accuracy of lie judgments.
References
References marked with an asterisk indicate studies included in the
meta-analyses that are discussed in the text. For a complete list, go to
http://dx.doi.org/10.1037/a0023589.supp
Akehurst, L., Köhnken, G., Vrij, A., & Bull, R. (1996). Lay persons’ and
police officers’ beliefs regarding deceptive behavior. Applied Cognitive
Psychology, 10, 461–471. doi:10.1002/(SICI)1099-0720(199612)10:6<461::AID-ACP413>3.0.CO;2-2
Albrechtsen, J. S., Meissner, C. A., & Susa, K. J. (2009). Can intuition
improve deception detection performance? Journal of Experimental
Social Psychology, 45, 1052–1055. doi:10.1016/j.jesp.2009.05.017
Anderson, D. E. (1999). Cognitive and motivational processes underlying
truth bias (Doctoral dissertation). Available from ProQuest Dissertations
and Theses database. (UMI No. 9935030)
Anderson, D. E., DePaulo, B. M., & Ansfield, M. E. (2002). The devel-
opment of deception detection skill: A longitudinal study of same-sex
friends. Personality and Social Psychology Bulletin, 28, 536–545. doi:
10.1177/0146167202287010
Anderson, D. E., DePaulo, B. M., Ansfield, M. E., Tickle, J. J., & Green,
E. (1999). Beliefs about cues to deception: Mindless stereotypes or
untapped wisdom? Journal of Nonverbal Behavior, 23, 67–89. doi:
10.1023/A:1021387326192
Bargh, J. A. (1997). The automaticity of everyday life. In R. S. Wyer Jr.
(Ed.), The automaticity of everyday life: Advances in social cognition
(pp. 1–61). Mahwah, NJ: Erlbaum.
Bargh, J. A., & Chartrand, T. L. (1999). The unbearable automaticity of
being. American Psychologist, 54, 462–479. doi:10.1037/0003-
066X.54.7.462
Bond, C. F., Jr., & DePaulo, B. M. (2006). Accuracy of deception judg-
ments. Personality and Social Psychology Review, 10, 214–234. doi:
10.1207/s15327957pspr1003_2
Bond, C. F., Jr., & DePaulo, B. M. (2008). Individual differences in
judging deception: Accuracy and bias. Psychological Bulletin, 134,
477–492. doi:10.1037/0033-2909.134.4.477
*Bond, C. F., Jr., Omar, A., Pitre, U., Lashley, B. R., Skaggs, L. M., &
Kirk, C. T. (1992). Fishy-looking liars: Deception judgment from ex-
pectancy violation. Journal of Personality and Social Psychology, 63,
969–977. doi:10.1037/0022-3514.63.6.969
Brunswik, E. (1943). Organismic achievement and environmental proba-
bility. Psychological Review, 50, 255–272. doi:10.1037/h0060889
Brunswik, E. (1952). The conceptual framework of psychology. Chicago,
IL: University of Chicago Press.
Cole, T. (2001). Lying to the one you love: The use of deception in
romantic relationships. Journal of Social and Personal Relationships,
18, 107–129. doi:10.1177/0265407501181005
Colwell, K., Hiscock-Anisman, C., Memon, A., Taylor, L., & Prewett, J.
(2007). Assessment Criteria Indicative of Deception (ACID): An inte-
grated system of investigative interviewing and detecting deception.
Journal of Investigative Psychology and Offender Profiling, 4, 167–180.
doi:10.1002/jip.73
Colwell, L. H., Miller, H. A., Miller, R. S., & Lyons, P. M., Jr. (2006). U.S.
police officers’ knowledge regarding behaviors indicative of deception:
Implications for eradicating erroneous beliefs through training. Psychol-
ogy, Crime & Law, 12, 489–503. doi:10.1080/10683160500254839
Cooksey, R. W. (1996). Judgment analysis: Theory, methods, and appli-
cations. New York, NY: Academic Press.
DeGroot, T., & Gooty, J. (2009). Can nonverbal cues be used to make
meaningful personality attributions in employment interviews? Journal
of Business and Psychology, 24, 179–192. doi:10.1007/s10869-009-
9098-0
DePaulo, B. M. (1992). Nonverbal behavior and self-presentation. Psycho-
logical Bulletin, 111, 203–243. doi:10.1037/0033-2909.111.2.203
DePaulo, B. M., Charlton, K., Cooper, H., Lindsay, J. J., & Muhlenbruck,
L. (1997). The accuracy-confidence correlation in the detection of de-
ception. Personality and Social Psychology Review, 1, 346–357. doi:
10.1207/s15327957pspr0104_5
DePaulo, B. M., Jordan, A., Irvine, A., & Laser, P. S. (1982). Age changes
in the detection of deception. Child Development, 53, 701–709. doi:
10.2307/1129383
DePaulo, B. M., & Kashy, D. A. (1998). Everyday lies in close and casual
relationships. Journal of Personality and Social Psychology, 74, 63–79.
doi:10.1037/0022-3514.74.1.63
DePaulo, B. M., Kashy, D. A., Kirkendol, S. E., Wyer, M. M., & Epstein,
J. A. (1996). Lying in everyday life. Journal of Personality and Social
Psychology, 70, 979–995. doi:10.1037/0022-3514.70.5.979
DePaulo, B. M., Lindsay, J. J., Malone, B. E., Muhlenbruck, L., Charlton,
K., & Cooper, H. (2003). Cues to deception. Psychological Bulletin,
129, 74–118. doi:10.1037/0033-2909.129.1.74
DePaulo, B. M., & Morris, W. L. (2004). Discerning lies from truths:
Behavioral cues to deception and the indirect pathway of intuition. In
P. A. Granhag & L. A. Strömwall (Eds.), The detection of deception in
forensic contexts (pp. 15–40). New York, NY: Cambridge University
Press. doi:10.1017/CBO9780511490071.002
DePaulo, B. M., & Rosenthal, R. (1979). Telling lies. Journal of Person-
ality and Social Psychology, 37, 1713–1722. doi:10.1037/0022-
3514.37.10.1713
DePaulo, B. M., Rosenthal, R., Green, C. R., & Rosenkrantz, J. (1982).
Diagnosing deceptive and mixed messages from verbal and nonverbal
cues. Journal of Experimental Social Psychology, 18, 433–446. doi:
10.1016/0022-1031(82)90064-6
*Desforges, D. M., & Lee, T. C. (1995). Detecting deception is not as easy
as it looks. Teaching of Psychology, 22, 128–130. doi:10.1207/
s15328023top2202_10
*Fiedler, K., & Walka, I. (1993). Training lie detectors to use nonverbal
cues instead of global heuristics. Human Communication Research, 20,
199–223. doi:10.1111/j.1468-2958.1993.tb00321.x
Fiske, S. T., & Taylor, S. E. (2008). Social cognition: From brains to
culture. Boston, MA: McGraw-Hill.
Frank, M. G., & Feeley, T. H. (2003). To catch a liar: Challenges for
research in lie detection training. Journal of Applied Communication
Research, 31, 58–75. doi:10.1080/00909880305377
Garrido, E., Masip, J., & Herrero, C. (2004). Police officers’ credibility
judgments: Accuracy and estimated ability. International Journal of
Psychology, 39, 254–275. doi:10.1080/00207590344000411
Gigerenzer, G. (2007). Gut feelings: The intelligence of the unconscious.
New York, NY: Viking.
Global Deception Research Team. (2006). A world of lies. Journal of
Cross-Cultural Psychology, 37, 60–74. doi:10.1177/0022022105282295
Granhag, P. A., Andersson, L. O., Strömwall, L. A., & Hartwig, M. (2004).
Imprisoned knowledge: Criminals’ beliefs about deception. Legal
and Criminological Psychology, 9, 103–119. doi:10.1348/
135532504322776889
Granhag, P. A., & Strömwall, L. A. (Eds.). (2004). The detection of
deception in forensic contexts. New York, NY: Cambridge University
Press. doi:10.1017/CBO9780511490071
Hammond, K. R. (1996). Upon reflection. Thinking & Reasoning, 2,
239–248. doi:10.1080/135467896394537
Hammond, K. R., Hursch, C. J., & Todd, F. J. (1964). Analyzing the
components of clinical inference. Psychological Review, 71, 438–456.
doi:10.1037/h0040736
Hammond, K. R., Wilkins, M. M., & Todd, F. J. (1966). A research
paradigm for the study of interpersonal learning. Psychological Bulletin,
65, 221–232. doi:10.1037/h0023103
Hartwig, M., Granhag, P. A., & Strömwall, L. A. (2007). Guilty and
innocent suspects’ strategies during interrogations. Psychology, Crime &
Law, 13, 213–227. doi:10.1080/10683160600750264
Hartwig, M., Granhag, P. A., Strömwall, L. A., & Andersson, L. O. (2004).
Suspicious minds: Criminals’ ability to detect deception. Psychology,
Crime & Law, 10, 83–95. doi:10.1080/1068316031000095485
*Hartwig, M., Granhag, P. A., Strömwall, L. A., & Vrij, A. (2005).
Detecting deception via strategic disclosure of evidence. Law and Hu-
man Behavior, 29, 469–484. doi:10.1007/s10979-005-5521-x
Hogarth, R. M. (2001). Educating intuition. Chicago, IL: The University of
Chicago Press.
Hogarth, R. M., & Karelaia, N. (2007). Heuristic and linear models of
judgment: Matching rules and environments. Psychological Review,
114, 733–758. doi:10.1037/0033-295X.114.3.733
Hurd, K., & Noller, P. (1988). Decoding deception: A look at the process.
Journal of Nonverbal Behavior, 12, 217–233. doi:10.1007/BF00987489
Hursch, C. J., Hammond, K. R., & Hursch, J. L. (1964). Some method-
ological considerations in multiple-cue probability studies. Psychologi-
cal Review, 71, 42–60. doi:10.1037/h0041729
Jensen, L. A., Arnett, J. J., Feldman, S. S., & Cauffman, E. (2004). The
right to do wrong: Lying to parents among adolescents and emerging
adults. Journal of Youth and Adolescence, 33, 101–112. doi:10.1023/B:
JOYO.0000013422.48100.5a
Johnson, M. K., & Raye, C. L. (1981). Reality monitoring. Psychological
Review, 88, 67–85. doi:10.1037/0033-295X.88.1.67
Juslin, P. N. (2000). Cue utilization in communication of emotion in music
performance: Relating performance to perception. Journal of Experi-
mental Psychology: Human Perception and Performance, 26, 1797–
1812. doi:10.1037/0096-1523.26.6.1797
Karelaia, N., & Hogarth, R. M. (2008). Determinants of linear judgment:
A meta-analysis of lens model studies. Psychological Bulletin, 134,
404–426. doi:10.1037/0033-2909.134.3.404
Kaufmann, E., & Athanasou, J. A. (2009). A meta-analysis of judgment
achievement as defined by the lens model equation. Swiss Journal of
Psychology, 68, 99–112. doi:10.1024/1421-0185.68.2.99
Konishi, S. (1981). Normalizing transformations of some statistics in
multivariate analysis. Biometrika, 68, 647–651. doi:10.1093/biomet/
68.3.647
Kraut, R. E. (1980). Humans as lie detectors: Some second thoughts.
Journal of Communication, 30, 209–216.
Lakhani, M., & Taylor, R. (2003). Beliefs about the cues to deception in
high- and low-stake situations. Psychology, Crime & Law, 9, 357–368.
doi:10.1080/1068316031000093441
Levine, T. R., Feeley, T. H., McCornack, S. A., Hughes, M., & Harms,
C. H. (2005). Testing the effects of nonverbal behavior training on
accuracy in deception detection with the inclusion of a bogus training
control group. Western Journal of Communication, 69, 203–217. doi:
10.1080/10570310500202355
Levine, T. R., Shaw, A., & Shulman, H. C. (2010). Increasing deception
detection accuracy with strategic questioning. Human Communication
Research, 36, 216–231. doi:10.1111/j.1468-2958.2010.01374.x
Lipsey, M. W., & Wilson, D. B. (2001). Practical meta-analysis. Thousand
Oaks, CA: Sage.
Malone, B. E. (2001). Perceived cues to deception: A meta-analytic review
(Unpublished master’s thesis). University of Virginia, Charlottesville,
VA.
Miller, G. A. (1962). Psychology: The science of mental life. New York,
NY: Harper & Row.
Neisser, U. (1967). Cognitive psychology. New York, NY: Appleton-
Century-Crofts.
Nisbett, R. E., & Ross, L. (1980). Human inference: Strategies and
shortcomings of social judgment. Englewood Cliffs, NJ: Prentice Hall.
Nisbett, R. E., & Wilson, T. D. (1977). Telling more than we can know:
Verbal reports on mental processes. Psychological Review, 84, 231–259.
doi:10.1037/0033-295X.84.3.231
Park, H. S., Levine, T. R., McCornack, S. A., Morrison, K., & Ferrara, M.
(2002). How people really detect lies. Communication Monographs, 69,
144–157. doi:10.1080/714041710
Petrinovich, L. (1979). Probabilistic functionalism: A conception of re-
search method. American Psychologist, 34, 373–390. doi:10.1037/0003-
066X.34.5.373
*Ruback, R. B., & Hopper, C. H. (1986). Decision making by parole
interviewers: The effect of case and interview factors. Law and Human
Behavior, 10, 203–214. doi:10.1007/BF01046210
Serota, K. B., Levine, T. R., & Boster, F. J. (2010). The prevalence of lying
in America: Three studies of self-reported lies. Human Communication
Research, 36, 2–25. doi:10.1111/j.1468-2958.2009.01366.x
Sporer, S. L. (2004). Reality monitoring and the detection of deception. In
P. A. Granhag & L. A. Strömwall (Eds.), The detection of deception in
forensic contexts (pp. 64–102). New York, NY: Cambridge University
Press. doi:10.1017/CBO9780511490071.004
Sporer, S. L. (2007). Evaluating witness evidence: The fallacies of intu-
ition. In C. Engel & F. Strack (Eds.), The impact of court procedure on
the psychology of judicial decision making (pp. 111–150). Baden-Baden,
Germany: Nomos Verlag.
Sporer, S. L., & Kupper, B. (1995). Realitätsüberwachung und die Beurteilung des Wahrheitsgehaltes von Erzählungen: Eine experimentelle Studie [Reality monitoring and the judgment of the truthfulness of accounts: An experimental study]. Zeitschrift für Sozialpsychologie, 26,
173–193.
Sporer, S. L., & Schwandt, B. (2006). Paraverbal indicators of deception:
A meta-analytic synthesis. Applied Cognitive Psychology, 20, 421–446.
doi:10.1002/acp.1190
Sporer, S. L., & Schwandt, B. (2007). Moderators of nonverbal indicators
of deception: A meta-analytic synthesis. Psychology, Public Policy, and
Law, 13, 1–34. doi:10.1037/1076-8971.13.1.1
Stenson, H. H. (1974). The lens model with unknown cue structure.
Psychological Review, 81, 257–264. doi:10.1037/h0036334
Strömwall, L. A., & Granhag, P. A. (2003). How to detect deception?
Arresting the beliefs of police officers, prosecutors, and judges. Psy-
chology, Crime & Law, 9, 19–36. doi:10.1080/10683160308138
Strömwall, L. A., Granhag, P. A., & Hartwig, M. (2004). Practitioners’
beliefs about deception. In P. A. Granhag & L. A. Strömwall (Eds.),
The detection of deception in forensic contexts (pp. 229–250).
New York, NY: Cambridge University Press. doi:10.1017/
CBO9780511490071.010
Summers, D. A., & Hammond, K. R. (1966). Inference behavior in
multiple-cue tasks involving both linear and nonlinear relations. Journal
of Experimental Psychology, 71, 751–757. doi:10.1037/h0023122
Taylor, R., & Hick, R. F. (2007). Believed cues to deception: Judgments in
self-generated trivial and serious situations. Legal and Criminological
Psychology, 12, 321–331. doi:10.1348/135532506X116101
Thompson, B. (1995). Stepwise regression and stepwise discriminant
analysis need not apply here: A guidelines editorial. Educational and
Psychological Measurement, 55, 525–534. doi:10.1177/
0013164495055004001
Tucker, L. R. (1964). A suggested alternative formulation in the develop-
ments of Hursch, Hammond, and Hursch and by Hammond, Hursch, and
Todd. Psychological Review, 71, 528–530. doi:10.1037/h0047061
*Vrij, A. (1993). Credibility judgments of detectives: The impact of
nonverbal behavior, social skills and physical characteristics on impres-
sion formation. The Journal of Social Psychology, 133, 601–610. doi:
10.1080/00224545.1993.9713915
Vrij, A. (2008). Detecting lies and deceit: Pitfalls and opportunities (2nd
ed.). New York, NY: Wiley.
Vrij, A., Edward, K., & Bull, R. (2001). Police officers’ ability to detect
deceit: The benefit of indirect deception detection measures. Legal and
Criminological Psychology, 6, 185–196. doi:10.1348/135532501168271
Vrij, A., Leal, S., Granhag, P. A., Fisher, R. P., Sperry, K., Hillman, J., &
Mann, S. (2009). Outsmarting the liars: The benefit of asking unantic-
ipated questions. Law and Human Behavior, 33, 159–166. doi:10.1007/
s10979-008-9143-y
Vrij, A., Mann, S., Fisher, R. P., Leal, S., Milne, R., & Bull, R. (2008).
Increasing cognitive load to facilitate lie detection: The benefit of
recalling an event in reverse order. Law and Human Behavior, 32,
253–265. doi:10.1007/s10979-007-9103-y
Vrij, A., & Semin, G. R. (1996). Lie experts’ beliefs about nonverbal
indicators of deception. Journal of Nonverbal Behavior, 20, 65–80.
doi:10.1007/BF02248715
Zuckerman, M., DePaulo, B. M., & Rosenthal, R. (1981). Verbal and
nonverbal communication of deception. In L. Berkowitz (Ed.), Advances
in experimental social psychology, Vol. 14 (pp. 1–57). New York, NY:
Academic Press.
Zuckerman, M., Koestner, R., & Driver, R. (1981). Beliefs about cues
associated with deception. Journal of Nonverbal Behavior, 6, 105–114.
doi:10.1007/BF00987286
Received February 18, 2010
Revision received February 3, 2011
Accepted February 24, 2011