Truth-Default Theory (TDT): A Theory of Human Deception and Deception Detection

Timothy R. Levine
Journal of Language and Social Psychology, 1-15. © 2014 SAGE Publications.
Published online 23 May 2014. DOI: 10.1177/0261927X14535916
http://jls.sagepub.com/content/early/2014/05/19/0261927X14535916
Abstract
Truth-Default Theory (TDT) is a new theory of deception and deception detection.
This article offers an initial sketch of, and brief introduction to, TDT. The theory
seeks to provide an elegant explanation of previous findings as well as point to new
directions for future research. Unlike previous theories of deception detection, TDT
emphasizes contextualized communication content in deception detection over
nonverbal behaviors associated with emotions, arousal, strategic self-presentation,
or cognitive effort. The central premises of TDT are that people tend to believe
others and that this “truth-default” is adaptive. Key definitions are provided. TDT
modules and propositions are briefly explicated. Finally, research consistent with
TDT is summarized.
Keywords
truth-bias, deception, lying
Truth-Default Theory (TDT) is a new theory of deception and deception detection. As
the name of the theory implies, the key idea is that when humans communicate with
other humans, we tend to operate on a default presumption that what the other person
says is basically honest. The idea that people are typically “truth-biased” is far from
new (cf. McCornack & Parks, 1986; Zuckerman, DePaulo, & Rosenthal, 1981). What
is new is that this presumption of honesty is seen as highly adaptive both for the indi-
vidual and the species. The truth-default enables efficient communication and
cooperation, and the presumption of honesty typically leads to correct belief states
because most communication is honest most of the time. However, the presumption of
honesty makes humans vulnerable to occasional deceit. There are times and situations
when people abandon the presumption of honesty, and the theory describes when people
are expected to suspect a lie, when people conclude that a lie was told, and the
conditions under which people make truth and lie judgments correctly and incorrectly.
The theory also specifies the conditions under which people are typically honest and
the conditions under which people are likely to engage in deception. TDT is logically
compatible with Information Manipulation Theory 2 (IMT2; McCornack, Morrison,
Paik, Wiser, & Zhu, 2014). However, whereas IMT2 is primarily a theory of deceptive
discourse production, TDT is focused more on credibility assessment and deception
detection accuracy and inaccuracy.

Author note: Timothy R. Levine, School of Media and Communication, Korea University,
Media Hall 606, Seoul, Republic of Korea. Email: levinet111@gmail.com
The approach guiding the formation of TDT might be described as abductive sci-
ence. The propositions are all data based, and the explanations were initially articu-
lated so as to offer a coherent account of the existing scientific data. The theory was
not made public until original research supported and replicated every major claim.
Theory-data correspondence is considered paramount, and the theory strives for a high
degree of verisimilitude.
TDT is not only about accurate prediction and post hoc explanation. Good theory
must also be generative. A theory needs to lead to new predictions that no one would
think to make absent the theory. In line with Imre Lakatos (1980), TDT aims to be out
in front of the data, not always chasing data from behind and trying to catch up.
A final notable feature of TDT is that it is modular: TDT is a collection of
quasi-independent mini-theories, models, and effects that are joined by an overarching
logic.
This article offers an article-length sketch of TDT. First, key concepts are defined.
Next, TDT modules are briefly explicated. TDT propositions are then explained.
Finally, data consistent with TDT is briefly summarized.
Definitions
Table 1 provides a full listing of the key constructs which populate TDT and a concep-
tual definition for each construct. Several of the key definitions are briefly discussed
here.
Deception is defined as intentionally, knowingly, and/or purposely misleading
another person. Consistent with IMT2 (McCornack et al., 2014), McNally and Jackson
(2013), and Trivers (2011), deception need not require conscious forethought. While
some deception clearly involves preplanning, a sender may only recognize the decep-
tive nature of their communication after completing the deceptive utterance (see
IMT2, Proposition IS2). In line with Trivers, TDT does not preclude other deception
that also involves self-deception so long as the message has a deception purpose or
function, even if the purpose is unconscious. Thus, deceptive messages involve intent,
awareness, and/or purpose to mislead. Absent deceptive intent, awareness, or purpose,
a message is considered honest.
Table 1. Key TDT Concepts and Definitions.

Deception is intentionally, knowingly, or purposefully misleading another person.

A lie is a subtype of deception that involves outright falsehood, which is consciously
known to be false by the teller, and is not signaled as false to the message recipient.

Honest communication lacks deceptive purpose, intent, or awareness. Honest
communication need not be fully accurate, true, or involve full disclosure.

The truth-lie base-rate refers to the proportion of any set of messages that is honest
versus deceptive. It is the relative prevalence of deception and nondeception in some
defined environment.

Truth-bias is the tendency to actively believe or passively presume that another
person's communication is honest independent of actual honesty.

The truth-default involves a passive presumption of honesty due to a failure to
actively consider the possibility of deceit at all, or as a fallback cognitive state
after a failure to obtain sufficient affirmative evidence for deception.

Honesty judgment involves the belief state that a communication is honest. Honesty
judgments can be passive (truth-default), stemming from a failure to consider the
possibility of deceit; a reversion to the truth-default, stemming from a failure to
meet the threshold for a deception judgment; or active decisions based on exculpatory
evidence.

Deception judgment is an inference that a communication is deceptive or a lie. Unlike
honesty judgments, most deception judgments are active and have an evidentiary basis.

Demeanor refers to a constellation of intercorrelated behaviors that function as a
gestalt, relating to how people present themselves, the image they convey to others,
and how they are perceived by others.

Honest demeanor, a subtype of demeanor, is the tendency to be seen as honest
independent of actual honesty. People vary in the extent to which they have an honest
demeanor.

Suspicion is a state of suspended judgment and uncertainty regarding the honest or
deceptive nature of a communication. It is an intermediate cognitive state between the
passive truth-default and a firm judgment of deceit.

Communication content refers to the substance of what is said and can be contrasted
with demeanor, which involves how something is said.

Communication context refers to the situation in which the communication occurs, the
situation(s) relevant to the communication content, and to the communication as a
whole. Understanding communication content often requires knowledge of context, and
communication content presented without its context can be misleading or uninformative.

Transparency refers to the extent to which the honest and/or deceptive nature of some
communication is apparent to others.

Diagnostically useful information is the extent to which some information can be used
to arrive at a correct inference about the honest and/or deceptive nature of some
communication.

Coherence involves the logical consistency of communication content.

Correspondence involves the consistency between communication content and external
evidence or knowledge.

Deception detection accuracy refers to correctly distinguishing honest from deceptive
communication.

Lies are a subtype of deception that involves deceiving through saying information
known to be false. Other forms of deception include omission, evasion, equivocation,
and generating false conclusions with objectively true information. The specific
linguistic structure of deceptive utterances is considered under the purview of IMT
(McCornack, 1992) and IMT2 (McCornack et al., 2014), and not critical to TDT.
Thus, while it is recognized that lying and deception are not synonymous, different
forms of deception are functionally transposable in TDT and therefore the words lying
and deception are sometimes used interchangeably.
The theory’s namesake and most central idea is the truth-default state. The truth-
default involves a passive presumption of honesty due either to (a) a failure to actively
consider the possibility of deceit at all or (b) as a fallback cognitive state after a failure
to obtain sufficient affirmative evidence for deception. The idea is that as a default,
people presume without conscious reflection that others’ communication is honest.
Because it is a default, it is a passive starting place for making inferences about
communication. The possibility that a message might be deceptive often does not come to
mind unless suspicion is actively triggered. The idea of the truth-default is consistent
with Dan Gilbert’s (1991) Spinozan model of belief in which incoming information is
believed unless subsequently and actively disbelieved. The truth-default is also consis-
tent with Grice’s (1989) logic of conversation wherein people generally presume com-
munication as fundamentally cooperative. That is, people typically make sense of
what others say based on the premise that they are trying to be understood.
A closely related idea is truth-bias, which is defined as the tendency to believe that
another person’s communication is honest independent of its actual honesty (Levine,
Park, & McCornack, 1999; McCornack & Parks, 1986). Truth-bias is empirically quan-
tified as the proportion of messages judged as honest in some defined setting. The truth-
default offers one explanation for the empirical observation of truth-bias, but the
concepts are not interchangeable since truth-bias need not be a cognitive default, and at
least as measured in deception detection experiments, it typically involves a prompted,
active assessment of honesty. In fact, if TDT is correct, truth-bias rates (i.e., the
proportion of messages believed) would be much higher in research if the possibility of
deception were not primed by the research setting and measurement instruments. Knowing
that one is in a deception detection experiment and requiring truth-deception assess-
ments as part of the research protocol should create an active assessment of honesty and
deceit that may often not occur in communication outside the deception lab.
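As described above, truth-bias is quantified empirically as the proportion of messages judged honest in some defined setting. The sketch below is a minimal illustration of that measure; the function name and the judgment data are hypothetical, not from the article:

```python
# Illustrative sketch: quantifying truth-bias as the proportion of
# messages a receiver judges to be honest, independent of actual veracity.

def truth_bias(judgments):
    """Return the proportion of messages judged honest ('truth')."""
    return sum(1 for j in judgments if j == "truth") / len(judgments)

# Hypothetical judgments from one receiver across ten messages
# (in a typical experiment, half of these would actually be lies):
judgments = ["truth", "truth", "lie", "truth", "truth",
             "truth", "lie", "truth", "truth", "truth"]
print(truth_bias(judgments))  # 0.8 -> a pronounced truth-bias
```

A value well above 0.5 on a 50/50 truth-lie message set is the standard empirical signature of truth-bias.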
While most prior theoretical perspectives acknowledge the empirical existence of
truth-bias, truth-bias in pre-TDT theory is typically viewed as an error or bias reflect-
ing flawed judgment. Truth-bias is often depicted as a distorted perceptual state that is
maladaptive and interferes with deception detection accuracy (e.g., Buller & Burgoon,
1996; McCornack & Levine, 1990; McCornack & Parks, 1986). What is new in TDT
is the argument that both the truth-default and the truth-bias that results are functional,
adaptive, and facilitate accuracy in most nonresearch settings.
The reason that the truth-default and truth-bias typically lead to improved accuracy
involves the truth-lie base-rate. The truth-lie base-rate is a key variable that is cur-
rently unique to TDT. The base-rate refers to the relative prevalence of deception and
honesty in some defined environment. In most deception detection experiments, mes-
sage judges are equally likely to be exposed to an honest message as a lie. In TDT, the
base-rate matters and the accuracy of judgments varies predictably based on base-rates as
modeled by the Park–Levine Probability Model (Park & Levine, 2001). TDT specifies
that outside the deception lab, the prevalence of deception is much lower than the
prevalence of honest communication and therefore presuming honesty leads to belief
states that are typically correct.
A third noteworthy departure of TDT from most prior deception theory regards the
relative utility of observable nonverbal behaviors and communication content in decep-
tion detection accuracy. Most prior deception theories (e.g., Buller & Burgoon, 1996;
Ekman, 2009; Ekman & Friesen, 1969; Vrij, Granhag, & Porter, 2010; Zuckerman et
al., 1981) specify that deception can be detected, at least under some conditions (e.g.,
high stakes), through the observation of sender demeanor. That is, prior theories specify
that liars leak emotional states through facial expressions, liars exhibit or can be induced
to exhibit various nonverbal indications of cognitive effort or arousal, and/or liars
engage in various other strategic and nonstrategic behaviors indicative of lying. Careful
attention to these behaviors provides a path to lie detection. TDT, in contrast, specifies
that reliance on demeanor and nonverbal performance tends to push detection accuracy
down toward chance, and that improved accuracy rests on attention to contextualized
communication content. Most lies are detected either through comparing what is said to
what is known or can be known, or through the solicitation of a confession.
Demeanor refers to a constellation of intercorrelated behaviors that function as a
gestalt, relating to how people present themselves, the image they convey to others,
and how they are perceived by others. Honest demeanor, a subtype of demeanor, is the
tendency to be seen as honest independent of actual honesty. People vary in the extent
to which they have an honest demeanor, and honest demeanor is often unrelated to
actual honesty. Communication content refers to the substance of what is said, and can
be contrasted with demeanor which involves how something is said. Communication
context refers to the situation in which the communication occurs, the situation(s)
relevant to the communication content, and to the communication event as a whole.
Understanding communication content often requires knowledge of context, and
communication content presented without its context can be misleading or uninformative.
Diagnostically useful information is the extent to which some information can be used
to arrive at a correct inference about the honest and/or deceptive nature of some
communication. Honest demeanor is specified to have little diagnostic utility. In contrast,
correspondence information is highly diagnostic. Correspondence involves the consis-
tency between communication content and external evidence or message receiver
knowledge.
TDT Modules
As previously mentioned, TDT is composed of several free-standing but logically con-
sistent effects, models, and mini-theories. TDT modules are listed in Table 2. Each of
the modules is (or will be) described in detail in published journal articles or chapters.
Here, each module is briefly summarized and the reader is directed to the work con-
taining the full explication.
Table 2. TDT Modules.
A Few Prolific Liars (or “Outliars,” Serota, Levine, & Boster, 2010)—The prevalence of lying is not
normally or evenly distributed across the population. Most people are honest most of the time. There
are a few people, however, that lie often. Most lies are told by a few prolific liars.
Deception Motives (Levine, Kim, & Hamel, 2010)—People lie for a reason, but the motives behind
truthful and deceptive communication are the same. When the truth is consistent with a person’s
goals, they will almost always communicate honestly. Deception becomes probable when the truth
makes honest communication difficult or inefficient.
The Projected Motive Model (Levine, Kim, & Blair, 2010)—People know that others lie for a reason
and are more likely to suspect deception when they think a person has a reason to lie.
The Veracity Effect (Levine et al., 1999)—People tend to be truth-biased and are more likely to believe
people than to think that others are lying. Because of this bias, accuracy is usually higher for truths
than lies. Consequently, the honesty (i.e., veracity) of communication predicts if the message will be
judged correctly. Honest messages produce higher accuracy than lies.
The Park–Levine Probability Model (Park & Levine, 2001)—Because honest messages yield higher
accuracy than lies (i.e., the veracity effect), the proportion of truths and lies affects accuracy. So long
as people are truth-biased, as the proportion of messages that is honest increases, so does average
detection accuracy. This relationship is linear and predicted as the accuracy for truths times the
proportion of messages that are true plus the accuracy for lies times the proportion of messages
that are lies.
How People Really Detect Lies (Park, Levine, McCornack, Morrison, & Ferrara, 2002)—Outside the
deception lab in everyday life, most lies are detected after-the-fact based on either confessions or the
discovery of some evidence showing that what was said was false. Very few lies are detected in real
time based only on the passive observation of sender nonverbal behavior.
A Few Transparent Liars (Levine, 2010)—The reason that accuracy in typical deception detection
experiments is slightly above chance is that some small proportion of the population are really bad
liars who usually give themselves away. The reason accuracy is not higher is that most people are
pretty good liars and that honest demeanor is uncorrelated with actual honesty for most people.
Sender Honest Demeanor (Levine, Serota, et al., 2011)—There are large individual differences in
believability. Some people come off as honest. Other people are doubted more often. These
differences in how honest different people appear are the result of a combination of 11 different behaviors
and impressions that function together. Honest demeanor has little to do with actual honesty, and
this explains poor accuracy in deception detection experiments.
Content in Context (Blair, Levine, & Shaw, 2010)—Understanding communication requires listening
to what is said and taking that in context. Knowing about the context in which the communication
occurs can help detect lies.
Diagnostic Utility (Levine, Blair, & Clare, 2014)—Some aspects of communication are more
useful than others in detecting deception and some aspects of communication can be misleading
producing systematic errors. Diagnostic utility involves prompting and using useful information
while avoiding useless and misleading behaviors.
Correspondence and Coherence (Reimer, Blair, & Levine, 2014)—Correspondence and coherence are
two types of consistency information that may be used in deception detection. Correspondence has
to do with comparing what is said to known facts and evidence. It involves fact checking. Coherence
involves the logical consistency of communication. Generally speaking, correspondence is more
useful than coherence in deception detection.
Question Effects (Levine, Blair, & Clare, 2014; Levine, Shaw, & Shulman, 2010)—Question effects
involve asking the right questions to yield diagnostically useful information that improves deception
detection accuracy.
Expert Questioning (Levine, Clare, et al., 2014)—Expertise in deception is highly context dependent
and involves knowing how to prompt diagnostically useful information rather than detection by
passive observation of nonverbal communication.
The Few Prolific Liars Model (Serota et al., 2010) makes two key claims. The first
is that deception, relative to honesty, is infrequent. That is, most people are honest
most of the time. Second, the prevalence of lying is not normally or evenly distributed
across the population. The prevalence of lying is positively skewed. Most lies are told
by a few prolific liars.
A second module focuses on when and why people lie. The Deception Motives Module
(Levine, Kim, & Hamel, 2010) specifies that people lie for a reason, but the motives
behind truthful and deceptive communication are the same. When the truth is consistent
with people's goals, they will almost always communicate honestly. Deception
becomes probable when the truth makes honest communication difficult or inefficient.
TDT’s view of deception motives is an area of theoretical overlap with IMT2
(McCornack et al., 2014).
On the message recipient side, the Projected Motive Model (Levine, Kim, & Blair,
2010) specifies that people know that others lie for a reason and are more likely to
suspect deception when they think a person has a reason to lie. A projected motive
provides a trigger that can kick people out of the truth-default state.
The Veracity Effect (Levine et al., 1999) refers to the empirical finding that the
veracity of the message judged predicts the accuracy of the judgment. In most decep-
tion detection experiments, accuracy is higher for truths than lies. The veracity effect
stems from truth-bias, and when the truth-default is in place, the veracity effect is
predicted to be especially large. The passive presumption of honesty leads people to
correctly believe honest communication, but lies go unnoticed as long as no trigger
event leads to the abandonment of the truth-default.
The Park–Levine Probability Model (Park & Levine, 2001) allows for predicting
the implications of the veracity effect on deception detection accuracy for different
truth-lie base-rates. So long as people are truth-biased, as the proportion of messages
that is honest increases, so does average detection accuracy. This relationship is linear
and predicted as the accuracy for truths times the proportion of messages that are true
plus the accuracy for lies times the proportion of messages that are lies.
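The verbal formula above can be written compactly as overall accuracy = a_T × p + a_L × (1 − p), where a_T and a_L are the accuracies for truths and lies and p is the proportion of messages that are honest. The sketch below restates that linear prediction in code; the particular accuracy values are illustrative choices, not estimates from the model:

```python
def predicted_accuracy(acc_truth, acc_lie, p_truth):
    """Park-Levine prediction: overall accuracy is a base-rate-weighted
    average of accuracy for truths and accuracy for lies."""
    return acc_truth * p_truth + acc_lie * (1.0 - p_truth)

# With a veracity effect (truths judged more accurately than lies),
# predicted accuracy rises linearly with the honest-message base-rate:
for p in (0.0, 0.5, 1.0):
    print(p, round(predicted_accuracy(0.7, 0.4, p), 3))
```

At a 50/50 base-rate the prediction is simply the average of the two accuracies; when every message is honest, it equals truth accuracy, which is why accuracy climbs as the environment becomes more honest.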
Prior deception detection research has found that people are statistically better than
chance at distinguishing truths from lies, but are seldom much better than chance
(Bond & DePaulo, 2006). This is demonstrated by the well-known and often-cited
54% accuracy level reported by meta-analysis (Bond & DePaulo, 2006). Three mod-
ules in TDT seek to explain the slightly-better-than-chance accuracy findings that are
so well documented in the literature.
The A Few Transparent Liars (Levine, 2010) module speculates that the reason that
accuracy in typical deception detection experiments is slightly above chance is that
some small proportion of the population are really bad liars who usually give them-
selves away. That is, most people are good liars and people generally cannot tell if they
are honest or not. But, a few people cannot lie well. The transparent liars ensure that
accuracy is just above chance because people tend to catch the lies of these poor liars.
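The arithmetic behind this explanation can be sketched with assumed numbers. All values below are illustrative choices of mine, not estimates from the TDT literature; the point is only that a small share of transparent liars is enough to lift average accuracy slightly above 50%:

```python
# Illustrative arithmetic: a few transparent liars pull average accuracy
# slightly above chance. Every number here is an assumption for illustration.

p_transparent = 0.10      # assumed share of senders who are "bad liars"
catch_transparent = 0.90  # assumed rate at which their lies are caught
acc_truth = 0.60          # truth-biased judges usually believe truths
acc_lie_opaque = 0.40     # good liars' lies are usually believed

# Average lie accuracy across transparent and opaque liars:
acc_lie = (p_transparent * catch_transparent
           + (1 - p_transparent) * acc_lie_opaque)

# Overall accuracy at the 50/50 truth-lie base-rate typical of experiments:
overall = 0.5 * acc_truth + 0.5 * acc_lie
print(round(acc_lie, 3), round(overall, 3))
```

With these assumptions, overall accuracy comes out at 52.5%, in the neighborhood of the slightly-above-chance levels the module seeks to explain.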
Second, the Sender Honest Demeanor module (Levine, Serota, et al., 2011)
explains the accuracy ceiling observed in the literature (i.e., why accuracy is not much
better than chance). There are large individual differences in believability. Some peo-
ple come off as honest. Other people are doubted more often. These differences in
honesty impressions are a function of a combination of 11 different behaviors that
function as a gestalt. Honest demeanor has little to do with actual honesty, and this
explains poor accuracy in deception detection experiments. In short, reliance on
demeanor ensures a small signal-to-noise ratio, and near-chance detection accuracy.
Third, the How People Really Detect Lies module (Park et al., 2002) holds that
outside the deception lab in everyday life, most lies are detected well after-the-fact—
based on either confessions or the discovery of some evidence showing that what was
said was false. Very few lies are detected in real time based only on the passive obser-
vation of sender nonverbal behavior. This partially explains poor accuracy in decep-
tion detection experiments as being the result of requiring subjects to detect deception
in ways other than how lies are typically detected. Park et al. (2002) also point to how
deception detection accuracy might be improved, namely, the solicitation of confes-
sions and the application of evidence.
Five additional modules focus on how deception can be accurately detected. These
include Content in Context (Blair et al., 2010), Diagnostic Utility (Levine, Blair, et al.,
2014), Correspondence and Coherence (Reimer et al., 2014), Question Effects (Levine,
Blair, et al., 2014; Levine, Shaw, et al., 2010), and Expert Questioning (Levine, Clare,
et al., 2014). These modules emphasize the use of evidence, the reliance on contextual-
ized communication content, and the active prompting of diagnostic communication
content through strategic questioning of a potential liar.
Logical Structure
TDT provides an overarching logical structure that ties together the various models
into a coherent theoretical package. Table 3 provides the 14 propositions that reflect
the key predictions of the theory and the theory’s logical flow. This section provides a
brief narrative description of the logical structure of TDT.
Humans are a social species, and our individual and collective survival requires
coordination, cooperation, and communication (at least within important in-groups).
Efficient communication requires a presumption of honesty. If the veracity of all
incoming messages had to be scrutinized and questioned, communication would lose
efficiency and efficacy for coordination. The presumption of honest communication,
however, comes at a cost. It makes us vulnerable, at least in the short term, to decep-
tion and exploitation. But, at the core of TDT is the view that the tradeoff between
efficient communication and vulnerability to occasional deceit is more than worth it.
That is, the benefits gained through efficient communication and in-group cooperation
vastly outweigh the costs of occasional deception both for the individual and the
collective.
Many evolutionary perspectives on human deception assert that because humans
have evolved the ability to deceive others, humans also must have evolved the ability
to detect lies. There is, however, a more efficient solution—deterrence. It is proposed
that all human cultures develop prohibitions against deception, at least within
Table 3. TDT Propositions.
1. Most communication by most people is honest most of the time. While deception can and does
occur, in comparison to honest messages, deception is relatively infrequent, and outright lies are
more infrequent still. In fact, deception must be infrequent to be effective.
2. The prevalence of deception is not normally distributed across the population. Most lies are told by
a few prolific liars.
3. Most people believe most of what is said by most other people most of the time. That is, most people
can be said to be truth-biased most of the time. Truth-bias results from, in part, a default cognitive
state. The truth-default state is pervasive but it is not an inescapable cognitive state. Truth-bias and the
truth-default are adaptive both for the individual and the species. They enable efficient communication.
4. Furthermore, because of Proposition 1, the presumption of honesty specified in Proposition 3 is
usually correct. Truth bias, however, makes people vulnerable to occasional deception.
5. Deception is purposive. Absent psychopathology, people lie for a reason. Deception, however, is
usually not the ultimate goal, but instead a means to some other ends. That is, deception is typically
tactical. Specifically, most people are honest unless the truth thwarts some desired goal or goals. The
motives or desired goals achieved through communication are the same for honest and deceptive
communications, and deception is reserved for situations where honesty would be ineffectual,
inefficient, and/or counterproductive in goal attainment.
6. People understand that others' deception is usually purposive, and are more likely to consider a
message as potentially or actually deceptive under conditions where the truth may be inconsistent
with a communicator’s desired outcomes. That is, people project motive states on others and this
affects suspicion and judgments of honesty and deceit.
7. The truth-default state requires a trigger event to abandon it. Trigger events include, but are not
limited to (a) a projected motive for deception, (b) behavioral displays associated with dishonest
demeanor, (c) a lack of coherence in message content, (d) a lack of correspondence between
communication content and some knowledge of reality, or (e) information from a third party
warning of potential deception.
8. If a trigger or set of triggers is sufficiently potent, a threshold is crossed, suspicion is generated, the
truth-default is at least temporarily abandoned, the communication is scrutinized, and evidence is
cognitively retrieved and/or sought to assess honesty-deceit.
9. Based on information of a variety of types, an evidentiary threshold may be crossed and a message
may be actively judged to be deceptive. The information used to assess honesty and deceit includes,
but is not limited to (a) communication context and motive, (b) sender demeanor, (c) information
from third parties, (d) communication coherence, and (e) correspondence information. If the
evidentiary threshold for a lie judgment is not crossed, an individual may continue to harbor
suspicion or revert to the truth-default. If exculpatory evidence emerges, active judgments of
honesty are made.
10. Triggers and deception judgments need not occur at the time of the deception. Many deceptions are
suspected and detected well after the fact.
11. With the exception of a few transparent liars, deception is not accurately detected, at the time at
which it occurs, through the passive observation of sender demeanor. Honest-looking and deceptive-
looking communication performances are largely independent of actual honesty and deceit for most
people and hence usually do not provide diagnostically useful information. Consequently, demeanor-
based deception detection is, on average, only slightly better than chance due to a few transparent
liars, but typically not much above chance due to the fallible nature of demeanor-based judgments.
12. In contrast, deception is most accurately detected through either (a) subsequent confession by
the deceiver or (b) by comparison of the contextualized communication content to some external
evidence or preexisting knowledge.
13. Both confessions and diagnostically informative communication content can be produced by effective
context-sensitive questioning of a potentially deceptive sender. Ill-conceived questioning, however,
can backfire and produce below-chance accuracy.
14. Expertise in deception detection rests on knowing how to prompt diagnostically useful information
rather than skill in the passive observation of sender behavior.
important in-groups. Parents everywhere teach their children not to lie. Every major
world religion prohibits deception, as do most legal systems. Furthermore, recent evolutionary
perspectives on the development of human deception note that deception
must be infrequent to evolve (McNally & Jackson, 2013; Trivers, 2011) and that
deception coevolves with cooperation (McNally & Jackson, 2013).
This line of reasoning leads to the first four propositions. These propositions hold
that lying is much less prevalent than honesty, that most lies are told by a few prolific
liars, that people tend to believe others, and that presuming honesty makes sense
because most communication is honest. The catch is that the presumption of honesty
makes humans vulnerable to occasional deceit.
Because deception is discouraged, people need a reason to lie (Proposition 5).
People are generally honest unless the truth thwarts a goal state. Others know that
people lie for a reason (Proposition 6) and thus a projected motive for deceit is one
type of trigger event that can lead people to abandon the truth-default.
So, people tend to presume that others are honest. However, the truth-default state
is not inescapable. Proposition 7 holds that trigger events of various sorts can lead
people to abandon the truth-default state. Trigger events include, but are not limited to,
(a) a projected motive for deception, (b) behavioral displays associated with dishonest
demeanor, (c) a lack of coherence in message content, (d) a lack of correspondence
between communication content and some knowledge of reality, or (e) information
from a third party warning of potential deception. Proposition 8 specifies that if a trig-
ger or set of triggers is sufficiently potent, a threshold is crossed, suspicion is gener-
ated, the truth-default is at least temporarily abandoned, the communication is
scrutinized, and evidence is cognitively retrieved and/or sought to assess honesty-
deceit. Proposition 9 states that based on information of a variety of types, an eviden-
tiary threshold may be crossed and a message may be actively judged to be deceptive.
The information used to assess honesty and deceit includes, but is not limited to, (a)
communication context and motive, (b) sender demeanor, (c) information from third
parties, (d) communication coherence, and (e) correspondence information. If the evi-
dentiary threshold for a lie judgment is not crossed, an individual may continue to
harbor suspicion or revert to the truth-default. If exculpatory evidence emerges, active
judgments of honesty are made.
Propositions 8 and 9 specify two thresholds: one for abandoning the truth-default
and the second for actively inferring deception. It is presumed that the threshold for
triggering the abandonment of the truth-default is more sensitive than the threshold for
inferring deceit. In between the two thresholds, suspicion of deception exists. Suspicion
is viewed as a state of uncertainty where the possibility of deception is entertained. It
is a state of suspended belief. The suspicion state will not be retained indefinitely, and
either evidence is obtained sufficient to cross the second threshold and infer deceit, or
the person will eventually revert to the truth-default.
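The two-threshold structure can be read as a simple decision rule. The sketch below is only an illustrative formalization, not part of the theory itself: the numeric evidence scale and the threshold values (0.3 and 0.7) are hypothetical, chosen solely to show how the truth-default, suspicion, and lie-judgment states partition accumulated trigger evidence.

```python
def veracity_state(evidence, suspicion_threshold=0.3, lie_threshold=0.7):
    """Classify a message given accumulated trigger evidence in [0, 1].

    Below the first threshold the truth-default holds; between the two
    thresholds belief is suspended (suspicion); at or above the second
    threshold an active lie judgment is made.
    """
    if evidence < suspicion_threshold:
        return "truth-default"
    if evidence < lie_threshold:
        return "suspicion"
    return "lie judgment"

print(veracity_state(0.1))  # truth-default
print(veracity_state(0.5))  # suspicion
print(veracity_state(0.9))  # lie judgment
```

Because the suspicion threshold is the more sensitive of the two, the middle region models the state of suspended belief described above; the theory further holds that this state is temporary, eventually resolving toward either threshold.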
In line with Park et al. (2002), Proposition 10 adds the qualification that triggers
and deception judgments need not occur at the time of the deception. Many deceptions
are suspected and detected well after the fact.
Based on Park et al. (2002), Levine (2010), and Levine, Serota, et al. (2011),
Proposition 11 states that with the exception of a few transparent liars, deception is not
accurately detected, at the time at which it occurs, through the passive observation of
sender demeanor. Honest-looking and deceptive-looking communication perfor-
mances are largely independent of actual honesty and deceit for most people, and
hence usually do not provide diagnostically useful information. Consequently,
demeanor-based deception detection is, on average, only slightly better than chance
due to a few transparent liars, but typically not much above chance due to the fallible
nature of demeanor-based judgments.
The final set of three propositions specifies the conditions under which deception
can be detected accurately. According to Proposition 12, deception is most accurately
detected through either (a) subsequent confession by the deceiver or (b) by compari-
son of the contextualized communication content to some external evidence or preex-
isting knowledge. Proposition 13 extends this line of thinking by specifying that both
confessions and diagnostically informative communication content can be produced
by effective context-sensitive questioning of a potentially deceptive sender. Ill-
conceived questioning, however, can backfire and produce below-chance accuracy.
Finally, the last proposition holds that expertise in deception detection rests on knowing
how to prompt diagnostically useful information rather than skill in the passive
observation of sender behavior.
Summary of Empirical Evidence
Clare (2013; Clare & Levine, 2014) provided evidence consistent with core premises
of TDT regarding the existence and pervasiveness of a truth-default state. Clare
exposed participants to true and false, plausible and implausible message content in
either face-to-face interaction or videotaped interviews. At times participants were
asked to make explicit veracity judgments as is typical in deception detection experi-
ments. Other times participants were asked to thought-list what they were thinking.
Order was experimentally varied, so that some participants did the thought listing first,
while others were asked about veracity first, priming the possibility of deceit. Although
participants demonstrated truth-bias in all experimental conditions, unprimed participants
were substantially less likely to explicitly mention honesty or deception. In
the unprimed conditions, fewer than 5% of participants explicitly
mentioned considering veracity or deception. These findings are consistent with
Proposition 3 specifying the existence of truth-bias and the truth-default state and
Proposition 7 stating that a trigger event is required to abandon the truth-default.
Serota et al. (2010) reported three studies consistent with Propositions 1 and 2. In an
N = 1,000 representative nationwide sample, the distribution of reported lies was
highly positively skewed, with most people reporting few lies (the mode was zero in the past
24 hours) and a few prolific liars telling the most lies. These findings were replicated
with a college student sample and a reanalysis of previously published diary studies.
The results have subsequently been further replicated in the United Kingdom (Serota
& Levine, 2014), The Netherlands (Halevy, Shalvi, & Verschuere, 2014), and with a
sample of U.S. high school students (Levine, Serota, Carey, & Messer, 2013).
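The qualitative shape of this distribution is easy to illustrate by simulation. Everything in the sketch below is hypothetical: the 95/5 mixture and the mean lie counts are invented only to reproduce the reported pattern (a modal count of zero and a small minority of prolific liars accounting for a large share of all lies); they are not estimates from Serota et al. (2010).

```python
import random
from collections import Counter

random.seed(42)

def geometric(mean):
    """Sample a nonnegative count with the given mean (geometric law)."""
    p = 1.0 / (1.0 + mean)  # success probability; mean = (1 - p) / p
    k = 0
    while random.random() > p:
        k += 1
    return k

def lies_per_day():
    # Hypothetical mixture: 95% "typical" communicators with a low-mean
    # count, 5% "prolific liars" with a much higher mean.
    return geometric(0.7) if random.random() < 0.95 else geometric(15.0)

sample = [lies_per_day() for _ in range(10_000)]
mode = Counter(sample).most_common(1)[0][0]
top_5_percent = sorted(sample, reverse=True)[: len(sample) // 20]
print("modal lies per day:", mode)
print("share of all lies told by top 5%:", sum(top_5_percent) / sum(sample))
```

Under these invented parameters the mode is zero and the most prolific 5% of the simulated population tells roughly half of all lies, mirroring the strong positive skew reported in the prevalence studies.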
Truth-bias (Proposition 3) is very well established. It is evidenced in meta-analysis
(Bond & DePaulo, 2006) as well as in primary experimental evidence (Levine et al.,
1999). Consistent with Proposition 4, research also shows that as the proportion of
messages that are honest increases, detection accuracy increases proportionally
(Levine et al., 1999; Levine, Kim, Park, & Hughes, 2006; Levine, Clare, Green,
Serota, & Park, 2014).
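This linear relationship follows from Park and Levine's (2001) probability model: overall accuracy is a base-rate-weighted average of accuracy for truths and accuracy for lies. A minimal sketch follows; the two conditional accuracy values are hypothetical, chosen only to mimic a truth-biased judge who classifies truths well and lies poorly.

```python
def overall_accuracy(truth_base_rate, truth_accuracy, lie_accuracy):
    """Park-Levine probability model: expected accuracy is a
    base-rate-weighted average of truth accuracy and lie accuracy,
    and is therefore linear in the truth-lie base rate."""
    return (truth_base_rate * truth_accuracy
            + (1.0 - truth_base_rate) * lie_accuracy)

# Hypothetical judge: 80% correct on truths, 35% correct on lies.
for base_rate in (0.0, 0.25, 0.50, 0.75, 1.0):
    acc = overall_accuracy(base_rate, 0.80, 0.35)
    print(f"{base_rate:.2f} truths -> {acc:.3f} expected accuracy")
```

As the proportion of honest messages rises from 0 to 1, expected accuracy climbs linearly from the (low) lie accuracy toward the (high) truth accuracy, which is the veracity effect documented in the base-rate experiments cited above.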
Data consistent with Proposition 5 are provided in three experiments reported by
Levine, Kim, and Hamel (2010). When the truth is in line with communicative goals,
honesty is nearly universal. Deception occurs frequently, but is not universal, when the
truth makes goal attainment difficult. Levine, Kim, and Hamel (2010) also show that
the pursuit of the same communicative goals guides both honest and deceptive messages.
People are honest when the truth aligns with a speaker’s goals and deceptive
when the truth interferes with goal attainment. Thus, deceptive message production
does not arise for goals unique to honesty or deception.
Levine, Kim, and Blair (2010) provide evidence from three experiments that are in
line with Proposition 6. Operating from a projected motive model, it was predicted and
found that confessions tend to be almost universally believed, whereas denials of
transgression are more often doubted. There is no obvious motive to falsely confess to
a transgression, but there is motive to lie when denying a transgression.
A series of studies provide evidence consistent with Propositions 7 to 9. McCornack
and Levine (1990) and Kim and Levine (2011) show that third party prompting of
suspicion reduces truth-bias. Levine, Kim, and Blair (2010) show that truth-bias is
exceptionally strong in the absence of apparent motive but is reduced substantially
when a motive is apparent. Levine, Serota, et al. (2011) show that honest-dishonest
demeanor is strongly and predictably related to the attribution of truth and honesty.
Park et al. (2002) find that outside the lab, most discovered deception involves confes-
sions or comparison of communication content with external evidence.
Consistent with Proposition 10, Park et al. (2002) found that lies are frequently
detected well after the fact. Circumstantial evidence for the few transparent liars claim
in Proposition 11 is summarized in Levine (2010). Evidence for slightly-better-than-
chance demeanor-based detection is well documented in meta-analysis (e.g., Bond &
DePaulo, 2006). Evidence for the rest of Proposition 11 was consistently obtained in a
series of eight experiments reported by Levine, Serota, et al. (2011). Sender demeanor
was found to vary substantially across individuals, to be highly predictive of honesty-
deception judgments across student, nonstudent, and cross-cultural replications, and to
be largely independent of actual honesty.
Evidence for Proposition 12 was initially obtained by Park et al. (2002) who
reported that the vast majority of lies are detected either through confession or through
the application of evidence. Experimental evidence was produced in a series of 10
studies by Blair et al. (2010), documenting substantially improved accuracy using the
content in context approach to lie detection.
Initial experimental evidence for Proposition 13 was reported by Levine, Shaw, et
al. (2010). Those findings were subsequently replicated and extended in a series of six
experiments by Levine, Blair, et al. (2014).
Data consistent with Proposition 14 are reported by Levine, Clare, et al. (2014).
When experts were allowed to freely question potential cheaters, the experts obtained
accuracy of more than 90%.
Conclusion
The central idea behind truth-default theory is that people tend to presume that other
people communicate honestly most of the time. The presumption of honesty enables
efficient communication and cooperation. Furthermore, since most people are honest
most of the time, believing others usually results in correct belief states. However, people
sometimes try to deceive others. People may become suspicious of others when others
have an obvious motive for deception, when they lack an honest demeanor, when they
are primed to expect deception by third parties, or when the communication content
appears either self-contradictory or inconsistent with known facts. When people rely
on demeanor to infer deception, accuracy is typically poor, only slightly better than
chance. However, reliance on content in context improves accuracy substantially.
Accuracy can be further improved with strategic questioning that prompts diagnosti-
cally useful information.
Acknowledgment
David Clare, Rachel Kim, J. Pete Blair, Steve McCornack, Torsten Reimer, Kim Serota, and
Hee Sun Park made substantial and valuable contributions to the development and testing of
Truth-Default Theory.
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interests with respect to the authorship and/or
publication of this article.
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship,
and/or publication of this article: The National Science Foundation and the Federal Bureau of
Investigation provided financial support for the research leading to and testing Truth-Default
Theory.
References
Blair, J. P., Levine, T. R., & Shaw, A. J. (2010). Content in context improves deception detec-
tion accuracy. Human Communication Research, 36, 423-442.
Bond, C. F., Jr., & DePaulo, B. M. (2006). Accuracy of deception judgments. Personality and
Social Psychology Review, 10, 214-234.
Buller, D. B., & Burgoon, J. K. (1996). Interpersonal deception theory. Communication Theory,
6, 203-242.
Clare, D. (2013). Spontaneous, unprompted deception detection judgments (Unpublished pre-
liminary paper). East Lansing: Michigan State University.
Clare, D., & Levine, T. R. (2014). Spontaneous, unprompted deception detection judgments
(Manuscript in preparation). East Lansing: Michigan State University.
Ekman, P. (2009). Telling lies. New York, NY: W. W. Norton.
Ekman, P., & Friesen, W. V. (1969). Nonverbal leakage and clues to deception. Psychiatry, 32,
88-106.
Gilbert, D. (1991). How mental systems believe. American Psychologist, 46, 107-119.
Grice, H. P. (1989). Studies in the way of words. Cambridge, MA: Harvard University Press.
Halevy, R., Shalvi, S., & Verschuere, B. (2014). Being honest about dishonesty: Correlating
self-reports and actual lying. Human Communication Research, 40, 54-72.
Kim, R. K., & Levine, T. R. (2011). The effect of suspicion on deception detection accuracy:
Optimal level or opposing effects? Communication Reports, 24, 51-62.
Lakatos, I. (1980). The methodology of scientific research programmes. Cambridge, England:
Cambridge University Press.
Levine, T. R. (2010). A few transparent liars. In C. Salmon (Ed.), Communication Yearbook 34
(pp. 41-62). Thousand Oaks, CA: Sage.
Levine, T. R., Blair, J. P., & Clare, D. (2011). Expertise in deception detection involves actively
prompting diagnostic information rather than passive behavioral observation. Paper pre-
sented at the annual meeting of the National Communication Association, New Orleans,
LA.
Levine, T. R., Blair, J. P., & Clare, D. (2014). Diagnostic utility: Experimental demonstrations
and replications of powerful question effects and smaller question by experience interac-
tions in high stake deception detection. Human Communication Research, 40, 262-289.
Levine, T. R., Clare, D., Blair, J. P., McCornack, S. A., Morrison, K., & Park, H. S. (2014).
Expertise in deception detection involves actively prompting diagnostic information rather
than passive behavioral observation. Human Communication Research. (In press).
Levine, T. R., Clare, D. D., Green, T., Serota, K. B., & Park, H. S. (2014). The effects of truth-
lie base rate on interactive deception detection accuracy. Human Communication Research.
Advance online publication.
Levine, T. R., Kim, R. K., & Blair, J. P. (2010). (In)accuracy at detecting true and false confes-
sions and denials: An initial test of a projected motive model of veracity judgments. Human
Communication Research, 36, 81-101.
Levine, T. R., Kim, R. K., & Hamel, L. M. (2010). People lie for a reason: An experimental test
of the principle of veracity. Communication Research Reports, 27, 271-285.
Levine, T. R., Kim, R. K., Park, H. S., & Hughes, M. (2006). Deception detection accuracy
is a predictable linear function of message veracity base-rate: A formal test of Park and
Levine’s probability model. Communication Monographs, 73, 243-260.
Levine, T. R., Park, H. S., & McCornack, S. A. (1999). Accuracy in detecting truths and lies:
Documenting the “veracity effect.” Communication Monographs, 66, 125-144.
Levine, T. R., Serota, K. B., Carey, F., & Messer, D. (2013). Teenagers lie a lot: A further
investigation into the prevalence of lying. Communication Research Reports, 30, 211-220.
Levine, T. R., Serota, K. B., Shulman, H., Clare, D. D., Park, H. S., Shaw, A. S., Shim, J. C.,
& Lee, J. H. (2011). Sender demeanor: Individual differences in sender believability have
a powerful impact on deception detection judgments. Human Communication Research,
37, 377-403.
Levine, T. R., Shaw, A., & Shulman, H. (2010). Increasing deception detection accuracy with
strategic questioning. Human Communication Research, 36, 216-231.
McCornack, S. A. (1992). Information manipulation theory. Communication Monographs, 59,
1-16.
McCornack, S. A., & Levine, T. R. (1990). When lovers become leery: The relationship between
suspicion and accuracy in detecting deception. Communication Monographs, 57, 219-230.
McCornack, S. A., Morrison, K., Paik, J. E., Wiser, A. M., & Zhu, X. (2014). Information
manipulation theory 2: A propositional theory of deceptive discourse production. Journal
of Language and Social Psychology.
McCornack, S. A., & Parks, M. R. (1986). Deception detection and relationship development:
The other side of trust. In M. L. McLaughlin (Ed.), Communication Yearbook 9 (pp. 377-
389). Beverly Hills, CA: Sage.
McNally, L., & Jackson, A. L. (2013). Cooperation creates selection for tactical deception.
Proceedings of the Royal Society B, 280, 1-7.
Park, H. S., & Levine, T. R. (2001). A probability model of accuracy in deception detection
experiments. Communication Monographs, 68, 201-210.
Park, H. S., Levine, T. R., McCornack, S. A., Morrison, K., & Ferrerra, M. (2002). How people
really detect lies. Communication Monographs, 69, 144-157.
Reimer, T., Blair, J. P., & Levine, T. R. (2014). The role of consistency in detecting deception:
The superiority of correspondence over coherence (Unpublished manuscript). Purdue
University, West Lafayette, IN.
Serota, K. B., & Levine, T. R. (2014). A few prolific liars: Variation in the prevalence of lying.
Journal of Language and Social Psychology.
Serota, K. B., Levine, T. R., & Boster, F. J. (2010). The prevalence of lying in America: Three
studies of reported deception. Human Communication Research, 36, 1-24.
Trivers, R. (2011). The folly of fools: The logic of deceit and self-deception in human life. New
York, NY: Basic.
Vrij, A., Granhag, P. A., & Porter, S. B. (2010). Pitfalls and opportunities in nonverbal and
verbal lie detection. Psychological Science in the Public Interest, 11, 89-121.
Zuckerman, M., DePaulo, B. M., & Rosenthal, R. (1981). Verbal and nonverbal communication
of deception. In L. Berkowitz (Ed.), Advances in experimental social psychology (Vol. 14,
pp. 1-59). New York, NY: Academic Press.
Author Biography
Timothy R. Levine is a professor in the School of Media and Communication, Korea University,
Seoul, South Korea. He has published extensively on the topics of deception, credibility assess-
ment, and interpersonal communication.
by guest on May 28, 2014jls.sagepub.comDownloaded from
... First, we hypothesized that for CI users, the deficit of perception of speaker intentions would be particularly salient for the perception of insincere (sarcasm, teasing) relative to sincere intentions. Besides the "truth bias", where we often assume that speakers are telling the truth [30], the insincere intentions may be particularly difficult for CI users. They must decode the possibly more complex and variable prosody in the insincere utterances to determine if they are incongruent with content [6,7]. ...
... The effect was more prominent when the stimulus did not provide a verbal context. One reason for this finding could be a so-called "truth bias" [30]. It posits that by default, we assume that conversation partners tell the truth, i.e., that they mean what they said. ...
Article
Understanding insincere language (sarcasm and teasing) is a fundamental part of communication and crucial for maintaining social relationships. This can be a challenging task for cochlear implant (CIs) users who receive degraded suprasegmental information important for perceiving a speaker's attitude. We measured the perception of speaker sincerity (literal positive, literal negative, sarcasm, and teasing) in 16 adults with CIs using an established video inventory. Participants were presented with audio-only and audio-visual social interactions between two people with and without supporting verbal context. They were instructed to describe the content of the conversation and answer whether the speakers meant what they said. Results showed that subjects could not always identify speaker sincerity, even when the content of the conversation was perfectly understood. This deficit was greater for perceiving insincere relative to sincere utterances. Performance improved when additional visual cues or verbal context cues were provided. Subjects who were better at perceiving the content of the interactions in the audio-only condition benefited more from having additional visual cues for judging the speaker's sincerity, suggesting that the two modalities compete for cognitive recourses. Perception of content also did not correlate with perception of speaker sincerity, suggesting that what was said vs. how it was said were perceived using unrelated segmental versus suprasegmental cues. Our results further showed that subjects who had access to lower-order resolved harmonic information provided by hearing aids in the contralateral ear identified speaker sincerity better than those who used implants alone. These results suggest that measuring speech recognition alone in CI users does not fully describe the outcome. Our findings stress the importance of measuring social communication functions in people with CIs.
... The effect of stretch goals on destructive leadership can be explained by deception theory (Levine, 2014). The concept of deception is defined as a message consciously conveyed by the sender to help create false beliefs or conclusions in the recipient (Buller & Burgoon, 1996). ...
... Individuals who have not succeeded in achieving stretch goals try to present information in the form of reports that are different from the truth. According to Levine (2014), when people have information that they consider too problematic to disclose, they commit fraud. This occurs because the leader perceives the contextual demands of "putting them in place" to reveal the unpleasant truth. ...
Article
Full-text available
Introduction/Main Objectives: This study aims to examine the effect of stretch goals on destructive leadership with burnout as the mediating variable­­­ and then the effect of destructive leadership on counterproductive work behavior of employees with psychological capital as a moderating variable. Background Problems: The phenomenon of irregularities that occur in SOE in Indonesia is interesting to study. Deviations committed by SOE leaders in Indonesia include fraud, gratification, and data manipulation. The increase in the number of irregularities has a negative effect on organizational performance because it causes several counterproductive work behaviors in employees. Novelty: Empirical research on destructive leadership is still rare because previous research has focused only on the conceptual side. Research Methods: The design of this study used a survey with a questionnaire completed by 724 respondents who were leaders and employees. The hypothesis testing used Structural Equation Modeling (SEM). Finding/Results: The findings of this study show a positive influence of stretch goals on burnout and a positive influence of stretch goals on destructive leadership, but burnout has no mediating role in the effect of stretch goals on destructive leadership. There is no effect of perceived destructive leadership on employees’ counterproductive work behavior, but psychological capital has a moderating role in the effect of perceived destructive leadership on employees’ counterproductive work behavior. Conclusion: The practical implication of this study is that a stretch goal that is not balanced with resources can cause individuals to behave destructively even though they are at a managerial level.
... Previous research into the psychology of communication suggests that people have a 'truth bias' -an inclination to believe and trust others, even though this makes them vulnerable to deception (McCornack & Parks, 1986; but see also, Masip et al., 2009). This is an adaptive strategy in environments where people are honest most of the time, as it facilitates efficient communication, social learning, and cooperation (Baier, 1986;Boseovski, 2010;Hardin, 1993;Levine, 2014). Similar reasoning can be applied to beliefs about others' expertise. ...
Thesis
When trying to form accurate beliefs and make good choices, people often turn to one another for information and advice. But deciding whom to listen to can be a challenging task. While people may be motivated to receive information from accurate sources, in many circumstances it can be difficult to estimate others’ task-relevant expertise. Moreover, evidence suggests that perceptions of others’ attributes are influenced by irrelevant factors, such as facial appearances and one’s own beliefs about the world. In this thesis, I present six studies that investigate whether messenger characteristics that are unrelated to the domain in question interfere with the ability to learn about others’ expertise and, consequently, lead people to make suboptimal social learning decisions. Studies one and two explored whether (dis)similarity in political views affects perceptions of others’ expertise in a non-political shape categorisation task. The findings suggest that people are biased to believe that messengers who share their political opinions are better at tasks that have nothing to do with politics than those who do not, even when they have all the information needed to accurately assess expertise. Consequently, they are more likely to seek information from, and are more influenced by, politically similar than dissimilar sources. Studies three and four aimed to formalise this learning bias using computational models and explore whether it generalises to a messenger characteristic other than political similarity. Surprisingly, in contrast to the results of studies one and two, in these studies there was no effect of observed generosity or political similarity on expertise learning, information-seeking choices, or belief updating. Studies five and six were then conducted to reconcile these conflicting results and investigate the boundary conditions of the learning bias observed in studies one and two. 
Here, we found that, under the right conditions, non-politics-based similarities can influence expertise learning and whom people choose to hear from; that asking people to predict how others will answer questions enhances learning from observed outcomes; and that it is unlikely that inattentiveness explains why we observed null effects in studies three and four.
Article
Purpose The purpose of this study is to explore the views of practicing negotiators on their experiences of deception and their strategies for detecting deceptive behavior. A thematic analysis of interview data complements the existing experimental literature on deception and negotiation. The authors compare the experiences of practicing negotiators with the results found in experimental studies and provide practical recommendations for negotiators and managers regarding the detection of deception. Design/methodology/approach Data was collected from 19 practicing commercial negotiators in France by way of semi-structured interviews. The transcribed data was analyzed by way of thematic analysis using the software NVivo 12. Experiences and behaviors identified in the negotiation literature as key factors for the detection of deception acted as a coding framework. Findings A thematic analysis of the data revealed four themes related to the experience of deception that negotiators perceived as particularly important: the frequency, form, interpretation and consequences of deception. Further, the analysis revealed four factors that negotiators believed influenced their ability to detect deceptive communication: physical cues, such as body language and micro-expressions, and verbal cues, including contradictions and inconsistencies, emotional cues and environmental cues. Finally, the strategies described by negotiators to detect deception could be classified according to six themes: careful listening, asking questions, emotional intelligence, intuition, checking consistency and requesting evidence. Research limitations/implications This study elicited the views of commercial negotiators without collecting information from their negotiation counterparts. Hence, it was not possible to verify whether the reported detection of deceptive communication was accurate. 
Because of optimism bias, the participants in the sample were likely to overrate their ability to detect deception. In part, this was helpful because the negotiators spoke freely about their strategies for dealing with deceptive counterparts allowing the identification of techniques to improve the efficacy of detecting deceptive communication. Practical implications Participants overwhelmingly expressed that there is a lack of training on deception in negotiation. It is suggested that the results of this study inform the development of training courses on the detection of deception. In particular, it is recommended that training courses should cover the following topics: how to anticipate and avoid deceptive behavior; how to effectively respond to deceptive behavior; the role of emotional intelligence in detecting deceptive behavior; careful listening and asking questions; and the role of intuition in detecting deception. Originality/value Prior empirical studies on the detection of deception have not specifically investigated the range of self-reported strategies used by practicing negotiators to detect deceptive communication. This study addresses this gap. This study complements existing experimental works by widening the spectrum of potential variables that play a role in the effective detection of deceptive communication.
Article
Truth-default theory offers an account of human deceptive communication in which people are honest unless they have a motive to deceive, and in which people passively believe others unless suspicion and doubt are actively triggered. The theory is argued to account for wide swings in vulnerability to deception in different types of situations in and out of the lab. Three moderators are advanced to account for differential vulnerability to political misinformation and disinformation. Own belief congruity, social congruence, and message repetition are argued to combine to affect the probability that implausible and refutable false information is accepted as true.
Article
Maliciously false information (disinformation) can influence people's beliefs and behaviors with significant social and economic implications. In this study, we examine news articles on crowd-sourced digital platforms for financial markets. Assembling a unique dataset of financial news articles that were investigated and prosecuted by the Securities and Exchange Commission, along with the propagation data of such articles on digital platforms and the financial performance data of the focal firm, we develop a well-justified machine learning system to detect financial disinformation published on social media platforms. Our system design is rooted in Truth Default Theory, which argues that communication context and motive, coherence, information correspondence, propagation, and sender demeanor are major constructs to assess deceptive communication. Extensive analyses are conducted to evaluate the performance and efficacy of the proposed system.
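The abstract does not publish the model itself, but the idea of scoring articles on TDT-derived constructs can be sketched as a simple logistic scorer. Everything below is an illustrative assumption (the feature names, weights, bias, and logistic form are invented for this sketch and are not the authors' system):

```python
import math

# Hypothetical TDT-inspired construct scores for one article, each in [0, 1].
# Feature names and weights are illustrative assumptions, not the paper's model.
FEATURE_WEIGHTS = {
    "motive_to_deceive": 1.4,      # communication context and motive
    "incoherence": 1.1,            # internal contradictions in the text
    "noncorrespondence": 1.6,      # mismatch with external facts (e.g., filings)
    "abnormal_propagation": 0.9,   # burst-like sharing patterns
    "suspect_demeanor": 0.7,       # sender-level credibility signals
}
BIAS = -2.5  # a negative bias encodes a "truth-default": low prior of deception

def disinfo_probability(features: dict) -> float:
    """Weighted sum of construct scores passed through a logistic function."""
    z = BIAS + sum(FEATURE_WEIGHTS[k] * features.get(k, 0.0) for k in FEATURE_WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

benign = {k: 0.1 for k in FEATURE_WEIGHTS}
suspicious = {k: 0.8 for k in FEATURE_WEIGHTS}
print(round(disinfo_probability(benign), 3))
print(round(disinfo_probability(suspicious), 3))
```

The negative bias term is the interesting design choice: it mirrors the truth-default itself, so an article is only flagged once several construct scores jointly overcome the prior toward belief.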
Article
Behaviors such as gaze aversion and repetitive movements are commonly believed to be signs of deception and low credibility; however, they may also be characteristic of individuals with developmental or mental health conditions. We examined the effect of five behaviors that are common among autistic individuals—gaze aversion, repetitive movements, misinterpretation of figurative language, monologues, and flat affect—on observers' evaluations of deception and credibility. This study focused on judgments made in everyday social situations, which contrasts with most previous studies, which have examined such judgments in contexts (e.g., legal proceedings) where they are of primary importance. In three experiments, we presented participants with video segments of individuals being interviewed about biographical information, and participants then indicated their perception of the individuals' truthfulness and credibility. Overall, individuals were perceived as more deceptive and less credible when they displayed autistic behaviors than when they did not; however, the effect sizes detected were weak.
Article
Full-text available
This piece was the first in history to posit the notion of "truth-bias," which has since become foundational within the field of deception research. It also posits what has come to be known as the McCornack-Parks Model of Deception Detection; namely, that as relational intimacy increases, detection confidence increases, truth-bias increases, and detection accuracy decreases.
Article
Full-text available
The question of whether discernible differences exist between liars and truth tellers has interested professional lie detectors and laypersons for centuries. In this article we discuss whether people can detect lies when observing someone's nonverbal behavior or analyzing someone's speech. An article about detecting lies by observing nonverbal and verbal cues is overdue. Scientific journals regularly publish overviews of research articles regarding nonverbal and verbal cues to deception, but they offer no explicit guidance about what lie detectors should do and should avoid doing to catch liars. We will present such guidance in the present article. The article consists of two parts. The first section focuses on pitfalls to avoid and outlines the major factors that lead to failures in catching liars. Sixteen reasons are clustered into three categories: (a) a lack of motivation to detect lies (because accepting a fabrication might sometimes be more tolerable or pleasant than understanding the truth), (b) difficulties associated with lie detection, and (c) common errors made by lie detectors. We will argue that the absence of nonverbal and verbal cues uniquely related to deceit (akin to Pinocchio's growing nose), the existence of typically small differences between truth tellers and liars, and the fact that liars actively try to appear credible contribute to making lie detection a difficult task. Other factors that add to the difficulty are that lies are often embedded in truths, that lie detectors often do not receive adequate feedback about their judgments and therefore cannot learn from their mistakes, and that some methods to detect lies violate conversation rules and are therefore difficult to apply in real life. The final factor to be discussed in this category is that some people are just very good liars.
The common errors lie detectors make that we have identified are examining the wrong cues (in part, because professionals are taught these wrong cues); placing too great an emphasis on nonverbal cues (in part, because training encourages such emphasis); tending to too readily interpret certain behaviors, particularly signs of nervousness, as diagnostic of deception; placing too great an emphasis on simplistic rules of thumb; and neglecting inter- and intrapersonal differences. We also discuss two final errors: that many interview strategies advocated by police manuals can impair lie detection, and that professionals tend to overestimate their ability to detect deceit. The second section of this article discusses opportunities for maximizing one's chances of detecting lies and elaborates strategies for improving one's lie-detection skills. Within this section, we first provide five recommendations for avoiding the common errors in detecting lies that we identified earlier in the article. Next, we discuss a relatively recent wave of innovative lie-detection research that goes one step further and introduces novel interview styles aimed at eliciting and enhancing verbal and nonverbal differences between liars and truth tellers by exploiting their different psychological states. In this part of the article, we encourage lie detectors to use an information-gathering approach rather than an accusatory approach and to ask liars questions that they have not anticipated. We also encourage lie detectors to ask temporal questions (questions related to the particular time the interviewee claims to have been at a certain location) when a scripted answer (e.g., "I went to the gym") is expected. For attempts to detect lying about opinions, we introduce the devil's advocate approach, in which investigators first ask interviewees to argue in favor of their personal view and then ask them to argue against their personal view.
The technique is based on the principle that it is easier for people to come up with arguments in favor of their personal view than against it. For situations in which investigators possess potentially incriminating information about a suspect, the "strategic use of evidence" technique is introduced. In this technique, interviewees are encouraged to discuss their activities, including those related to the incriminating information, while being unaware that the interviewer possesses this information. The final technique we discuss is the "imposing cognitive load" approach. Here, the assumption is that lying is often more difficult than truth telling. Investigators could increase the differences in cognitive load that truth tellers and liars experience by introducing mentally taxing interventions that impose additional cognitive demand. If people normally require more cognitive resources to lie than to tell the truth, they will have fewer cognitive resources left over to address these mentally taxing interventions when lying than when truth telling. We discuss two ways to impose cognitive load on interviewees during interviews: asking them to tell their stories in reverse order and asking them to maintain eye contact with the interviewer. We conclude the article by outlining future research directions. We argue that research is needed that examines (a) the differences between truth tellers and liars when they discuss their future activities (intentions) rather than their past activities, (b) lies told by actual suspects in high-stakes situations rather than by university students in laboratory settings, and (c) lies told by a group of suspects (networks) rather than individuals. An additional line of fruitful and important research is to examine the strategies used by truth tellers and liars when they are interviewed.
As we will argue in the present article, effective lie-detection interview techniques take advantage of the distinctive psychological processes of truth tellers and liars, and obtaining insight into these processes is thus vital for developing effective lie-detection interview tools.
Article
Full-text available
Information Manipulation Theory 2 (IMT2) is a propositional theory of deceptive discourse production that conceptually frames deception as involving the covert manipulation of information along multiple dimensions and as a contextual problem-solving activity driven by the desire for quick, efficient, and viable communicative solutions. IMT2 is rooted in linguistics, cognitive neuroscience, speech production, and artificial intelligence. Synthesizing these literatures, IMT2 posits a central premise with regard to deceptive discourse production and 11 empirically testable (that is, falsifiable) propositions deriving from this premise. These propositions are grouped into three propositional sets: intentional states (IS), cognitive load (CL), and information manipulation (IM). The IS propositions pertain to the nature and temporal placement of deceptive volition, in relation to speech production. The CL propositions clarify the interrelationship between load, discourse, and context. The IM propositions identify the specific conditions under which various forms of information manipulation will (and will not) occur.
Article
Full-text available
Although it is commonly believed that lying is ubiquitous, recent findings show large, individual differences in lying, and that the proclivity to lie varies by age. This research surveyed 58 high school students, who were asked how often they had lied in the past 24 hr. It was predicted that high school students would report lying with greater frequency than previous surveys with college student and adult samples, but that the distribution of reported lies by high school students would exhibit a strongly and positively skewed distribution similar to that observed with college student and adult samples. The data were consistent with both predictions. High school students in the sample reported telling, on average, 4.1 lies in the past 24 hr—a rate that is 75% higher than that reported by college students and 150% higher than that reported by a nationwide sample of adults. The data were also skewed, replicating the “few prolific liar” effect previously documented in college student and adult samples.
Article
Inconsistency is often considered an indication of deceit. The conceptualization of consistency used in deception research, however, has not made a clear distinction between two concepts long differentiated by philosophers: coherence and correspondence. The existing literature suggests that coherence is not generally useful for deception detection. Correspondence, however, appears to be quite useful. The present research developed a model of how correspondence is utilized to make judgments, and this article reports on four studies designed to elaborate on the model. The results suggest that judges attend strongly to correspondence and that they do so in an additive fashion. As noncorrespondent information accumulates, an increasingly smaller proportion of judges make truthful assessments of guilty suspects. This work provides a basic framework for examining how information is utilized to make deception judgments and forms the correspondence and coherence module of truth-default theory.
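The additive pattern described in this abstract can be illustrated with a toy model. This is my sketch under stated assumptions, not the authors' fitted model: each noncorrespondent item subtracts a fixed credibility increment, judges vary in the credibility threshold below which they stop judging the suspect truthful, and the invented numbers (starting credibility, penalty, thresholds) exist only to make the additive decline concrete:

```python
# Toy additive model of correspondence judgments. The starting credibility,
# per-item penalty, and judge thresholds below are illustrative assumptions.
def truthful_proportion(n_noncorrespondent: int,
                        penalty: float = 0.3,
                        thresholds=(0.0, 0.2, 0.4, 0.6, 0.8)) -> float:
    """Fraction of simulated judges who still rate the suspect truthful."""
    credibility = 1.0 - penalty * n_noncorrespondent  # additive accumulation
    return sum(credibility > t for t in thresholds) / len(thresholds)

proportions = [truthful_proportion(n) for n in range(5)]
print(proportions)  # non-increasing as noncorrespondent items accumulate
```

Because each item contributes the same fixed decrement rather than interacting with the others, the model is additive in the sense the abstract describes: the proportion of truthful judgments falls steadily as noncorrespondent information accumulates.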
Article
Research relevant to psychotherapy regarding facial expression and body movement has shown that the kind of information which can be gleaned from the patient's words (information about affects, attitudes, interpersonal styles, psychodynamics) can also be derived from his concomitant nonverbal behavior. The study explores the interaction situation and considers how, within deception interactions, differences in neuroanatomy and cultural influences combine to produce specific types of body movements and facial expressions which escape efforts to deceive and emerge as leakage or deception clues.