Published online 23 May 2014
Journal of Language and Social Psychology
1–15
© 2014 SAGE Publications
DOI: 10.1177/0261927X14535916
jls.sagepub.com
Article
Truth-Default Theory (TDT): A Theory of Human Deception and Deception Detection
Timothy R. Levine1
Abstract
Truth-Default Theory (TDT) is a new theory of deception and deception detection.
This article offers an initial sketch of, and brief introduction to, TDT. The theory
seeks to provide an elegant explanation of previous findings as well as point to new
directions for future research. Unlike previous theories of deception detection, TDT
emphasizes contextualized communication content in deception detection over
nonverbal behaviors associated with emotions, arousal, strategic self-presentation,
or cognitive effort. The central premises of TDT are that people tend to believe
others and that this “truth-default” is adaptive. Key definitions are provided. TDT
modules and propositions are briefly explicated. Finally, research consistent with
TDT is summarized.
Keywords
truth-bias, deception, lying
Truth-Default Theory (TDT) is a new theory of deception and deception detection. As
the name of the theory implies, the key idea is that when humans communicate with
other humans, we tend to operate on a default presumption that what the other person
says is basically honest. The idea that people are typically “truth-biased” is far from
new (cf. McCornack & Parks, 1986; Zuckerman, DePaulo, & Rosenthal, 1981). What
is new is that this presumption of honesty is seen as highly adaptive both for the indi-
vidual and the species. The truth-default enables efficient communication and
1Korea University, Seoul, Republic of Korea
Corresponding Author:
Timothy R. Levine, School of Media and Communication, Korea University, Media Hall 606, Seoul,
Republic of Korea.
Email: levinet111@gmail.com
cooperation, and the presumption of honesty typically leads to correct belief states
because most communication is honest most of the time. However, the presumption of
honesty makes humans vulnerable to occasional deceit. There are times and situations
when people abandon the presumption of honesty, and the theory describes when peo-
ple are expected to suspect a lie, when people conclude that a lie was told, and the
conditions under which people make truth and lie judgments correctly and incorrectly.
The theory also specifies the conditions under which people are typically honest and
the conditions under which people are likely to engage in deception. TDT is logically
compatible with Information Manipulation Theory 2 (IMT2; McCornack, Morrison,
Paik, Wiser, & Zhu, 2014). However, whereas IMT2 is primarily a theory of deceptive
discourse production, TDT is focused more on credibility assessment and deception
detection accuracy and inaccuracy.
The approach guiding the formation of TDT might be described as abductive sci-
ence. The propositions are all data based, and the explanations were initially articu-
lated so as to offer a coherent account of the existing scientific data. The theory was
not made public until original research supported and replicated every major claim.
Theory-data correspondence is considered paramount, and the theory strives for a high
degree of verisimilitude.
TDT is not only about accurate prediction and post hoc explanation. Good theory
must also be generative. A theory needs to lead to new predictions that no one would
think to make absent the theory. In line with Imre Lakatos (1980), TDT aims to be out
in front of the data, not always chasing data from behind and trying to catch up.
A final notable feature of TDT is that it is modular. TDT is a collection of
quasi-independent mini-theories, models, or effects that are joined by an overarching
logic.
This article offers an article-length sketch of TDT. First, key concepts are defined.
Next, TDT modules are briefly explicated. TDT propositions are then explained.
Finally, data consistent with TDT are briefly summarized.
Definitions
Table 1 provides a full listing of the key constructs which populate TDT and a concep-
tual definition for each construct. Several of the key definitions are briefly discussed
here.
Deception is defined as intentionally, knowingly, and/or purposely misleading
another person. Consistent with IMT2 (McCornack et al., 2014), McNally and Jackson
(2013), and Trivers (2011), deception need not require conscious forethought. While
some deception clearly involves preplanning, a sender may only recognize the decep-
tive nature of their communication after completing the deceptive utterance (see
IMT2, Proposition IS2). In line with Trivers, TDT does not preclude other deception
that also involves self-deception so long as the message has a deception purpose or
function, even if the purpose is unconscious. Thus, deceptive messages involve intent,
awareness, and/or purpose to mislead. Absent deceptive intent, awareness, or purpose,
a message is considered honest.
Table 1. Key TDT Concepts and Definitions.
Deception is intentionally, knowingly, or purposefully misleading another person.
A lie is a subtype of deception that involves outright falsehood, which is consciously known to be false by the teller, and is not signaled as false to the message recipient.
Honest communication lacks deceptive purpose, intent, or awareness. Honest communication need not be fully accurate, true, or involve full disclosure.
The Truth-Lie Base-rate refers to the proportion of any set of messages that are honest and deceptive. It is the relative prevalence of deception and nondeception in some defined environment.
Truth-Bias is the tendency to actively believe or passively presume that another person's communication is honest independent of actual honesty.
The Truth-default involves a passive presumption of honesty due either to a failure to actively consider the possibility of deceit at all or to a fallback cognitive state after a failure to obtain sufficient affirmative evidence for deception.
Honesty judgment involves the belief state that a communication is honest. Honesty judgments can be passive (truth-default) stemming from a failure to consider the possibility of deceit, a reversion to the truth-default stemming from a failure to meet the threshold for a deception judgment, or active decisions based on exculpatory evidence.
Deception judgment is an inference that a communication is deceptive or a lie. Unlike honesty judgments, most deception judgments are active and have an evidentiary basis.
Demeanor refers to a constellation of intercorrelated behaviors that function as a gestalt, relating to how people present themselves, the image they convey to others, and how they are perceived by others.
Honest demeanor, a subtype of demeanor, is the tendency to be seen as honest independent of actual honesty. People vary in the extent to which they have an honest demeanor.
Suspicion is a state of suspended judgment and uncertainty regarding the honest or deceptive nature of a communication. It is an intermediate cognitive state between the passive truth-default and a firm judgment of deceit.
Communication content refers to the substance of what is said, and can be contrasted with demeanor, which involves how something is said.
Communication context refers to the situation in which the communication occurs, the situation(s) relevant to the communication content, and to the communication as a whole. Understanding communication content often requires knowledge of context, and communication content presented without its context can be misleading or uninformative.
Transparency refers to the extent to which the honest and/or deceptive nature of some communication is apparent to others.
Diagnostically useful information is the extent to which some information can be used to arrive at a correct inference about the honest and/or deceptive nature of some communication.
Coherence involves the logical consistency of communication content.
Correspondence involves the consistency between communication content and external evidence or knowledge.
Deception detection accuracy refers to correctly distinguishing honest and deceptive communication.

Lies are a subtype of deception that involves deceiving by saying information
known to be false. Other forms of deception include omission, evasion, equivocation,
and generating false conclusions with objectively true information. The specific lin-
guistic structure of deceptive utterances is considered under the purview of IMT
(McCornack, 1992) and IMT2 (McCornack et al., 2014), and not critical to TDT.
Thus, while it is recognized that lying and deception are not synonymous, different
forms of deception are functionally transposable in TDT and therefore the words lying
and deception are sometimes used interchangeably.
The theory’s namesake and most central idea is the truth-default state. The truth-
default involves a passive presumption of honesty due either to (a) a failure to actively
consider the possibility of deceit at all or (b) as a fallback cognitive state after a failure
to obtain sufficient affirmative evidence for deception. The idea is that as a default,
people presume without conscious reflection that others’ communication is honest.
Because it is a default, it is a passive starting place for making inferences about com-
munication. The possibility that a message might be deception often does not come to
mind unless suspicion is actively triggered. The idea of the truth-default is consistent
with Dan Gilbert’s (1991) Spinozan model of belief in which incoming information is
believed unless subsequently and actively disbelieved. The truth-default is also consis-
tent with Grice’s (1989) logic of conversation wherein people generally presume com-
munication as fundamentally cooperative. That is, people typically make sense of
what others say based on the premise that they are trying to be understood.
A closely related idea is truth-bias, which is defined as the tendency to believe that
another person’s communication is honest independent of its actual honesty (Levine,
Park, & McCornack, 1999; McCornack & Parks, 1986). Truth-bias is empirically quan-
tified as the proportion of messages judged as honest in some defined setting. The truth-
default offers one explanation for the empirical observation of truth-bias, but the
concepts are not interchangeable since truth-bias need not be a cognitive default, and at
least as measured in deception detection experiments, it typically involves a prompted,
active assessment of honesty. In fact, if TDT is correct, truth-bias rates (i.e., the propor-
tion of messages believed) would be much higher in research if the possibility of decep-
tion was not primed by the research setting and measurement instruments. Knowing
that one is in a deception detection experiment and requiring truth-deception assess-
ments as part of the research protocol should create an active assessment of honesty and
deceit that may often not occur in communication outside the deception lab.
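To make the measurement concrete, here is a minimal sketch of how truth-bias is computed from judgment data; the ten trials below are invented for illustration and are not drawn from any study.

```python
# Hypothetical judgments from a 50/50 truth-lie detection task.
# Each pair is (actually_honest, judged_honest); the data are invented.
trials = [
    (True, True), (True, True), (True, True), (True, False), (True, True),
    (False, True), (False, True), (False, False), (False, True), (False, False),
]

# Truth-bias: proportion of ALL messages judged honest, independent of veracity.
truth_bias = sum(judged for _, judged in trials) / len(trials)

# The veracity effect follows from the bias: truths are judged
# more accurately than lies.
truth_acc = sum(j for a, j in trials if a) / 5        # correct = judged honest
lie_acc = sum(not j for a, j in trials if not a) / 5  # correct = judged deceptive

print(truth_bias, truth_acc, lie_acc)  # 0.7 0.8 0.4
```

Note that the same judgment tendency that inflates accuracy for truths deflates it for lies, which is why truth-bias and the veracity effect are measured together.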
While most prior theoretical perspectives acknowledge the empirical existence of
truth-bias, truth-bias in pre-TDT theory is typically viewed as an error or bias reflect-
ing flawed judgment. Truth-bias is often depicted as a distorted perceptual state that is
maladaptive and interferes with deception detection accuracy (e.g., Buller & Burgoon,
1996; McCornack & Levine, 1990; McCornack & Parks, 1986). What is new in TDT
is the argument that both the truth-default and the truth-bias that results are functional,
adaptive, and facilitate accuracy in most nonresearch settings.
The reason that the truth-default and truth-bias typically lead to improved accuracy
involves the truth-lie base-rate. The truth-lie base-rate is a key variable that is cur-
rently unique to TDT. The base-rate refers to the relative prevalence of deception and
honesty in some defined environment. In most deception detection experiments, mes-
sage judges are equally likely to be exposed to an honest message as a lie. In TDT, the
base-rate matters, and the accuracy of judgments varies predictably with base-rates as
modeled by the Park–Levine Probability Model (Park & Levine, 2001). TDT specifies
that outside the deception lab, the prevalence of deception is much lower than the
prevalence of honest communication and therefore presuming honesty leads to belief
states that are typically correct.
A third noteworthy departure of TDT from most prior deception theory regards the
relative utility of observable nonverbal behaviors and communication content in decep-
tion detection accuracy. Most prior deception theories (e.g., Buller & Burgoon, 1996;
Ekman, 2009; Ekman & Friesen, 1969; Vrij, Granhag, & Porter, 2010; Zuckerman et
al., 1981) specify that deception can be detected, at least under some conditions (e.g.,
high stakes), through the observation of sender demeanor. That is, prior theories specify
that liars leak emotional states through facial expressions, liars exhibit or can be induced
to exhibit various nonverbal indications of cognitive effort or arousal, and/or liars
engage in various other strategic and nonstrategic behaviors indicative of lying. Careful
attention to these behaviors provides a path to lie detection. TDT, in contrast, specifies
that reliance on demeanor and nonverbal performance tends to push detection accuracy
down toward chance, and that improved accuracy rests on attention to contextualized
communication content. Most lies are detected either through comparing what is said to
what is or what can be known, or through the solicitation of a confession.
Demeanor refers to a constellation of intercorrelated behaviors that function as a
gestalt, relating to how people present themselves, the image they convey to others,
and how they are perceived by others. Honest demeanor, a subtype of demeanor, is the
tendency to be seen as honest independent of actual honesty. People vary in the extent
to which they have an honest demeanor, and honest demeanor is often unrelated to
actual honesty. Communication content refers to the substance of what is said, and can
be contrasted with demeanor which involves how something is said. Communication
context refers to the situation in which the communication occurs, the situation(s)
relevant to the communication content, and to the communication event as a whole.
Understanding communication content often requires knowledge of context; and com-
munication content presented without its context can be misleading or uninformative.
Diagnostically useful information is the extent to which some information can be used
to arrive at a correct inference about the honest and/or deceptive nature of some
communication. Honest demeanor is specified to have little diagnostic utility. In contrast,
correspondence information is highly diagnostic. Correspondence involves the consis-
tency between communication content and external evidence or message receiver
knowledge.
TDT Modules
As previously mentioned, TDT is composed of several free-standing but logically con-
sistent effects, models, and mini-theories. TDT modules are listed in Table 2. Each of
the modules is (or will be) described in detail in published journal articles or chapters.
Here, each module is briefly summarized and the reader is directed to the work con-
taining the full explication.
Table 2. TDT Modules.
A Few Prolific Liars (or “Outliars,” Serota, Levine, & Boster, 2010)—The prevalence of lying is not
normally or evenly distributed across the population. Most people are honest most of the time. There
are a few people, however, that lie often. Most lies are told by a few prolific liars.
Deception Motives (Levine, Kim, & Hamel, 2010)—People lie for a reason, but the motives behind
truthful and deceptive communication are the same. When the truth is consistent with a person’s
goals, they will almost always communicate honestly. Deception becomes probable when the truth
makes honest communication difficult or inefficient.
The Projected Motive Model (Levine, Kim, & Blair, 2010)—People know that others lie for a reason
and are more likely to suspect deception when they think a person has a reason to lie.
The Veracity Effect (Levine et al., 1999)—People tend to be truth-biased and are more likely to believe
people than to think that others are lying. Because of this bias, accuracy is usually higher for truths
than lies. Consequently, the honesty (i.e., veracity) of communication predicts if the message will be
judged correctly. Honest messages produce higher accuracy than lies.
The Park–Levine Probability Model (Park & Levine, 2001)—Because honest messages yield higher
accuracy than lies (i.e., the veracity effect), the proportion of truths and lies affects accuracy. So long
as people are truth-biased, as the proportion of messages that is honest increases, so does average
detection accuracy. This relationship is linear and predicted as the accuracy for truths times the
proportion of messages that are true plus the accuracy for lies times the proportion of messages
that are lies.
How People Really Detect Lies (Park, Levine, McCornack, Morrison, & Ferrara, 2002)—Outside the
deception lab in everyday life, most lies are detected after the fact based on either confessions or the
discovery of some evidence showing that what was said was false. Very few lies are detected in real
time based only on the passive observation of sender nonverbal behavior.
A Few Transparent Liars (Levine, 2010)—The reason that accuracy in typical deception detection
experiments is slightly above chance is that some small proportion of the population are really bad
liars who usually give themselves away. The reason accuracy is not higher is that most people are
pretty good liars and that honest demeanor is uncorrelated with actual honesty for most people.
Sender Honest Demeanor (Levine, Serota, et al., 2011)—There are large individual differences in
believability. Some people come off as honest. Other people are doubted more often. These
differences in how honest different people seem are the result of a combination of 11 different behaviors
and impressions that function together. Honest demeanor has little to do with actual honesty, and
this explains poor accuracy in deception detection experiments.
Content in Context (Blair, Levine, & Shaw, 2010)—Understanding communication requires listening
to what is said and taking that in context. Knowing about the context in which the communication
occurs can help detect lies.
Diagnostic Utility (Levine, Blair, & Clare, 2014)—Some aspects of communication are more
useful than others in detecting deception, and some aspects of communication can be misleading,
producing systematic errors. Diagnostic utility involves prompting and using useful information
while avoiding useless and misleading behaviors.
Correspondence and Coherence (Reimer, Blair, & Levine, 2014)—Correspondence and coherence are
two types of consistency information that may be used in deception detection. Correspondence has
to do with comparing what is said to known facts and evidence. It involves fact checking. Coherence
involves the logical consistency of communication. Generally speaking, correspondence is more
useful than coherence in deception detection.
Question Effects (Levine, Blair, & Clare, 2014; Levine, Shaw, & Shulman, 2010)—Question effects
involve asking the right questions to yield diagnostically useful information that improves deception
detection accuracy.
Expert Questioning (Levine, Clare, et al., 2014)—Expertise in deception is highly context dependent
and involves knowing how to prompt diagnostically useful information rather than detection by
passive observation of nonverbal communication.
The Few Prolific Liars Model (Serota et al., 2010) makes two key claims. The first
is that deception, relative to honesty, is infrequent. That is, most people are honest
most of the time. Second, the prevalence of lying is not normally or evenly distributed
across the population. The prevalence of lying is positively skewed. Most lies are told
by a few prolific liars.
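A toy computation illustrates the skew this module describes; the daily lie counts below are invented to mimic the shape of Serota et al.'s (2010) findings, not their actual data.

```python
from statistics import mean, median

# Hypothetical daily lie counts for 20 people, invented so that most
# people tell few or no lies while a handful of prolific liars tell many.
lies_per_day = [0, 0, 0, 0, 1, 0, 1, 0, 0, 2, 0, 1, 0, 0, 1, 0, 0, 12, 9, 15]

total = sum(lies_per_day)
top_three = sum(sorted(lies_per_day, reverse=True)[:3])

print(mean(lies_per_day))    # 2.1 lies on average...
print(median(lies_per_day))  # ...but the median person tells 0
print(round(top_three / total, 2))  # and 3 of 20 people tell 86% of all lies
```

The gap between the mean and the median is the signature of the positive skew: averages computed over everyone overstate how much the typical person lies.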
A second module focuses on when and why people lie. The Deception Motives Module
(Levine, Kim, & Hamel, 2010) specifies that people lie for a reason, but the motives
behind truthful and deceptive communication are the same. When the truth is consistent
with people’s goals, they will almost always communicate honestly. Deception
becomes probable when the truth makes honest communication difficult or inefficient.
TDT’s view of deception motives is an area of theoretical overlap with IMT2
(McCornack et al., 2014).
On the message recipient side, the Projected Motive Model (Levine, Kim, & Blair,
2010) specifies that people know that others lie for a reason and are more likely to
suspect deception when they think a person has a reason to lie. A projected motive
provides a trigger that can kick people out of the truth-default state.
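The trigger logic, together with the threshold process spelled out in Propositions 7 to 9 (Table 3), can be sketched as a toy state machine; the numeric potency and threshold values are invented for illustration, since TDT itself does not quantify them.

```python
def assess(trigger_potency, evidence_strength,
           suspicion_threshold=0.5, evidence_threshold=0.8):
    """Schematic of TDT Propositions 7-9. The numeric thresholds are
    hypothetical placeholders; the theory is not quantified this way."""
    if trigger_potency <= suspicion_threshold:
        return "truth-default"            # deceit never considered (Prop 7)
    # Trigger crossed: truth-default abandoned, message scrutinized (Prop 8)
    if evidence_strength >= evidence_threshold:
        return "deception judgment"       # evidentiary threshold met (Prop 9)
    if evidence_strength <= -evidence_threshold:
        return "active honesty judgment"  # exculpatory evidence (Prop 9)
    return "suspicion or revert to truth-default"

print(assess(0.2, 0.9))   # truth-default: without a trigger, evidence is never weighed
print(assess(0.7, 0.9))   # deception judgment
print(assess(0.7, -0.9))  # active honesty judgment
print(assess(0.7, 0.3))   # suspicion or revert to truth-default
```

The first case captures the theory's central asymmetry: however strong the available evidence, it plays no role unless some trigger first dislodges the truth-default.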
The Veracity Effect (Levine et al., 1999) refers to the empirical finding that the
veracity of the message judged predicts the accuracy of the judgment. In most decep-
tion detection experiments, accuracy is higher for truths than lies. The veracity effect
stems from truth-bias, and when the truth-default is in place, the veracity effect is
predicted to be especially large. The passive presumption of honesty leads people to
correctly believe honest communication, but lies go unnoticed as long as no trigger
event leads to the abandonment of the truth-default.
The Park–Levine Probability Model (Park & Levine, 2001) allows for predicting
the implications of the veracity effect on deception detection accuracy for different
truth-lie base-rates. So long as people are truth-biased, as the proportion of messages
that is honest increases, so does average detection accuracy. This relationship is linear
and predicted as the accuracy for truths times the proportion of messages that are true
plus the accuracy for lies times the proportion of messages that are lies.
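The model's linear prediction can be written directly; the truth and lie accuracy values below are illustrative stand-ins, not estimates from Park and Levine (2001).

```python
def predicted_accuracy(truth_accuracy, lie_accuracy, truth_base_rate):
    """Park-Levine Probability Model: overall accuracy is a linear
    function of the truth-lie base-rate."""
    return (truth_accuracy * truth_base_rate
            + lie_accuracy * (1 - truth_base_rate))

# Illustrative values only: truth-biased judges who believe most
# truths (80% correct) but miss most lies (40% correct).
for p_truth in (0.5, 0.75, 0.9):
    print(p_truth, round(predicted_accuracy(0.8, 0.4, p_truth), 2))
# As the proportion of honest messages rises, so does overall accuracy:
# 0.5 -> 0.6, 0.75 -> 0.7, 0.9 -> 0.76
```

At the 50/50 base-rate of a typical experiment the model predicts middling accuracy, while at the high honesty base-rates TDT posits for everyday life, the same truth-biased judge looks far more accurate.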
Prior deception detection research has found that people are statistically better than
chance at distinguishing truths from lies, but are seldom much better than chance
(Bond & DePaulo, 2006). This is demonstrated by the well-known and often-cited
54% accuracy level reported by meta-analysis (Bond & DePaulo, 2006). Three mod-
ules in TDT seek to explain the slightly-better-than-chance accuracy findings that are
so well documented in the literature.
The A Few Transparent Liars (Levine, 2010) module speculates that the reason that
accuracy in typical deception detection experiments is slightly above chance is that
some small proportion of the population are really bad liars who usually give them-
selves away. That is, most people are good liars and people generally cannot tell if they
are honest or not. But, a few people cannot lie well. The transparent liars ensure that
accuracy is just above chance because people tend to catch the lies of these poor liars.
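A back-of-the-envelope sketch shows how a small transparent minority pulls lie-judgment accuracy just above chance; all proportions here are hypothetical, chosen only for illustration.

```python
# Hypothetical mixture: a small fraction of senders are "transparent"
# (bad) liars whose lies are usually caught; everyone else's lies are
# judged at chance.
p_transparent = 0.15   # fraction of senders who are transparent liars
acc_transparent = 0.85 # their lies are usually detected
acc_opaque = 0.50      # other senders' lies are judged at chance

lie_accuracy = (p_transparent * acc_transparent
                + (1 - p_transparent) * acc_opaque)
print(round(lie_accuracy, 4))  # 0.5525 -- slightly above chance
```

Under these invented numbers, the transparent minority alone lifts accuracy a few points above 50%, roughly the pattern of the meta-analytic findings the module seeks to explain.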
Second, the Sender Honest Demeanor module (Levine, Serota, et al., 2011)
explains the accuracy ceiling observed in the literature (i.e., why accuracy is not much
better than chance). There are large individual differences in believability. Some peo-
ple come off as honest. Other people are doubted more often. These differences in
honesty impressions are a function of a combination of 11 different behaviors that
function as a gestalt. Honest demeanor has little to do with actual honesty, and this
explains poor accuracy in deception detection experiments. In short, reliance on
demeanor ensures a small signal-to-noise ratio, and near-chance detection accuracy.
Third, the How People Really Detect Lies module (Park et al., 2002) holds that
outside the deception lab in everyday life, most lies are detected well after the fact,
based on either confessions or the discovery of some evidence showing that what was
said was false. Very few lies are detected in real time based only on the passive obser-
vation of sender nonverbal behavior. This partially explains poor accuracy in decep-
tion detection experiments as being the result of requiring subjects to detect deception
in ways other than how lies are typically detected. Park et al. (2002) also point to how
deception detection accuracy might be improved, namely, the solicitation of confes-
sions and the application of evidence.
Five additional modules focus on how deception can be accurately detected. These
include Content in Context (Blair et al., 2010), Diagnostic Utility (Levine, Blair, et al.,
2014), Correspondence and Coherence (Reimer et al., 2014), Question Effects (Levine,
Blair, et al., 2014; Levine, Shaw, et al., 2010), and Expert Questioning (Levine, Clare,
et al., 2014). These modules emphasize the use of evidence, the reliance on contextual-
ized communication content, and the active prompting of diagnostic communication
content through strategic questioning of a potential liar.
Logical Structure
TDT provides an overarching logical structure that ties together the various models
into a coherent theoretical package. Table 3 provides the 14 propositions that reflect
the key predictions of the theory and the theory’s logical flow. This section provides a
brief narrative description of the logical structure of TDT.
Humans are a social species, and our individual and collective survival requires
coordination, cooperation, and communication (at least within important in-groups).
Efficient communication requires a presumption of honesty. If the veracity of all
incoming messages had to be scrutinized and questioned, communication would lose
efficiency and efficacy for coordination. The presumption of honest communication,
however, comes at a cost. It makes us vulnerable, at least in the short term, to decep-
tion and exploitation. But, at the core of TDT is the view that the tradeoff between
efficient communication and vulnerability to occasional deceit is more than worth it.
That is, the benefits gained through efficient communication and in-group cooperation
vastly outweigh the costs of occasional deception both for the individual and the
collective.
Many evolutionary perspectives on human deception assert that because humans
have evolved the ability to deceive others, humans also must have evolved the ability
to detect lies. There is, however, a more efficient solution—deterrence. It is proposed
that all human cultures develop prohibitions against deception, at least within
by guest on May 28, 2014jls.sagepub.comDownloaded from
Levine 9
Table 3. TDT Propositions.
1. Most communication by most people is honest most of the time. While deception can and does
occur, in comparison to honest messages, deception is relatively infrequent, and outright lies are
more infrequent still. In fact, deception must be infrequent to be effective.
2. The prevalence of deception is not normally distributed across the population. Most lies are told by
a few prolific liars.
3. Most people believe most of what is said by most other people most of the time. That is, most people
can be said to be truth-biased most of the time. Truth-bias results from, in part, a default cognitive
state. The truth-default state is pervasive but it is not an inescapable cognitive state. Truth-bias and the
truth-default are adaptive both for the individual and the species. They enable efficient communication.
4. Furthermore, because of Proposition 1, the presumption of honesty specified in Proposition 3 is
usually correct. Truth bias, however, makes people vulnerable to occasional deception.
5. Deception is purposive. Absent psychopathology, people lie for a reason. Deception, however, is
usually not the ultimate goal, but instead a means to some other ends. That is, deception is typically
tactical. Specifically, most people are honest unless the truth thwarts some desired goal or goals. The
motives or desired goals achieved through communication are the same for honest and deceptive
communications, and deception is reserved for situations where honesty would be ineffectual,
inefficient, and/or counterproductive in goal attainment.
6. People understand that other’s deception is usually purposive, and are more likely to consider a
message as potentially or actually deceptive under conditions where the truth may be inconsistent
with a communicator’s desired outcomes. That is, people project motive states on others and this
affects suspicion and judgments of honesty and deceit.
7. The truth-default state requires a trigger event to abandon it. Trigger events include, but are not
limited to (a) a projected motive for deception, (b) behavioral displays associated with dishonest
demeanor, (c) a lack of coherence in message content, (d) a lack of correspondence between
communication content and some knowledge of reality, or (e) information from a third party
warning of potential deception.
8. If a trigger or set of triggers is sufficiently potent, a threshold is crossed, suspicion is generated, the
truth-default is at least temporarily abandoned, the communication is scrutinized, and evidence is
cognitively retrieved and/or sought to assess honesty-deceit.
9. Based on information of a variety of types, an evidentiary threshold may be crossed and a message
may be actively judged to be deceptive. The information used to assess honesty and deceit includes,
but is not limited to (a) communication context and motive, (b) sender demeanor, (c) information
from third parties, (d) communication coherence, and (e) correspondence information. If the
evidentiary threshold for a lie judgment is not crossed, an individual may continue to harbor
suspicion or revert to the truth-default. If exculpatory evidence emerges, active judgments of
honesty are made.
10. Triggers and deception judgments need not occur at the time of the deception. Many deceptions are
suspected and detected well after the fact.
11. With the exception of a few transparent liars, deception is not accurately detected, at the time at
which it occurs, through the passive observation of sender demeanor. Honest-looking and deceptive-
looking communication performances are largely independent of actual honesty and deceit for most
people and hence usually do not provide diagnostically useful information. Consequently, demeanor-
based deception detection is, on average, slightly better than chance because of the few transparent
liars, but not much above chance because demeanor-based judgments are fallible.
12. In contrast, deception is most accurately detected through either (a) subsequent confession by
the deceiver or (b) by comparison of the contextualized communication content to some external
evidence or preexisting knowledge.
13. Both confessions and diagnostically informative communication content can be produced by effective
context-sensitive questioning of a potentially deceptive sender. Ill-conceived questioning, however,
can backfire and produce below-chance accuracy.
14. Expertise in deception detection rests on knowing how to prompt diagnostically useful information
rather than skill in the passive observation of sender behavior.
important in-groups. Parents everywhere teach their children not to lie. Every major
world religion prohibits deception, as do most legal systems. Furthermore, recent evo-
lutionary perspectives on the development of human deception note that deception
must be infrequent to evolve (McNally & Jackson, 2013; Trivers, 2011) and that
deception coevolves with cooperation (McNally & Jackson, 2013).
This line of reasoning leads to the first four propositions. These propositions hold
that lying is much less prevalent than honesty, that most lies are told by a few prolific
liars, that people tend to believe others, and that presuming honesty makes sense
because most communication is honest. The catch is that the presumption of honesty
makes humans vulnerable to occasional deceit.
Because deception is discouraged, people need a reason to lie (Proposition 5).
People are generally honest unless the truth thwarts a goal state. Others know that
people lie for a reason (Proposition 6) and thus a projected motive for deceit is one
type of trigger event that can lead people to abandon the truth-default.
So, people tend to presume that others are honest. However, the truth-default state
is not inescapable. Proposition 7 holds that trigger events of various sorts can lead
people to abandon the truth-default state. Trigger events include, but are not limited to,
(a) a projected motive for deception, (b) behavioral displays associated with dishonest
demeanor, (c) a lack of coherence in message content, (d) a lack of correspondence
between communication content and some knowledge of reality, or (e) information
from a third party warning of potential deception. Proposition 8 specifies that if a trig-
ger or set of triggers is sufficiently potent, a threshold is crossed, suspicion is gener-
ated, the truth-default is at least temporarily abandoned, the communication is
scrutinized, and evidence is cognitively retrieved and/or sought to assess honesty-
deceit. Proposition 9 states that based on information of a variety of types, an eviden-
tiary threshold may be crossed and a message may be actively judged to be deceptive.
The information used to assess honesty and deceit includes, but is not limited to, (a)
communication context and motive, (b) sender demeanor, (c) information from third
parties, (d) communication coherence, and (e) correspondence information. If the evi-
dentiary threshold for a lie judgment is not crossed, an individual may continue to
harbor suspicion or revert to the truth-default. If exculpatory evidence emerges, active
judgments of honesty are made.
Propositions 8 and 9 specify two thresholds: one for abandoning the truth-default
and the second for actively inferring deception. It is presumed that the threshold for
triggering the abandonment of the truth-default is more sensitive than the threshold for
inferring deceit. In between the two thresholds, suspicion of deception exists. Suspicion
is viewed as a state of uncertainty where the possibility of deception is entertained. It
is a state of suspended belief. The suspicion state is not retained indefinitely: either sufficient
evidence is obtained to cross the second threshold and deceit is inferred, or the person eventually
reverts to the truth-default.
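The two-threshold process of Propositions 7 to 9 can be caricatured as a tiny state machine. This is an illustrative sketch only, not part of TDT itself: the numeric thresholds and the idea of collapsing triggers and evidence into single scores are assumptions invented for the example.

```python
# Illustrative sketch of TDT's two-threshold process (Propositions 7-9).
# Threshold values and the summing of "trigger strength" into one score
# are assumptions for illustration only, not claims of the theory.

TRIGGER_THRESHOLD = 1.0   # sensitive: crossing it abandons the truth-default
EVIDENCE_THRESHOLD = 2.0  # stricter: crossing it yields an active lie judgment

def evaluate_message(trigger_strength, evidence_strength):
    """Return the receiver's state for one message.

    trigger_strength  -- combined potency of triggers (projected motive,
                         dishonest demeanor, incoherence, third-party warning)
    evidence_strength -- net evidence of deceit gathered once suspicious
                         (negative values represent exculpatory evidence)
    """
    if trigger_strength < TRIGGER_THRESHOLD:
        return "truth-default"        # deceit is never actively considered
    if evidence_strength >= EVIDENCE_THRESHOLD:
        return "judged deceptive"     # evidentiary threshold crossed
    if evidence_strength <= -EVIDENCE_THRESHOLD:
        return "judged honest"        # exculpatory evidence emerged
    return "suspicion"                # suspended belief between thresholds

# Note the asymmetry: with a weak trigger, even strong evidence is moot,
# because the receiver never scrutinizes the message in the first place.
print(evaluate_message(0.5, 3.0))   # truth-default
print(evaluate_message(1.5, 0.5))   # suspicion
print(evaluate_message(1.5, 2.5))   # judged deceptive
```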
In line with Park et al. (2002), Proposition 10 adds the qualification that triggers
and deception judgments need not occur at the time of the deception. Many deceptions
are suspected and detected well after the fact.
Based on Park et al. (2002), Levine (2010), and Levine, Serota, et al. (2011),
Proposition 11 states that with the exception of a few transparent liars, deception is not
accurately detected, at the time at which it occurs, through the passive observation of
sender demeanor. Honest-looking and deceptive-looking communication perfor-
mances are largely independent of actual honesty and deceit for most people, and
hence usually do not provide diagnostically useful information. Consequently,
demeanor-based deception detection is, on average, slightly better than chance because of the few
transparent liars, but not much above chance because demeanor-based judgments are fallible.
The final set of three propositions specifies the conditions under which deception
can be detected accurately. According to Proposition 12, deception is most accurately
detected through either (a) subsequent confession by the deceiver or (b) by compari-
son of the contextualized communication content to some external evidence or preex-
isting knowledge. Proposition 13 extends this line of thinking by specifying that both
confessions and diagnostically informative communication content can be produced
by effective context-sensitive questioning of a potentially deceptive sender. Ill-
conceived questioning, however, can backfire and produce below-chance accuracy.
Finally, the last proposition holds that expertise in deception detection rests on knowing how to
prompt diagnostically useful information rather than on skill in the passive observation of sender
behavior.
Summary of Empirical Evidence
Clare (2013; Clare & Levine, 2014) provided evidence consistent with core premises
of TDT regarding the existence and pervasiveness of a truth-default state. Clare
exposed participants to true and false, plausible and implausible message content in
either face-to-face interaction or videotaped interviews. At times participants were
asked to make explicit veracity judgments as is typical in deception detection experi-
ments. Other times participants were asked to thought-list what they were thinking.
Order was experimentally varied, so that some participants did the thought listing first,
while others were asked about veracity first, priming the possibility of deceit. Although
participants demonstrated truth-bias in all experimental conditions, unprimed participants were
substantially less likely to explicitly mention honesty or deception. In the unprimed conditions,
less than 5% of participants explic-
itly mentioned considering veracity or deception. These findings are consistent with
Proposition 3 specifying the existence of truth-bias and the truth-default state and
Proposition 7 stating that a trigger event is required to abandon the truth-default.
Serota et al. (2010) reported three studies consistent with Propositions 1 and 2. In an
N = 1,000 representative nationwide sample, the distribution of reported lies was
highly positively skewed, with most people reporting few lies (the mode was zero in the past
24 hours) and a few prolific liars telling most of the lies. These findings were replicated
with a college student sample and a reanalysis of previously published diary studies.
The results have subsequently been further replicated in the United Kingdom (Serota
& Levine, 2014), The Netherlands (Halevy, Shalvi, & Verschuere, 2014), and with a
sample of U.S. high school students (Levine, Serota, Carey, & Messer, 2013).
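The shape of these prevalence findings is easy to illustrate with invented numbers. The hypothetical sample below mimics the reported pattern (a modal report of zero lies, with a few prolific liars accounting for most of the total); the specific counts are fabricated for the example and are not the studies' data.

```python
# Illustrative arithmetic only: a hypothetical sample shaped like the
# positively skewed distributions Serota et al. (2010) describe. The counts
# below are invented for the example, not the published data.
from statistics import mean, mode

# 100 hypothetical respondents' lies-per-24-hours: most report zero,
# five "prolific liars" report many.
reported_lies = [0] * 60 + [1] * 20 + [2] * 10 + [3] * 5 + [15, 20, 25, 30, 40]

print(mode(reported_lies))   # 0 -- the modal respondent reports no lies
print(mean(reported_lies))   # the mean is pulled well above the mode

# The five heaviest reporters account for the majority of all lies told.
top_five_share = sum(sorted(reported_lies)[-5:]) / sum(reported_lies)
print(round(top_five_share, 2))
```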
Truth-bias (Proposition 3) is very well established. It is evidenced in meta-analysis
(Bond & DePaulo, 2006) as well as in primary experimental evidence (Levine et al.,
1999). Consistent with Proposition 4, research also shows that as the proportion of
messages that are honest increases, detection accuracy increases proportionally
(Levine et al., 1999; Levine, Kim, Park, & Hughes, 2006; Levine, Clare, Green,
Serota, & Park, 2014).
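The base-rate prediction in Proposition 4 follows from the Park-Levine probability model, in which overall accuracy is a weighted average of accuracy on honest and on deceptive messages. The sketch below uses illustrative per-message accuracies (reflecting truth-bias, but not the published estimates) to show why accuracy is a linear function of the honest base rate.

```python
# Minimal sketch of the Park-Levine probability model logic behind
# Proposition 4. The per-message accuracies are illustrative assumptions:
# truth-biased judges classify honest messages correctly far more often
# than they classify lies correctly.

A_TRUTH = 0.75  # P(correct judgment | message is honest) -- assumed value
A_LIE = 0.45    # P(correct judgment | message is a lie)  -- assumed value

def overall_accuracy(base_rate):
    """Expected accuracy when `base_rate` of messages are honest."""
    return base_rate * A_TRUTH + (1 - base_rate) * A_LIE

# Accuracy rises linearly as the proportion of honest messages rises.
for br in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"{br:.2f} honest -> {overall_accuracy(br):.3f} accurate")
```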
Data consistent with Proposition 5 are provided in three experiments reported by
Levine, Kim, and Hamel (2010). When the truth is in line with communicative goals,
honesty is nearly universal. Deception occurs frequently, but is not universal, when the
truth makes goal attainment difficult. Levine, Kim, and Hamel (2010) also show that
the pursuit of the same communicative goals guides both honest and deceptive mes-
sages. People are honest when the truth aligns with a speaker’s goals and deceptive
when the truth interferes with goal attainment. Thus, deceptive message production
does not arise for goals unique to honesty or deception.
Levine, Kim, and Blair (2010) provide evidence from three experiments that are in
line with Proposition 6. Operating from a projected motive model, it was predicted and
found that confessions tend to be almost universally believed, whereas denials of
transgression are more often doubted. There is no obvious motive to falsely confess to
a transgression, but there is motive to lie when denying a transgression.
A series of studies provide evidence consistent with Propositions 7 to 9. McCornack
and Levine (1990) and Kim and Levine (2011) show that third party prompting of
suspicion reduces truth-bias. Levine, Kim, and Blair (2010) show that truth-bias is
exceptionally strong in the absence of apparent motive but is reduced substantially
when a motive is apparent. Levine, Serota, et al. (2011) show that honest-dishonest
demeanor is strongly and predictably related to the attribution of truth and honesty.
Park et al. (2002) find that outside the lab, most discovered deception involves confes-
sions or comparison of communication content with external evidence.
Consistent with Proposition 10, Park et al. (2002) found that lies are frequently
detected well after the fact. Circumstantial evidence for the few transparent liars claim
in Proposition 11 is summarized in Levine (2010). Evidence for slightly-better-than-
chance demeanor-based detection is well documented in meta-analysis (e.g., Bond &
DePaulo, 2006). Evidence for the rest of Proposition 11 was consistently obtained in a
series of eight experiments reported by Levine, Serota, et al. (2011). Sender demeanor
was found to vary substantially across individuals, to be highly predictive of honesty-
deception judgments across student, nonstudent, and cross-cultural replications, and to
be largely independent of actual honesty.
Evidence for Proposition 12 was initially obtained by Park et al. (2002) who
reported that the vast majority of lies are detected either through confession or through
the application of evidence. Experimental evidence was produced in a series of 10
studies by Blair et al. (2010), documenting substantially improved accuracy using the
content in context approach to lie detection.
Initial experimental evidence for Proposition 13 was reported by Levine, Shaw, et
al. (2010). Those findings were subsequently replicated and extended in a series of six
experiments by Levine, Blair, et al. (2014).
Data consistent with Proposition 14 are reported by Levine, Clare, et al. (2014).
When experts were allowed to freely question potential cheaters, the experts obtained
accuracy of more than 90%.
Conclusion
The central idea behind truth-default theory is that people tend to presume that other
people communicate honestly most of the time. The presumption of honesty enables
efficient communication and cooperation. Furthermore, since most people are honest
most of the time, believing others usually results in correct belief states. However, people
sometimes try to deceive others. People may become suspicious of others when others
have an obvious motive for deception, when they lack an honest demeanor, when they
are primed to expect deception by third parties, or when the communication content
appears either self-contradictory or inconsistent with known facts. When people rely
on demeanor to infer deception, accuracy is typically poor, only slightly better than
chance. However, reliance on content in context improves accuracy substantially.
Accuracy can be further improved with strategic questioning that prompts diagnosti-
cally useful information.
Acknowledgment
David Clare, Rachel Kim, J. Pete Blair, Steve McCornack, Torsten Reimer, Kim Serota, and
Hee Sun Park made substantial and valuable contributions to the development and testing of
Truth-Default Theory.
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interests with respect to the authorship and/or
publication of this article.
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship,
and/or publication of this article: The National Science Foundation and the Federal Bureau of
Investigation provided financial support for the research leading to and testing Truth-Default
Theory.
References
Blair, J. P., Levine, T. R., & Shaw, A. J. (2010). Content in context improves deception detec-
tion accuracy. Human Communication Research, 36, 423-442.
Bond, C. F., Jr., & DePaulo, B. M. (2006). Accuracy of deception judgments. Personality and
Social Psychology Review, 10, 214-234.
Buller, D. B., & Burgoon, J. K. (1996). Interpersonal deception theory. Communication Theory,
6, 203-242.
Clare, D. (2013). Spontaneous, unprompted deception detection judgments (Unpublished pre-
liminary paper). East Lansing: Michigan State University.
Clare, D., & Levine, T. R. (2014). Spontaneous, unprompted deception detection judgments
(Manuscript in preparation). East Lansing: Michigan State University.
Ekman, P. (2009). Telling lies. New York, NY: W. W. Norton.
Ekman, P., & Friesen, W. V. (1969). Nonverbal leakage and clues to deception. Psychiatry, 32,
88-106.
Gilbert, D. (1991). How mental systems believe. American Psychologist, 46, 107-119.
Grice, H. P. (1989). Studies in the way of words. Cambridge, MA: Harvard University Press.
Halevy, R., Shalvi, S., & Verschuere, B. (2014). Being honest about dishonesty: Correlating
self-reports and actual lying. Human Communication Research, 40, 54-72.
Kim, R. K., & Levine, T. R. (2011). The effect of suspicion on deception detection accuracy:
Optimal level or opposing effects? Communication Reports, 24, 51-62.
Lakatos, I. (1980). The methodology of scientific research programmes. Cambridge, England:
Cambridge University Press.
Levine, T. R. (2010). A few transparent liars. In C. Salmon (Ed.), Communication Yearbook 34
(pp. 41-62). Thousand Oaks, CA: Sage.
Levine, T. R., Blair, J. P., & Clare, D. (2011). Expertise in deception detection involves actively
prompting diagnostic information rather than passive behavioral observation. Paper pre-
sented at the annual meeting of the National Communication Association, New Orleans,
LA.
Levine, T. R., Blair, J. P., & Clare, D. (2014). Diagnostic utility: Experimental demonstrations
and replications of powerful question effects and smaller question by experience interac-
tions in high stake deception detection. Human Communication Research, 40, 262-289.
Levine, T. R., Clare, D., Blair, J. P., McCornack, S. A., Morrison, K., & Park, H. S. (2014).
Expertise in deception detection involves actively prompting diagnostic information rather
than passive behavioral observation. Human Communication Research (in press).
Levine, T. R., Clare, D. D., Green, T., Serota, K. B., & Park, H. S. (2014). The effects of truth-
lie base rate on interactive deception detection accuracy. Human Communication Research.
Advance online publication.
Levine, T. R., Kim, R. K., & Blair, J. P. (2010). (In)accuracy at detecting true and false confes-
sions and denials: An initial test of a projected motive model of veracity judgments. Human
Communication Research, 36, 81-101.
Levine, T. R., Kim, R. K., & Hamel, L. M. (2010). People lie for a reason: An experimental test
of the principle of veracity. Communication Research Reports, 27, 271-285.
Levine, T. R., Kim, R. K., Park, H. S., & Hughes, M. (2006). Deception detection accuracy
is a predictable linear function of message veracity base-rate: A formal test of Park and
Levine’s probability model. Communication Monographs, 73, 243-260.
Levine, T. R., Park, H. S., & McCornack, S. A. (1999). Accuracy in detecting truths and lies:
Documenting the “veracity effect.” Communication Monographs, 66, 125-144.
Levine, T. R., Serota, K. B., Carey, F., & Messer, D. (2013). Teenagers lie a lot: A further
investigation into the prevalence of lying. Communication Research Reports, 30, 211-220.
Levine, T. R., Serota, K. B., Shulman, H., Clare, D. D., Park, H. S., Shaw, A. S., Shim, J. C.,
& Lee, J. H. (2011). Sender demeanor: Individual differences in sender believability have
a powerful impact on deception detection judgments. Human Communication Research,
37, 377-403.
Levine, T. R., Shaw, A., & Shulman, H. (2010). Increasing deception detection accuracy with
strategic questioning. Human Communication Research, 36, 216-231.
McCornack, S. A. (1992). Information manipulation theory. Communication Monographs, 59,
1-16.
McCornack, S. A., & Levine, T. R. (1990). When lovers become leery: The relationship between
suspicion and accuracy in detecting deception. Communication Monographs, 57, 219-230.
McCornack, S. A., Morrison, K., Paik, J. E., Wiser, A. M., & Zhu, X. (2014). Information
manipulation theory 2: A propositional theory of deceptive discourse production. Journal
of Language and Social Psychology.
McCornack, S. A., & Parks, M. R. (1986). Deception detection and relationship development:
The other side of trust. In M. L. McLaughlin (Ed.), Communication Yearbook 9 (pp. 377-
389). Beverly Hills, CA: Sage.
McNally, L., & Jackson, A. L. (2013). Cooperation creates selection for tactical deception.
Proceedings of the Royal Society B, 280, 1-7.
Park, H. S., & Levine, T. R. (2001). A probability model of accuracy in deception detection
experiments. Communication Monographs, 68, 201-210.
Park, H. S., Levine, T. R., McCornack, S. A., Morrison, K., & Ferrerra, M. (2002). How people
really detect lies. Communication Monographs, 69, 144-157.
Reimer, T., Blair, J. P., & Levine, T. R. (2014). The role of consistency in detecting decep-
tion: The superiority of correspondence over coherence (Unpublished manuscript). Purdue
University, West Lafayette.
Serota, K. B., & Levine, T. R. (2014). A few prolific liars: Variation in the prevalence of lying.
Journal of Language and Social Psychology.
Serota, K. B., Levine, T. R., & Boster, F. J. (2010). The prevalence of lying in America: Three
studies of reported deception. Human Communication Research, 36, 1-24.
Trivers, R. (2011). The folly of fools: The logic of deceit and self-deception in human life. New
York, NY: Basic.
Vrij, A., Granhag, P. A., & Porter, S. B. (2010). Pitfalls and opportunities in nonverbal and
verbal lie detection. Psychological Science in the Public Interest, 11, 89-121.
Zuckerman, M., DePaulo, B. M., & Rosenthal, R. (1981). Verbal and nonverbal communication
of deception. In L. Berkowitz (Ed.), Advances in experimental social psychology (Vol. 14,
pp. 1-59). New York, NY: Academic Press.
Author Biography
Timothy R. Levine is a professor in the School of Media and Communication, Korea University,
Seoul, South Korea. He has published extensively on the topics of deception, credibility assess-
ment, and interpersonal communication.
... This requires an investigation into the differences between AI-generated text that we suggest is inherently false (e.g., an AI wrote like it had an experience, but this experience could never have occurred) and human-generated text that is intentionally false when writing about personal experiences (e.g., a human wrote like it had an experience, but this experience did not occur and it is therefore deceptive). Such an evaluation can illuminate how intentionality (e.g., being purposefully misleading and withholding the truth from others; Levine, 2014) is revealed in language. Most agree that an AI cannot have intentionality because this requires consciousness (Husserl, 1913), but a human can. ...
... Decades of academic research suggest deception, defined as intentionally and purposefully misleading another person who is unaware of the truth (Levine, 2014;Markowitz, 2020b;Vrij, 2018), has a different linguistic signature than honesty. A recent meta-analysis of over 40 studies, for example, revealed that the relationship between deception and language produced small effect sizes and the patterns are contingent on several moderators (e.g., the interaction level such as no interaction, CMC, an interview, or person-to-person interaction; Hauch et al., 2015). ...
... Tools can also be used as weapons. Without proper thought and care for people who will receive such information and unequivocally treat it as true (e.g., the truth bias; Levine, 2014), there may be many negative downstream effects. ...
Article
To the human eye, AI-generated outputs of large language models have increasingly become indistinguishable from human-generated outputs. Therefore, to determine the linguistic properties that separate AI-generated text from human-generated text, we used a state-of-the-art chatbot, ChatGPT, and compared how it wrote hotel reviews to human-generated counterparts across content (emotion), style (analytic writing, adjectives), and structural features (readability). Results suggested AI-generated text had a more analytic style and was more affective, more descriptive, and less readable than human-generated text. Classification accuracies of AI-generated versus human-generated texts were over 80%, far exceeding chance (∼50%). Here, we argue AI-generated text is inherently false when communicating about personal experiences that are typical of humans and differs from intentionally false human-generated text at the language level. Implications for AI-mediated communication and deception research are discussed.
... This is particularly evident in distinguishing deceptive statements, where accuracy hovers around 47%, compared to 61% for honest ones (Bond & DePaulo, 2006). According to Truth-by-Default-Theory (TDT), this is because people either (1) fail to consider the possibility of deceit or (2) fail to obtain enough evidence to confirm the suspicion of deception (Levine, 2014). This results in a general assumption or bias that most messages are truthful. ...
... 15 Average deception is 31.0 percent, which suggests that CEOs are honest most of the time, although not always honest, as is commonly found in prior studies of deception (Levine, 2014). Table 2 shows the first-stage linear fixed-effects panel estimation and the second-stage ordered probit results for all models. ...
... We argue that deceptive CEOs receive better recommendations because analysts have a default-to-truth bias. Due to the low base rate of deception and difficulty of detection, analysts tend to assume that the CEO is being truthful (Levine, 2014). As a result, analysts are influenced by CEO deception and provide more positive recommendations. ...
Article
Research Summary Organizations are punished by analysts and investors when material deceit by their CEO is uncovered. However, few studies examine analysts' responses to deceptive CEOs before their deceit is publicly known. We use machine learning (ML) models to operationalize the likelihood of CEO deception as well as analysts' suspicion of CEO deception on earnings calls. Controlling for analysts' suspicion of deception, we show that analysts are prone to assigning superior recommendations to deceptive CEOs, particularly those deemed as All‐Star analysts. We find that the benefits of CEO deception are lower for habitual deceivers, pointing to diminishing returns of deception. This study contributes to corporate governance research by enhancing our understanding of analysts' reactions to CEO deception prior to public exposure of any fraud or misconduct. Managerial Summary Undetected deception by CEOs can impact the stock market by influencing analysts' recommendations. Using an advanced ML model, our study measures the likelihood of deception more accurately than previous methods and identifies a tendency among financial analysts to favor deceptive CEOs, particularly high‐status analysts. However, deception is less effective with analysts who are repeatedly exposed to deception. These findings underscore the importance of awareness of potential deception in CEO communications and the need for continuous scrutiny, learning, and adaptability among analysts.
... There is perhaps no more fundamental divide in the deception literature than over the utility of demeanor in deception detection, and we will find just how influential a politician's demeanor is in driving voters' judgments of a political interview. One theory of deception detection, truth-default theory (TDT; Levine, 2014), places itself in opposition to deception theories that find diagnostic utility in demeanor (Levine, 2022). TDT emphasizes that people tend to believe what they are told and typically only question the truth of an assertion when told otherwise. ...
... Deception comes in many formsespecially in politicsbut the focus of our paper is evasion in political interviews (Levine, 2014). Deceptive evasion involves replying to a question by changing the topic covertly. ...
... The process whereby voters process conflicting claims of veracity may be illuminated via truth-default theory (TDT: Levine, 2014;Levine, 2020). People generally have a "truthdefault," defined as a passive state of presuming honesty without actively considering the possibility of deceit (Levine, 2014). ...
... That is, people tend to use their own behavior to infer how others would act in the same situation 22 . Relatedly, because people generally operate on a truth bias and rarely consider the possibility of deception in most interactions 23,24 , one's own deception may act as a trigger that raises suspicion about trust violations. Consistent with this explanation, a phenomenon known as deceiver's distrust has been described, finding that senders who tell lies (vs. ...
... Participants were also asked to complete the Relational Communication Scale 36 , rate their partner on basic dimensions of social evaluation (e.g., warmth, competence, morality) 37 , likeability, and indicate whether they thought their partner was lying to them. It is worth noting that only n = 10 of 209 (i.e., 4.8%) of the receivers responded 'yes' to the question, "At any point during the conversation did you think that your partner was lying to you?", indicating a strong truth bias among receivers 24 . A full list of measures and the order in which they were asked can be found in the Qualtrics file, posted to OSF (https://osf.io/ezn7p/). ...
Article
Full-text available
Lies can have major consequences if undetected. Research to date has focused primarily on the consequences of deception for receivers once lies are discovered. We advance deception research and relationship science by studying the social consequences of deception for the sender—even if their lies remain undetected. In a correlational study of video conversations (Study 1; N = 776), an experimental study of text conversations (Study 2; N = 416), and a survey of dispositional tendencies (Study 3; N = 399), we find consistent evidence that people who lie tend to assume that others are lying too, and this impedes their ability to form social connections. The findings provide insight into how (dis)honesty and loneliness may go together, and suggest that lies—even when undetected—harm our relationships.
... Deception detection has remained an area of vested interest in fields like psychology, forensics, law, and computational linguistics for a myriad of reasons like understanding behavioral patterns of lying (Newman et al., 2003;DePaulo and Morris, 2004), identifying fabricated information (Conroy et al., 2015), distinguishing false statements or testimonies (Şen et al., 2022) and detecting deception in online communication (Hancock, 2009). These are relevant tasks because of the truth bias, which is the inherent inclination of humans to actively believe or passively presume that a statement made by another person is true and accurate by default, without the need for evidence to substantiate this belief (Levine, 2014). While this facilitates efficient communication, it also makes people susceptible to deception, especially in online media where digital deception (Hancock, 2009) manifests in many forms like fake news, misleading advertisements, impersonation and scams. ...
... Extant research has found that adults tend to be more accurate at identifying children's true statements as true (60% accuracy rate) than they are at identifying children's false statements as false (49% accuracy rate; see Gongola et al., 2017 for meta-analysis). Furthermore, consistent with the patterns observed in their veracity judgments of other adults (e.g., Levine et al., 1999;Levine, 2014), adults seem biased toward wrongfully labeling children's false statements as being true (Gongola et al., 2017). One reason for this pattern may be that adults often believe that children are simply unlikely to tell a lie (Quas et al., 2005;Goodman et al., 2006;Talwar et al., 2006). ...
Article
Full-text available
Introduction: Seldom has work investigated systematic biases in adults’ truth and lie judgments of children’s reports. Research demonstrates that adults tend to exhibit a bias toward believing a child is telling the truth, but it is unknown whether this truth bias applies equally to all children. Given the pervasiveness of racial prejudice and anti-Black racism in the United States, the current study examined whether adults are more or less likely to believe a child is telling the truth based on the race of the child (Black or White), the race of the adult perceiver (Black or White), and the perceiver’s concerns regarding appearing unprejudiced. Methods: Using an online data-collection platform, 593 Black and White American adults reviewed fictitious vignettes in which a child denied committing a misbehavior at school (e.g., damaging a laptop). The race of the child in the vignette was manipulated using an AI-generated photo of either a Black child or a White child. After reading each story, participants provided a categorical veracity judgment by indicating whether they believed the child in the story was lying (and therefore committed the misdeed) or telling the truth (and was innocent), as well as rated how honest or deceptive the child was being on a continuous scale. Participants also completed questionnaires assessing their internal (personal) and external (normative) motivations to respond in non-prejudiced ways. Results and discussion: Results indicated that systematic racial biases occur in adults’ veracity judgments of children’s statements. Both Black and White participants exhibited a truth bias in their veracity judgments of Black children, but not when evaluating the deceptiveness of White children. 
Consistent with the prejudice-related concerns hypothesis, the observed truth bias toward Black children was moderated by individual differences in participants’ desire to respond without prejudice and whether those motivations stem from external or internal sources. The current findings present novel evidence regarding racial bias and prejudice-related concerns as potential barriers to making veracity judgments of children’s statements and, ultimately, successful lie detection.
Article
Frequent and habitual engagement with social media can reinforce certain activities such as sharing, clicking hyperlinks, and liking, which may be performed with insufficient cognition. In this study, we aimed to examine the associations between personality traits, habits, and information processing to identify social media users who are susceptible to phishing attacks. Our experimental data consisted of 215 social media users. The results revealed two important findings. First, users who scored high on the personality traits of extraversion, agreeableness, and neuroticism were more likely to engage in habitual behaviors that increase their susceptibility to phishing attacks, whereas those who scored high on conscientiousness were less likely. Second, users who habitually react to social media posts were more likely to apply heuristic processing, making them more susceptible to phishing attacks than those who applied systematic processing.
Preprint
Full-text available
Given that human accuracy in detecting deception has been shown not to exceed chance levels, several automated verbal lie-detection techniques employing machine learning and Transformer models have been developed to reach higher accuracy. This study is the first to explore the performance of a Large Language Model, FLAN-T5 (small and base sizes), in a lie-detection classification task on three English-language datasets encompassing personal opinions, autobiographical memories, and future intentions. After performing stylometric analysis to describe linguistic differences across the three datasets, we tested the small- and base-sized FLAN-T5 in three scenarios using 10-fold cross-validation: one with training and test sets drawn from the same dataset, one with the training set drawn from two datasets and the test set drawn from the third remaining dataset, and one with training and test sets drawn from all three datasets. We reached state-of-the-art results in Scenarios 1 and 3, outperforming previous benchmarks. The results also revealed that performance depended on model size, with larger models exhibiting higher performance. Furthermore, stylometric analysis was used to carry out an explainability analysis, finding that linguistic features associated with the Cognitive Load framework may influence the model's predictions.
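The evaluation design described above (Scenario 1: train and test folds drawn from the same dataset, scored by 10-fold cross-validation) can be illustrated with a minimal sketch. The FLAN-T5 model and the three real datasets are not reproduced here; a toy corpus and a trivial keyword rule stand in for the classifier, so only the fold-splitting and scoring loop reflects the abstract's method.

```python
import random

def make_toy_data(n=100, seed=0):
    # Hypothetical stand-in corpus: short statements labeled "truth" or "lie".
    rng = random.Random(seed)
    data = []
    for _ in range(n):
        if rng.random() < 0.5:
            data.append(("i truly went there and saw it", "truth"))
        else:
            data.append(("honestly i definitely never did anything", "lie"))
    return data

def keyword_classifier(text):
    # Illustrative fixed rule only (not FLAN-T5): treats hedging-style
    # words as deceptive markers.
    return "lie" if any(w in text for w in ("honestly", "definitely", "never")) else "truth"

def ten_fold_accuracy(data, k=10):
    # Scenario-1-style evaluation: split into k folds, score each held-out
    # fold, and average. A learned model would be fit on the remaining
    # folds before each scoring step; the fixed rule needs no training.
    fold_size = len(data) // k
    accuracies = []
    for i in range(k):
        test = data[i * fold_size:(i + 1) * fold_size]
        correct = sum(keyword_classifier(text) == label for text, label in test)
        accuracies.append(correct / len(test))
    return sum(accuracies) / k

print(ten_fold_accuracy(make_toy_data()))
```

Because the toy corpus is perfectly separable by the keyword rule, the averaged fold accuracy here is 1.0; with real data and a trained model, per-fold accuracies would vary and the average would estimate generalization performance.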
Article
Purpose Disinformation, false information designed with the intention to mislead, can significantly damage organizational operation and reputation, interfering with communication and relationship management in a wide breadth of risk and crisis contexts. Modern digital platforms and emerging technologies, including artificial intelligence (AI), introduce novel risks in crisis management (Guthrie and Rich, 2022). Disinformation literature in security and computer science has assessed how previously introduced technologies have affected disinformation, demanding a systematic and coordinated approach for sustainable counter-disinformation efforts. However, there is a lack of theory-driven, evidence-based research and practice in public relations that advises how organizations can effectively and proactively manage risks and crises driven by AI (Guthrie and Rich, 2022). Design/methodology/approach As a first step in closing this research-practice gap, the authors first synthesize theoretical and technical literature characterizing the effects of AI on disinformation. Upon this review, the authors propose a conceptual framework for disinformation response in the corporate sector that assesses (1) technologies affecting disinformation attacks and counterattacks and (2) how organizations can proactively prepare and equip communication teams to better protect businesses and stakeholders. Findings This research illustrates that future disinformation response efforts will not be able to rely solely on detection strategies, as AI-created content quality becomes more and more convincing (and ultimately, indistinguishable), and that future disinformation management efforts will need to rely on content influence rather than volume (due to emerging capabilities for automated production of disinformation). 
Built upon these fundamental, literature-driven characteristics, the framework provides organizations with actor-level and content-level perspectives for influence and discusses their implications for disinformation management. Originality/value This research provides a theoretical basis and practitioner insights by anticipating how AI technologies will impact corporate disinformation attacks and outlining how companies can respond. The proposed framework provides a theory-driven, practical approach for effective, proactive disinformation management systems with the capacity and agility to detect risks and mitigate crises driven by evolving AI technologies. Together, this framework and the discussed strategies offer great value to forward-looking disinformation management efforts. Subsequent research can build upon this framework as AI technologies are deployed in disinformation campaigns, and practitioners can leverage this framework in the development of counter-disinformation efforts.
Article
Full-text available
This piece was the first in history to posit the notion of "truth-bias," which has now become foundational within the field of deception. It also posits what has come to be known as The McCornack-Parks Model of Deception Detection; namely, that as relational intimacy increases, detection confidence increases, truth-bias increases, and detection accuracy decreases.
Article
Full-text available
The question of whether discernible differences exist between liars and truth tellers has interested professional lie detectors and laypersons for centuries. In this article we discuss whether people can detect lies when observing someone's nonverbal behavior or analyzing someone's speech. An article about detecting lies by observing nonverbal and verbal cues is overdue. Scientific journals regularly publish overviews of research articles on nonverbal and verbal cues to deception, but they offer no explicit guidance about what lie detectors should do, and should avoid doing, to catch liars. We present such guidance in this article. The article consists of two parts. The first section focuses on pitfalls to avoid and outlines the major factors that lead to failures in catching liars. Sixteen reasons are clustered into three categories: (a) a lack of motivation to detect lies (because accepting a fabrication might sometimes be more tolerable or pleasant than understanding the truth), (b) difficulties associated with lie detection, and (c) common errors made by lie detectors. We argue that the absence of nonverbal and verbal cues uniquely related to deceit (akin to Pinocchio's growing nose), the typically small differences between truth tellers and liars, and the fact that liars actively try to appear credible all contribute to making lie detection a difficult task. Other factors that add to the difficulty are that lies are often embedded in truths, that lie detectors often do not receive adequate feedback about their judgments and therefore cannot learn from their mistakes, and that some methods of detecting lies violate conversational rules and are therefore difficult to apply in real life. The final factor in this category is that some people are just very good liars.
The common errors lie detectors make that we have identified are examining the wrong cues (in part because professionals are taught these wrong cues); placing too great an emphasis on nonverbal cues (in part because training encourages such emphasis); too readily interpreting certain behaviors, particularly signs of nervousness, as diagnostic of deception; placing too great an emphasis on simplistic rules of thumb; and neglecting inter- and intrapersonal differences. We also discuss two final errors: that many interview strategies advocated by police manuals can impair lie detection, and that professionals tend to overestimate their ability to detect deceit. The second section of this article discusses opportunities for maximizing one's chances of detecting lies and elaborates strategies for improving one's lie-detection skills. Within this section, we first provide five recommendations for avoiding the common errors in detecting lies identified earlier in the article. Next, we discuss a relatively recent wave of innovative lie-detection research that goes one step further and introduces novel interview styles aimed at eliciting and enhancing verbal and nonverbal differences between liars and truth tellers by exploiting their different psychological states. In this part of the article, we encourage lie detectors to use an information-gathering rather than an accusatory approach and to ask liars questions they have not anticipated. We also encourage lie detectors to ask temporal questions (questions related to the particular time the interviewee claims to have been at a certain location) when a scripted answer (e.g., "I went to the gym") is expected. For attempts to detect lying about opinions, we introduce the devil's advocate approach, in which investigators first ask interviewees to argue in favor of their personal view and then ask them to argue against it.
The technique is based on the principle that it is easier for people to come up with arguments in favor than against their personal view. For situations in which investigators possess potentially incriminating information about a suspect, the "strategic use of evidence" technique is introduced. In this technique, interviewees are encouraged to discuss their activities, including those related to the incriminating information, while being unaware that the interviewer possesses this information. The final technique we discuss is the "imposing cognitive load" approach. Here, the assumption is that lying is often more difficult than truth telling. Investigators could increase the differences in cognitive load that truth tellers and liars experience by introducing mentally taxing interventions that impose additional cognitive demand. If people normally require more cognitive resources to lie than to tell the truth, they will have fewer cognitive resources left over to address these mentally taxing interventions when lying than when truth telling. We discuss two ways to impose cognitive load on interviewees during interviews: asking them to tell their stories in reverse order and asking them to maintain eye contact with the interviewer. We conclude the article by outlining future research directions. We argue that research is needed that examines (a) the differences between truth tellers and liars when they discuss their future activities (intentions) rather than their past activities, (b) lies told by actual suspects in high-stakes situations rather than by university students in laboratory settings, and (c) lies told by a group of suspects (networks) rather than individuals. An additional line of fruitful and important research is to examine the strategies used by truth tellers and liars when they are interviewed. 
As we will argue in the present article, effective lie-detection interview techniques take advantage of the distinctive psychological processes of truth tellers and liars, and obtaining insight into these processes is thus vital for developing effective lie-detection interview tools.
Article
Full-text available
Information Manipulation Theory 2 (IMT2) is a propositional theory of deceptive discourse production that conceptually frames deception as involving the covert manipulation of information along multiple dimensions and as a contextual problem-solving activity driven by the desire for quick, efficient, and viable communicative solutions. IMT2 is rooted in linguistics, cognitive neuroscience, speech production, and artificial intelligence. Synthesizing these literatures, IMT2 posits a central premise with regard to deceptive discourse production and 11 empirically testable (that is, falsifiable) propositions deriving from this premise. These propositions are grouped into three propositional sets: intentional states (IS), cognitive load (CL), and information manipulation (IM). The IS propositions pertain to the nature and temporal placement of deceptive volition, in relation to speech production. The CL propositions clarify the interrelationship between load, discourse, and context. The IM propositions identify the specific conditions under which various forms of information manipulation will (and will not) occur.
Article
Full-text available
Although it is commonly believed that lying is ubiquitous, recent findings show large, individual differences in lying, and that the proclivity to lie varies by age. This research surveyed 58 high school students, who were asked how often they had lied in the past 24 hr. It was predicted that high school students would report lying with greater frequency than previous surveys with college student and adult samples, but that the distribution of reported lies by high school students would exhibit a strongly and positively skewed distribution similar to that observed with college student and adult samples. The data were consistent with both predictions. High school students in the sample reported telling, on average, 4.1 lies in the past 24 hr—a rate that is 75% higher than that reported by college students and 150% higher than that reported by a nationwide sample of adults. The data were also skewed, replicating the “few prolific liar” effect previously documented in college student and adult samples.
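The rate comparisons reported in the abstract above can be checked with simple arithmetic. The sketch below takes only the figures stated there (a mean of 4.1 lies per 24 hours, described as 75% higher than the college-student rate and 150% higher than the adult rate) and derives the comparison-group means those percentages imply.

```python
# Mean lies per 24 hours reported by the high school sample (from the abstract).
high_school_rate = 4.1

# "75% higher than college students" means high_school_rate = 1.75 x college rate.
college_rate = high_school_rate / 1.75

# "150% higher than adults" means high_school_rate = 2.50 x adult rate.
adult_rate = high_school_rate / 2.50

print(round(college_rate, 2))  # implied college mean: 2.34 lies per day
print(round(adult_rate, 2))    # implied adult mean: 1.64 lies per day
```

The implied base rates of roughly 2.3 lies per day for college students and 1.6 for adults are consistent with the earlier survey findings the abstract compares against.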
Article
Inconsistency is often considered an indication of deceit. The conceptualization of consistency used in deception research, however, has not made a clear distinction between two concepts long differentiated by philosophers: coherence and correspondence. The existing literature suggests that coherence is not generally useful for deception detection. Correspondence, however, appears to be quite useful. The present research developed a model of how correspondence is utilized to make judgments, and this article reports on four studies designed to elaborate on the model. The results suggest that judges attend strongly to correspondence and that they do so in an additive fashion. As noncorrespondent information accumulates, an increasingly smaller proportion of judges make truthful assessments of guilty suspects. This work provides a basic framework for examining how information is utilized to make deception judgments and forms the correspondence and coherence module of truth-default theory.
Article
Research relevant to psychotherapy regarding facial expression and body movement has shown that the kind of information that can be gleaned from the patient's words (information about affects, attitudes, interpersonal styles, and psychodynamics) can also be derived from his concomitant nonverbal behavior. The study explores the interaction situation and considers how, within deception interactions, differences in neuroanatomy and cultural influences combine to produce specific types of body movements and facial expressions which escape efforts to deceive and emerge as leakage or deception clues.