Article

Truth-Default Theory (TDT): A Theory of Human Deception and Deception Detection

Authors:
Timothy R. Levine

Abstract

Truth-Default Theory (TDT) is a new theory of deception and deception detection. This article offers an initial sketch of, and brief introduction to, TDT. The theory seeks to provide an elegant explanation of previous findings as well as point to new directions for future research. Unlike previous theories of deception detection, TDT emphasizes contextualized communication content in deception detection over nonverbal behaviors associated with emotions, arousal, strategic self-presentation, or cognitive effort. The central premises of TDT are that people tend to believe others and that this "truth-default" is adaptive. Key definitions are provided. TDT modules and propositions are briefly explicated. Finally, research consistent with TDT is summarized.


... This issue is important because when a journalist accuses a politician of deception, viewers tend to believe the interviewer (Clementson, 2019). This trend stems from viewers' truth-default (Levine, 2014): the interviewer activates suspicion, and viewers tend to distrust the politician. However, meta-analysis of political 'fact-checking' indicates that media sources who essentially call a politician a liar tend to backfire among partisans (Walter et al., 2020). ...
... However, meta-analysis of political 'fact-checking' indicates that media sources who essentially call a politician a liar tend to backfire among partisans (Walter et al., 2020). Moreover, the extent to which an interviewer's claim of deception is believed may depend on partisan bias perceived by viewers, based on social identity theory (SIT; Tajfel & Turner, 1979) and truth-default theory (TDT; Levine, 2014). The role of hostile media perceptions (Vallone et al., 1985) also arises. ...
... Complementing SIT is a theory of deception detection called truth-default theory (TDT; Levine, 2014). TDT holds that people have an innate presumption of belief in salient ingroups and an assumption of disbelief with outgroups. ...
Article
Full-text available
The growth of partisan media presents new challenges for political consultants. Informed by social identity theory and truth-default theory, this paper examines how U.S. voters react to politicians in combative news interviews. Experiment 1 (N = 320) establishes that an ingroup politician gains more trust among ingroup voters when interviewed by cross-partisan media accusing the politician of deception than when interviewed more congenially by copartisan media. Experiment 2 (N = 131) finds that perceived media hostility increases Democrats’ and Republicans’ trust in a politician regardless of whether the interviewer accuses the politician of deception or is congenial. Experiment 3 (N = 126) replicates the findings when a Democrat is accused of deception on Fox News. The study tested a moderated mediation model, finding that Republican and Democratic voters do not react differently in the process. The discussion highlights the practical and theoretical implications of hostile media perceptions and ingroup–outgroup partisan bias in a polarized media environment with conflicting exposure to live fact-checking and deception detection.
... By contrast, textual disinformation typically lacks heuristic cues that directly signal reality, and it often requires individuals to assess the message arguments, thereby reconsidering the logic behind the disinformation in judging its credibility or believability. Consequently, the heuristic cues involved in deepfakes may cause the suspicion of deception to be sidestepped (Levine, 2014), thus leading to stronger misperceptions and lower fact-checking intentions than those elicited by textual disinformation. However, this still leaves the question of why deepfakes lead to greater misperceptions than textual information in the first place. ...
... Deepfakes may immerse recipients in a narrative that strongly resembles reality, thus creating a perception of authenticity. By mimicking a direct index of truth, deepfakes may circumvent suspicion and hinder the active detection of deception, thereby keeping recipients in a truth-default state of information processing (Levine, 2014). ...
... Indeed, recent research suggests that multimodal disinformation facilitates automatic processing by immediately provoking anxiety across individuals with different levels of issue relevance (Lee et al., 2023). Based on this finding, one might expect that even people who perceive the issue as highly relevant to them may still process the associated deepfake while sidestepping the activation of suspicion, as has been shown in a study on disinformation about cancer prevention (Levine, 2014). However, according to the ELM framework, individuals who perceive the issue as personally relevant are prone to scrutinize deepfake arguments more critically, and they are therefore more likely to detect deceptive messages rather than falling for the realism heuristics employed by deepfakes. ...
Article
Among its many applications, artificial intelligence (AI) can be used to manipulate audiovisual content for malicious intent. Such manipulations – which are commonly known as deepfakes – present a significant obstacle to maintaining truth in sectors that serve the public interest, and it is necessary to take proactive fact-checking actions to respond to the threat they pose. The current study conducted an online experiment focusing on the topic of cancer prevention. In this experiment, we further examined the influence of individual differences, such as issue relevance and motivations, in health information consumption. When compared to textual disinformation, health-related audiovisual deepfakes were found to have a significant effect on increasing misperceptions but no such effect on fact-checking intentions. We also found that exposure to such deepfakes discouraged individuals with high issue relevance from engaging in fact-checking. Deepfakes were also shown to have a particularly potent effect on increasing misperceptions among individuals with high (illusory) accuracy motivations. These findings underscore the need for increased awareness regarding the detrimental effects of health deepfakes in particular and the urgent importance of further elucidating individual variations to ultimately develop more comprehensive approaches to combat deepfakes.
... Several deception detection theories exist (for a review, see Masip, 2017). Truth-Default Theory (TDT) is one of the most well-supported theories (Levine, 2014a; 2014b; Serota et al., 2021) and has been useful for understanding social engineering (Armstrong et al., 2023). ...
... According to TDT, people assume conversation partners are honest unless something "triggers" them to think otherwise (Levine, 2014b). Conversation partners are usually honest (Serota et al., 2021) and lies are typically innocuous (Serota et al., 2021), so it is generally adaptive to assume communication partners are honest (Levine, 2020). ...
... Conversation partners are usually honest (Serota et al., 2021) and lies are typically innocuous (Serota et al., 2021), so it is generally adaptive to assume communication partners are honest (Levine, 2020). Potential triggers include a third party's warning about potential deception, as well as conversation partners having an obvious motivation for deception, saying something that contradicts either something they said earlier or something the person knows to be true, or lacking an honest demeanor (Levine, 2014b). An honest demeanor includes confidence and composure, a pleasant, friendly, engaged and involved interaction style, and giving plausible explanations. ...
Article
Purpose: This study aimed to investigate how honest participants perceived an attacker to be during shoulder surfing scenarios that varied in terms of which Principle of Persuasion in Social Engineering (PPSE) was used, whether perceived honesty changed as scenarios progressed, and whether any changes were greater in some scenarios than others. Design/methodology/approach: Participants read one of six shoulder surfing scenarios. Five depicted an attacker using one of the PPSEs. The other depicted an attacker using as few PPSEs as possible, which served as a control condition. Participants then rated perceived attacker honesty. Findings: The results revealed that honesty ratings were equal across conditions during the beginning of the conversation, with participants in each condition perceiving the attacker to be honest; perceived attacker honesty declined when the attacker requested that the target perform an action that would afford shoulder surfing; this decline was greater when the Distraction and Social Proof PPSEs were used, with participants perceiving the attacker as dishonest when such requests used those PPSEs; and perceived attacker honesty did not change when the attacker used the target’s computer. Originality/value: To the best of the authors’ knowledge, this experiment is the first to investigate how persuasion tactics affect perceptions of attackers during shoulder surfing attacks. These results have important implications for shoulder surfing prevention training programs and penetration tests.
... There are numerous explanations for why neurotypical individuals' veracity decisions are so inaccurate. The truth-default theory (TDT; Levine, 2014) proposes that humans are naturally truth-biased; we tend to believe that people only communicate things that are true, based on Grice's (1989) conversational maxim of quality. For the truth-default to be abandoned, trigger events, such as perceiving deception cues, must be experienced by the recipient of communication. ...
... Although TDT (Levine, 2014) proposes that perception of deception cues can lead to judgments of deceit, this process may be erroneous due to lie-detectors' overreliance on unreliable cues (Vrij, 2008). It is well documented that lie-detectors are heavily dependent on non-verbal deception cues (e.g., averted eye-gaze; Global Deception Research Team, 2006) even though deceit cannot be reliably inferred from these (DePaulo et al., 2003). ...
... While Levine (2014) claims the truth-default to be universal, it is possible that this state may vary across neurodiverse populations and the extent to which autistic adults display a truth-bias is currently unclear. On one hand, over 40% of autistic adults experience co-occurring anxiety disorders (Zaboski and Storch, 2018), potentially increasing their threat sensitivity due to hypervigilance and attentional biases for threat-related stimuli lowering their truth-bias threshold (Mogg et al., 2000). ...
Article
Full-text available
Due to differences in social communication and cognitive functioning, autistic adults may have greater difficulty engaging in and detecting deception compared to neurotypical adults. Consequently, autistic adults may experience strained social relationships or face increased risk of victimization. It is therefore crucial that research investigates the psychological mechanisms that are responsible for autistic adults’ difficulties in the deception process in order to inform interventions required to reduce risk. However, weaknesses of extant research exploring deception in autism include a heavy focus on children and limited theoretical exploration of underlying psychological mechanisms. To address these weaknesses, this review aims to introduce a system-level theoretical framework to the study of deception in autistic adulthood: The Brunswik Lens Model of Deception. Here, we provide a comprehensive account of how autism may influence all processes involved in deception, including: Choosing to Lie (1), Producing Deception Cues (2), Perceiving Deception Cues (3), and Making the Veracity Decision (4). This review also offers evidence-based, theoretical predictions and testable hypotheses concerning how autistic and neurotypical adults’ behavior may differ at each stage in the deception process. The call to organize future research in relation to a joint theoretical perspective will encourage the field to make substantive, theoretically motivated progress toward the development of a comprehensive model of deception in autistic adulthood. Moreover, the utilization of the Brunswik Lens Model of Deception in future autism research may assist in the development of interventions to help protect autistic adults against manipulation and victimization.
... Though adults are poor at detecting both children's and adults' lies, response biases have been found to differ across adult and child stimuli (Edelstein et al., 2006), warranting a separate investigation within childhood. If we replicate these effects within the context of childhood, this can not only advance the generalizability of context effects to a broader developmental range and provide further empirical support for truth-default theory (Levine, 2014), but it can also elucidate whether present conclusions that adults are poor at detecting children's lies (Gongola et al., 2017) are perhaps confined to certain experimental settings. It is also particularly important to extend this line of research to children's communication because the misinterpretation of children's reports has the potential to carry consequences in certain legal settings (Bala et al., 2005; Gongola et al., 2017). ...
... As contextual and surrounding information is provided in legal proceedings, a study that assesses how adults detect children's lies when contextual information is provided is warranted. According to truth-default theory (Levine, 2014), and in line with research with adult liars (Blair et al., 2019), learning contextual information about the event in question should alter (and enhance) one's ability to detect dishonest communication. ...
... At trial, judges and jurors are provided with information surrounding the event, detailing the events and interactions that occurred between the accuser and the accused. There are often motivations proposed for why the child's testimony may be truthful or fabricated (Stolzenberg & Lyon, 2014); however, these details have yet to be provided in child experimental lie-detection studies despite theory (Levine, 2014) suggesting that this information could alter detection patterns. In the present study, we provided contextual information about an event that occurred between an adult and child to assess if having access to this information enhanced lie-detection performance. ...
Article
Full-text available
The present research examined how contextual/coaching information and interview format influenced adults' ability to detect children's lies. Participants viewed a series of child interview videos in which children provided either a truthful report or a deceptive report to conceal a co-transgression; participants reported whether they thought each child was lying or telling the truth. In Study 1 (N = 400), participants were assigned to one of the following conditions, which varied in the type of interview shown and whether context about the event in question was provided: full interview + context, recall questions + context, recognition questions + context, or full interview only (no context). Providing context (information about the potential co-transgression and coaching) significantly enhanced overall and lie accuracy; the benefit was greatest when context accompanied the recall interview, though participants also held a lie bias. In Study 2 (N = 100), participants watched the full interview with simplified coaching information. Detection accuracy was reduced slightly but remained well above chance, and the lie bias was eliminated. Thus, detection performance is improved when participants are given a child's free-recall interview along with background information on the event and potential coaching, though providing specific coaching details introduces a lie bias.
... Despite these different definitions, one key element in defining photorealism is the deliberate attempt to replicate the appearance of reality, challenging perceptions by blurring the line between the real and the artificial. Research has shown that distinguishing between a photographic image and a photorealistic one is increasingly challenging, even for experts [28,51], as audiences are likely to believe that images are authentic due to their truth default [29]. ...
... Our findings that AIGIs often mix realistic imagery with surreal content, particularly in depictions of human figures such as politicians and celebrities, also challenge existing theories of perceived realism and authenticity. According to the Truth Default Theory (TDT), human cognition tends to default to trust, minimizing the cognitive cost of questioning information authenticity [29]. AIGIs may exploit this trust by seamlessly merging real and fictional elements in high-quality visuals, leading viewers to overlook surrealistic or impossible aspects. ...
Preprint
Full-text available
Advances in generative models have created Artificial Intelligence-Generated Images (AIGIs) nearly indistinguishable from real photographs. Leveraging a large corpus of 30,824 AIGIs collected from Instagram and Twitter, and combining quantitative content analysis with qualitative analysis, this study unpacks AI photorealism of AIGIs from four key dimensions, content, human, aesthetic, and production features. We find that photorealistic AIGIs often depict human figures, especially celebrities and politicians, with a high degree of surrealism and aesthetic professionalism, alongside a low degree of overt signals of AI production. This study is the first to empirically investigate photorealistic AIGIs across multiple platforms using a mixed-methods approach. Our findings provide important implications and insights for understanding visual misinformation and mitigating potential risks associated with photorealistic AIGIs. We also propose design recommendations to enhance the responsible use of AIGIs.
... Relying on a context-general guess will lead to a bias toward making truth judgments. In this way, ALIED views the truth bias as an adaptive response (Street, 2015), not an error or default (Gilbert, 1991; Levine, 2014). ...
... The Spinozan account (Gilbert, 1991) provided a somewhat detailed explanation of the process, although it has not stood the test of time (e.g., Hasson et al., 2005; Mayo, Schul, & Burnstein, 2004; Nadarevic & Erdfelder, 2013; Street & Kingstone, 2016; although see Mandelbaum, 2014, for a defense of the position). Since then, we have seen relatively little theoretical work on the decision process, at least until recently (Levine, 2014; Street, 2015). ALIED theory offers a high-level description of the decision process and makes novel, testable predictions. ...
Chapter
This chapter promotes a shift toward a theory-driven approach to lie detection research. It does so by exploring why people show a bias to believe and disbelieve others. The adaptive lie detector theory, or ALIED, claims that these biases are adaptive and functional, rather than a sign of error. Recent tests of ALIED theory are briefly reviewed. Then, novel predictions are made ahead of the data, and research streams that naturally arise from ALIED are discussed. Finally, we conclude with a call to researchers to develop theories that produce novel predictions—regardless of whether they stand the test of time.
... The truth default theory (TDT) investigates human communication by examining how people process the information they receive [24]. TDT demonstrates that individuals frequently exhibit a bias when evaluating information, often showing a predisposition to believe statements from others, even if they are false; this is known as truth bias [25]. ...
... This study examines various factors related to fake news by combining risk perception, trust in the media, celebrity posts on social media platforms, and the theoretical tensions of Truth-Default Theory. Integrating this theory provides a lens that helps explain why individuals may accept or disseminate fake news [24]. Previous research on fake news has frequently focused on cognitive biases that distinguish between true and fake news, political affiliation, and demographic characteristics. ...
Article
Full-text available
The epidemic has had a profound negative impact on individuals worldwide, leading to pervasive anxiety, fear, and mental instability. Exploiting these fears, a significant amount of fake information proliferates and spreads rapidly on social networks. This study explores the factors that lead individuals to believe fake news under stressful and fearful conditions by applying the truth-default theory. Data were collected online in Vietnam, and SmartPLS software was used to analyze the research model. The findings indicated that risk perception, media trust, trust in celebrity posts, and stress were factors that led users to believe news posted on social media and even to actively share this news on their own channels. Disclosure willingness moderated the relationship between adopting fake news and sharing it. Both theoretical and practical implications are discussed.
... All presenters in the Trinity condition rated their trust above the neutral point (i.e., 4) on the 7-point Likert scale. They expressed trust in the system because there were no compelling reasons to doubt it and no obviously questionable or unexpected information, which aligns with the psychological literature on Truth-Default Theory [50]. However, their trust in the system was constrained by their limited knowledge of best practices and professional certification. ...
Preprint
Full-text available
Academic Oral Presentation (AOP) allows English-as-a-Foreign-Language (EFL) students to express ideas, engage in academic discourse, and present research findings. However, while previous efforts focus on training efficiency or speech assistance, EFL students often face the challenge of seamlessly integrating verbal, nonverbal, and visual elements into their presentations to avoid coming across as monotonous and unappealing. Based on a need-finding survey, a design study, and an expert interview, we introduce Trinity, a hybrid mobile-centric delivery support system that provides guidance for multichannel delivery on-the-fly. On the desktop side, Trinity facilitates script refinement and offers customizable delivery support based on large language models (LLMs). Based on the desktop configuration, the Trinity App enables remote mobile visual control, multi-level speech pace modulation, and integrated delivery prompts for synchronized delivery. A controlled between-subject user study suggests that Trinity effectively supports AOP delivery and is perceived as significantly more helpful than baselines, without excessive cognitive load.
... Oftentimes, public office holders or politicians, especially in developing countries such as Nigeria, conjure up what can be regarded as executive fabrications, which usually come in the form of indecision before being released piecemeal either as gossip or rumour (Onobe et al., 2023). Sadly, many politicians take advantage of the media's availability to exploit the complacency (Saul, 2012) of society's humanistic tendency through what Levine (2014) refers to as the assumption, made either actively or passively, that another person's communication is based on honesty. According to Clementson (2017), this assumption is distinct from actual honesty. ...
Article
Full-text available
Active citizen participation in governance, openness, and the prompt provision of adequate information for the public to make informed decisions are crucial for any meaningful development. Unfortunately, governance in Nigeria is shrouded in secrecy, lies, half-truths, gossip, and rumours, which generally negate some of the cardinal precepts of democracy. Since opportunities for citizen engagement are lacking, it is common for Nigerians to resort to social media platforms such as Facebook, Twitter (X), WhatsApp and blogs to access and share information, which may result in cyber-gossip or cyber-rumour peddling, especially on burning national issues. This research, therefore, examines structured deliberative gossip and the Nigerian government in the digital era. The study uses a simple random sampling technique and a sample of 385 respondents drawn from Taraba State; the analysis and discussion use tables and simple percentages, anchored on structured deliberative gossip theory. Findings reveal that the Nigerian government engages in gossip and rumours as ways of gauging public opinion and testing the popularity of policies before they are implemented, thereby serving as a precursor to policy formulation. However, this research recommends that citizen participation, adequate information provision, openness, and transparency can help reduce the propagation of gossip by the government. Also, since gossip and rumours can affect the government negatively, there is a need for the government to control rumour propagation by releasing official Rumour-Refuting Information (ORI).
... If people were rarely sensitive to reasons, why provide moral justifications for our stances on abortion or gun legislation? If it turned out that people rarely, if ever, cared about being honest, despite their proclamations to the contrary, our mutual trust would dissolve (Ho, 2021;Kolb, 2008;Levine, 2014). Yet, the notion that people's moral principles guide their judgments and decisions faces an obvious challenge: People sometimes seem to act against their avowed moral principles. ...
Article
Full-text available
What role does reasoning about moral principles play in people’s judgments about what is right or wrong? According to one view, reasoning usually plays little role. People tend to do what suits their self-interests and concoct moral reasons afterward to justify their own behavior. Thus, in this view, people are far more forgiving of their own violations than of others’ violations. According to a contrasting view, principled reasoning generally guides judgments and decisions about our own and others’ actions. This view predicts that people usually can, and do, articulate the principles that guide their moral judgments and decisions. The present research examined a phenomenon at the center of these debates: students’ evaluations of academic cheating. Across three studies, we used structured interviews and online surveys to examine first- and third-party judgments and reasoning about cheating events. Third-party scenarios were derived from students’ own accounts of cheating events and manipulated based on the reasons students provided. Findings supported the view that reasoning is central to evaluations of cheating. Participants articulated reasons consistent with their judgments about their own and others’ actions. The findings advance classic debates about reasoning in morality and exemplify a paradigm that can bring further advances.
... Appel and Prietzel (2022) found that an individual's tendency to think analytically is correlated with general skepticism, which in turn is associated with being better able to discern between genuine and manipulated information. However, people also tend to display a 'truth bias': a tendency to automatically assume that information is true by default (Levine, 2014). In Son's (2022) study, truth bias was associated with a higher likelihood of miscategorizing deepfake videos as real. ...
Preprint
Full-text available
Deepfakes refers to a wide range of computer-generated synthetic media, in which a person’s appearance or likeness is altered to resemble that of another. This paper provides a comprehensive review of the literature on people’s ability to detect deepfakes. Five databases (IEEE, ProQuest, PubMed, Web of Science and Scopus) were searched up to December 2023. Forty independent studies from 30 unique records were included in the review. Detection performance varied widely across studies. Generally, high-quality deepfakes are harder to detect, and audio deepfakes pose a significant challenge due to the lack of visual cues. However, studies use various performance metrics such as accuracy rating, AUC, and Likert scales, making it difficult to compare results across studies. Detection accuracy varies widely, with some studies showing humans outperforming AI models and others indicating the opposite. Our review also found that detection performance is influenced by person-level (e.g., cognitive ability, analytical thinking) and stimuli-level factors (e.g., quality of deepfake, familiarity with the subject). Interventions to improve people’s deepfake detection yielded mixed results. We also found that humans and AI-based detection models focus on different aspects when detecting, suggesting a potential for human-AI collaboration. The findings highlight the complex interplay of factors influencing human deepfake detection and the need for further research to develop effective strategies for deepfake detection.
... These conditions can lead to changes in verbal and nonverbal behaviours, such as increased blinking and pupil dilation, heightened voice pitch, speech errors, pausing, and other speech hesitations, ultimately affecting physiological behaviours like SKT, EDA, and HR. Additionally, we consider the truth default [38] and interpersonal theory, which justify lying for reasons such as goal attainment, where honesty is seen as counterproductive. Thus, using games that naturally create situations requiring deception to win is considered. ...
... Speculatively, participants with less of an inclination towards reflective processes may have assumed that phishing emails were presented at a higher rate than actuality. The Truth-Default Theory (TDT) states that people on average tend to trust others, and because the phishing decision task alerted participants to the presence of phishing emails, this may have created demand characteristics that disproportionately calibrated the default decision-making for participants who generally engage less in a systematic interrogation (Levine, 2014). ...
Article
Full-text available
The study tested the role of cue utilization and cognitive reflection tendencies in email users’ phishing decision capabilities in both controlled and naturalistic settings. 94 university students completed measures of their phishing cue utilization and cognitive reflection, a phishing decision task, and a naturalistic simulated phishing campaign, in which they were sent simulated phishing emails to their personal inboxes. For the phishing decision task, results revealed that participants with lower cognitive reflection tendencies were more likely to misclassify genuine emails as phishing, compared to participants with higher cognitive reflection. Further, participants with higher cognitive reflection and lower cue utilization took the most time to diagnose emails, but participants low in both cue utilization and cognitive reflection demonstrated the shortest response latencies. These findings suggest that greater cognitive reflection can offset lower levels of cue utilization. For the naturalistic simulation, neither cue utilization nor cognitive reflection predicted an increased propensity to interact with a suspicious email. This result highlights a potential gap between phishing investigations conducted in controlled and naturalistic settings. The implications extend to future research, emphasizing the need for studies that employ naturalistic methodologies to better understand and address phishing threats in real-world environments.
... The biggest issue we observed is that users trusted the signatures even in situations where this was not warranted, e.g., when a document was signed with irrelevant personal data. This unwarranted trust is likely rooted in users' habit of not scrutinizing the legitimacy of a signature until the assumption of honest communication is lifted [34,35]. If digital signatures can be used to mislead users, they are not secure in practice [14]. ...
Preprint
Full-text available
Documents are largely stored and shared digitally. Yet, digital documents are still commonly signed using (copies of) handwritten signatures, which are sensitive to fraud. Though secure, cryptography-based signature solutions exist, they are hardly used due to usability issues. This paper proposes to use digital identity wallets for securely and intuitively signing digital documents with verified personal data. Using expert feedback, we implemented this vision in an interactive prototype. The prototype was assessed in a moderated usability test (N = 15) and a subsequent unmoderated remote usability test (N = 99). While participants generally expressed satisfaction with the system, they also misunderstood how to interpret the signature information displayed by the prototype. Specifically, signed documents were also trusted when the document was signed with irrelevant personal data of the signer. We conclude that such unwarranted trust forms a threat to usable digital signatures and requires attention by the usable security community.
... In contrast, there are relatively few empirical demonstrations that people can be lie biased, leading some to consider truth bias as arising from a default form of processing and lie bias as something that may result from an additional trigger or further processing (cf. Gilbert, Krull, & Malone, 1990; Levine, 2014). ...
Conference Paper
Full-text available
To date, no account of lie-truth judgement formation has been capable of explaining how core cognitive mechanisms such as memory encoding and retrieval are employed to reach a judgement of either truth or lie. One account, the Adaptive Lie Detector theory (ALIED: Street, Bischof, Vadillo, & Kingstone, 2016) is sufficiently well defined that its assumptions may be implemented in a computational model. In this paper we describe our attempt to ground ALIED in the representations and mechanisms of the ACT-R cognitive architecture and then test the model by comparing it to human data from an experiment conducted by Street et al. (2016). The model provides a close fit to the human data and a plausible mechanistic account of how specific and general information are integrated in the formation of truth-lie judgements.
... • The selective exposure principle holds that users tend to consume content that confirms their existing views; • in addition, following the confirmation bias principle, recipients place greater trust in information that confirms their existing attitudes (Zhou & Zafarani, 2020, p. 5). • Truth-default theory shows that people assume, by default and without conscious awareness, that others are communicating honestly (Levine, 2014). Recipients do not consider that their counterpart may intend to deceive, or, as a cognitive protective mechanism, they rule out deception when they cannot find sufficient evidence for it (Levine, 2020). ...
... Carson in turn suggests (p. 47) not using "deception" for the unintentional, given its negative connotations, and instead using "to inadvertently mislead". Levine (2014) would agree with Carson's interpretation too, preferring the term "honest communication" (pp. 379-80). Interestingly, because the latter includes even honest marketing and propaganda (despite their targeting and intent to steer audiences) to the extent there is an intention of veracity, it is more likely to be relevant to larger-scale politicization theory. ...
Preprint
Full-text available
Whereas a concern about a public knowledge "deficit" in Science Communication was much discussed in the UK 20-40 years ago (under the Public Understanding of Science program, PUS, ultimately abandoned), findings are compiled and synthesized here to highlight the seemingly innate and under-appreciated agency behind an ensuing yet different deficit problem: "certainty troughs" (MacKenzie, 1990) forming between science and its end-users, a human-rights case of large-scale epistemic paternalism also cautioned against at the time. It is exemplified primarily with climate/CO2, a geoscience area that has grown relevant to sustainable development and security in particular. To this end, pertinent sections from three student papers in Intelligence Analysis were compiled (with headings and wording only slightly adjusted to enable seamless comprehension), starting with the overall scope relating human rights to what could now more aptly be termed a "Public Receipt of Science" program. The remaining sections are organized under the What-How-Why format popular in journalism. The What section presents not only the human-rights issue, the concept of the certainty trough, and the history of science medialization, but also, as critically needed, the relevant skewed science area itself, i.e., climate/CO2 and its main in-science conundrum: the existence and understanding of a medium-term solar-climate linkage. Under the How, processes behind undue politicization and misleading are related first, in that trough formation should be defined by the way such processes enter and skew, albeit not necessarily or always, even mainstream understanding and workings of the Science Communication function. Then results are presented from the final bachelor thesis' main question, which addressed the trough's workings, its "rhetoric".
It compared the prevalent trough narrative, one of "doubt-merchants", with the science-side SC narrative represented by this physical geographer. The Why section starts by visiting the relevant institutional science-knowledge stakeholders in the security domain: Scientific Intelligence and Geopolitics, and parliamentary Accountability Oversight. It then addresses the argument and its combatants, and the end-user rationales conducive to from-below enablement of negligence on the part of a trough, despite audiences' general adherence to evidence-based policy, a field involving questions of both faith and political psychology. A final Perspectives heading collects and discusses some suggested forward paths emerging as potentially critical mitigating measures: investigative-journalism problems, avenues in corporate ESG reporting, renewed approaches in Science Communication studies and the Social Studies of Science, and a revisiting of PUS objectives.
... These data add to the growing literature suggesting that both the production and perception of deception are driven by context (see Levine & McCornack, 2014). And although our data offer strong confirmation for the arguments of IMT2, they also offer empirical confirmation for other new theories of deception that prioritize information and context as driving deception decision-making, most notably Tim Levine's Truth-Default Theory (Levine, 2014). [Figure: Information Manipulation Probability Curve Across Three Studies] ...
Conference Paper
Full-text available
Scholars have long recognized that people are more likely to deceive when contexts are difficult to deal with truthfully (see Walczyk, Harris, Duck, & Mulay, 2014). However, virtually the entire deception literature is predicated upon the presumption of a linear relationship between complexity and deception. That is, as situations become increasingly goal-complex – and the conditionally relevant information more problematic – the probability that people will deceive in response to them increases in a linear fashion. The Information Manipulation Probability Curve (IMPC) suggests otherwise. Rooted in the propositions of Information Manipulation Theory 2 (McCornack, Morrison, Paik, Wisner, & Zhu, 2014), the IMPC posits that the probability that people will deceptively manipulate information within their discourse, when mapped onto increases in situational complexity, will be a curvilinear function. We report three analyses examining three data sets using different methods and samples from different institutions; the results of which converge in documenting the same pattern: the relationship between situational complexity and deception probability is curvilinear.
... [16], p. 55, conclude that ChatGPT "proves instrumental in overcoming language barriers, thereby improving the quality of academic writing produced by postgraduate students" and that it has the potential to "significantly contribute to the research outputs of postgraduate students". The improved form, however, documents neither a deeper understanding nor more valuable research outputs; apparently a case for Truth Default Theory [17]. ...
Article
Full-text available
Large Language Models (LLM) and Generative Pre-trained Transformers (GPT), in particular, have quite recently vehemently stirred the understanding of Artificial Intelligence (AI). The expectations of revolutionary AI applications and business as well as societal impact are very high and, in contrast, reports about disastrous case studies and hallucinating GPTs are fearsome. This investigation narrows the focus to human-AI collaboration for processes of research and discovery including higher education. Based on a qualitative analysis of decisive deficiencies of Generative AI (GAI), there is developed an original approach that allows for preservation of the GAI’s full power and bears the potential to mitigate detected weaknesses and to increase the AI’s reliability. Essentially, the technology consists in symbolic wrapping of sub-symbolic AI. The result of wrapping GAI is a hybrid AI system. Scientific discovery with theory formation is a key area relevant to the progress of science, technology, and societal applications. Humans are challenging modern AI to support this process taking advantage of the Generative AI’s strength as a conversationalist. But scientific discovery and theory formation is intricate, as Albert Einstein put it in a letter to Karl Popper as early as in 1935, because theory cannot be fabricated out of the results of observation, it can only be invented. Theory is not squeezed out of data. The emergence of theory takes the form of sequences of conjectures being subject to critical analysis possibly including refutations. This requires reasoning, a pain point of Generative AI, as GAI dispenses with a sound calculus. Wrapping mitigates the deficiency. A symbolic wrapper validates GAI responses w.r.t. the prompts put in. It asks back, if necessary, to arrive at improved AI responses.
... Objective detection accuracy. Deception detection accuracy was calculated as the number of correctly judged lies and correctly judged truths divided by the total number of messages judged [14,66,67]. The resulting measure is a percentage based on 10 trials. ...
Article
Full-text available
Subjective lying rates are often strongly and positively correlated. Called the deception consensus effect, people who lie often tend to believe others lie often, too. The present paper evaluated how this cognitive bias also extends to deception detection. Two studies (Study 1: N = 180 students; Study 2: N = 250 people from the general public) had participants make 10 veracity judgments based on videotaped interviews, and also indicate subjective detection abilities (self and other). Subjective, perceived detection abilities were significantly linked, supporting a detection consensus effect, yet they were unassociated with objective detection accuracy. More overconfident detectors—those whose subjective detection accuracy was greater than their objective detection accuracy—reported telling more white and big lies, cheated more on a behavioral task, and were more ideologically conservative than less overconfident detectors. This evidence supports and extends contextual models of deception (e.g., the COLD model), highlighting possible (a)symmetries in subjective and objective veracity assessments.
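The accuracy measure described in the excerpt above (correct lie judgments plus correct truth judgments, divided by total judgments) can be sketched as follows. This is a minimal illustrative implementation, not code from the cited study; the function name and the ten-trial example data are hypothetical.

```python
def detection_accuracy(judgments, truths):
    """Proportion of correct veracity judgments.

    judgments, truths: parallel lists of booleans, where True means
    'truthful message' (as judged, and as actually presented).
    A judgment is correct when it matches the actual veracity,
    so both correctly judged truths and correctly judged lies count.
    """
    assert len(judgments) == len(truths)
    correct = sum(j == t for j, t in zip(judgments, truths))
    return correct / len(judgments)

# Hypothetical ten-trial example: five truths then five lies,
# with eight of the ten messages judged correctly.
judged = [True, True, True, True, False, False, False, False, False, True]
actual = [True, True, True, True, True, False, False, False, False, False]
print(detection_accuracy(judged, actual))  # 0.8, i.e., 80% accuracy
```

Expressed as a percentage over 10 trials, chance performance on a balanced truth/lie set is 50%, which is why accuracies near that level recur throughout this literature.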
... These strategies in misinformation are tough to tackle because while the information is factual, the reasoning behind is flawed. Individuals with certain traits such as low cognitive abilities or a tendency to believe in truth by default are more susceptible to these persuasive strategies [49,50]. ...
Article
Full-text available
Online health misinformation commonly includes persuasive strategies that can easily deceive lay people. Yet, it is not well understood how individuals respond to misinformation with persuasive strategies at the moment of exposure. This study aims to address the research gap by exploring how and why older adults fall into the persuasive trap of online health misinformation and how they manage their encounters of online health misinformation. Using a think-aloud protocol, semi-structured interviews were conducted with twenty-nine older adults who were exposed to articles employing twelve groups of common persuasive strategies in online health misinformation. Thematic analysis of the transcripts revealed that some participants fell for the persuasive strategies, yet the same strategies were detected by others as cues to pin down misinformation. Based on the participants’ own words, informational and individual factors as well as the interplay of these factors were identified as contributors to susceptibility to misinformation. Participants’ strategies to manage misinformation for themselves and others were categorized. Implications of the findings are discussed.
... Laypersons tend to perform better at identifying truthful statements than at detecting false statements due to a truth bias (Vrij, 2008). As noted by Levine (2014), people are generally more exposed to truthful than deceptive statements in daily life, resulting in reliance on heuristic modes of thinking when evaluating the truthfulness of a statement. By contrast, law enforcement professionals often demonstrate a lie bias and may assume a suspect is guilty (Kassin, 2005;Meissner & Kassin, 2002). ...
Article
Despite the potential of visual disinformation to deceive people on pressing socio-political issues, we currently lack an understanding of how online visual disinformation (de)legitimizes partisan truth claims at times of war. As an important next step in disinformation theory and research, this article inductively mapped a wide variety of global visual disinformation narratives on armed conflicts disseminated via social media. The narratives were sampled through various international fact-checking databases, involving multiple social media platforms and countries. The analyses reveal that visual disinformation mainly consisted of existing footage that was decontextualized in a deceptive manner based on time, location, or fictionality. Moving beyond existing research exploring how decontextualized visuals offer proof for counter-factual narratives, our findings indicate that visuals contribute to the process of othering by constructing a “delusional rationality” that legitimizes mass violence and the destruction of the other. These findings have crucial ramifications for international policy and interventions at times of global armed conflicts that are covered widely across social media channels.
Chapter
According to our folk theory, we can reliably detect when people are lying through observing behavioral cues. The cues occur because people are motivated to lie but afraid to get caught. Lying is also commonplace according to our folk theory, so it is a good thing that we have this capacity to tell when people are deceiving. Social epistemologists have agreed, elevating our folk theory into epistemological wisdom. Elizabeth Fricker's epistemology of testimony leads the field in transforming folk thoughts into philosophical theories. Extensive research in communication studies, however, shows that our folk theory is mistaken. The social epistemology of communication should understand and incorporate lessons from this research when explaining why we acquire knowledge and justification through conversation and when making recommendations for how to do better. Starting with the facts provides a surer foundation. This chapter elaborates this research, with special attention to its ecological validity, concluding with a detailed discussion of Fricker's argument for a monitoring requirement on justified testimony-based belief and her argument that we frequently possess a reliably true quasi-perceptual belief that our speakers are trustworthy.
Article
Full-text available
Truth-default theory posits that most people are normatively honest and are believed by others. Do people think that others consider them honest? This paper explores how social perceptions of deception align with the truth-default. Using data from the All of Us dataset (N = 116,914 total respondents), we observed that most people feel trusted. While the predicted long-tail distribution was universal across subsamples, people who self-identified as male, were minorities in the US, had less education, or had less income were less likely to feel trusted by others. Connections to and implications for truth-default theory are discussed.
Article
Full-text available
Cross-cultural differences in behavioral and verbal norms and expectations can undermine credibility, often triggering a lie bias that can result in false convictions. However, current understanding is heavily North American and Western European centric, so how individuals from non-western cultures infer veracity is not well understood. We report novel research investigating native Arabic speakers' truth and lie judgments after observing a matched native-language forensic interview with a mock person of interest. A total of 217 observers viewed a truthful or a deceptive interview and were either directed to attend to detailedness as a veracity cue or given no direction. Overall, a truth bias (66% accuracy) emerged, but observers were more accurate (79%) in the truth condition, with the truthful interviewee rated as more plausible and more believable than the deceptive interviewee. However, observer accuracy dropped to just 23% when observers were instructed to use the detailedness cue when judging veracity. The verbal veracity cues attended to were constant across veracity conditions, with 'corrections' emerging as an important veracity cue. Some results deviate from the findings of research with English-speaking western participants in cross- and matched-culture forensic interview contexts, while others are consistent. Nonetheless, this research raises research-to-practice questions for forensic contexts: the robustness of western-centric psychological understanding for non-western within-culture interviews, interview protocols for amplifying veracity cues, and the instruction to note the detailedness of verbal accounts, which significantly hindered Arabic speakers' performance. The findings again highlight the challenges of pancultural assumptions for real-world practices.
Article
Objectives: Difficulties with deception detection may leave older adults especially vulnerable to fraud. Interoception, i.e., the awareness of one's bodily signals, has been shown to influence deception detection, but this relationship has not yet been examined in aging. The present study investigated effects of interoceptive accuracy on two forms of deception detection: detecting interpersonal lies in videos and identifying text-based deception in phishing emails. Method: Younger (18-34 years) and older (53-82 years) adults completed a heartbeat-detection task to determine interoceptive accuracy. Deception detection was assessed across two distinct, ecologically valid tasks: i) a lie detection task in which participants made veracity judgments of genuine and deceptive individuals, and ii) a phishing email detection task to capture online deception detection. Using multilevel logistic regression models, we determined the effect of interoceptive accuracy on lie and phishing detection in younger versus older adults. Results: In older, but not younger, adults greater interoceptive accuracy was associated with better accuracy in both detecting deceptive people and phishing emails. Discussion: Interoceptive accuracy was associated with both lie detection and phishing detection accuracy among older adults. Our findings identify interoceptive accuracy as a potential protective factor for fraud susceptibility, as measured through difficulty detecting deception. These results support interoceptive accuracy as a relevant factor for consideration in interventions targeted at fraud prevention among older adults.
Article
Full-text available
Research has shown that complications are more common in truth tellers' accounts than in lie tellers' accounts, but there is yet no experiment that examined the accuracy of observers' veracity judgments when looking at complications. A total of 87 participants were asked to judge 10 transcripts (five truthful and five false) derived from a set of 59 transcripts generated in a previous experiment [1]. Approximately half of the participants were trained to detect complications (Trained) and the other half did not receive training (Untrained). Trained participants were more likely to look for complications but they did not detect them accurately, and thus their veracity judgments did not improve beyond Untrained participants' judgments. We discuss that the training may have been too brief or not sensitive enough to enhance decision making.
Chapter
This research examines the relationship between online dating platforms and Non-Profit Organisations, hypothesising that the algorithms of the latter would differ from profit-driven counterparts. These platforms have grown in significance, serving millions of users globally while generating substantial profits. This study combines literature review and qualitative research to explore the potential changes if online dating platforms adopted a non-profit model. Convenience sampling was used, with 36 interviews conducted either orally or in writing, on the phone, in person, or via email. Respondents’ perspectives on the differences between profit-driven and non-profit online dating platforms vary. Some suggest minimal change, while others envision improved algorithms leading to enhanced matchmaking, greater efficiency, and societal well-being. The study includes data analysis, implications, and future research suggestions.
Article
Misinformation is widely regarded as an undermining force to European democracies. Yet, to date, empirical research shows that the amount of misinformation people encounter is rather low, and not in proportion to the strong alarming messages spread throughout society. In this light, current interventions that pre-bunk misinformation by using warning messages may disproportionally prime suspicion and result in inflated estimates of misinformation. To assess whether messages that pre-bunk misinformation result in disproportionate risk perceptions related to inaccurate or false information, and to explore the effectiveness of alternative interventions, this article relied on an online between-subjects experiment in the Netherlands ( N = 437). Our main findings indicate that exposure to a media literacy intervention does not result in higher first- or third-person risk perceptions related to misinformation exposure. However, a warning message that emphasizes the identification of reliable news while contextualizing the threats of misinformation significantly lowers perceived misinformation salience. As an important implication of our findings, we suggest that pre-bunking interventions should relativize the threats of misinformation by facilitating the recognition of honest and reliable information as an alternative path to help people identify reliable information.
Article
Occupational violence (OV) is an insidious problem for emergency medical services with continued high levels of paramedic exposure despite significant education and resources devoted to mitigation. Though there is considerable data on the epidemiology of the phenomenon, the available evidence on the experiences of paramedics exposed to acts of violence during healthcare is limited. Utilising a generic qualitative approach and a semi-structured interview framework, we examined the perceptions and experiences of 25 Australian paramedics who had been exposed to incidents of patient-initiated violence during out-of-hospital care. A general inductive methodology and a first- and second-cycle coding process assisted in the development of the principal concepts of the patient and the paramedic from the raw data. Subsequently, a further four main themes and 15 secondary themes were developed which characterise the influence of social interaction on the evolution of paramedic OV. The results of this study provide a unique insight into the phenomenon of paramedic OV. As opposed to the rudimentary manifestation of aggression typically endorsed by emergency medical services, aggressive behaviour during healthcare presents as a judicious interaction of dynamic scene management and situational context. The social interactions that occur during healthcare, and the premises which both promote and suppress this connection, were identified to exert significant influence on the evolution of aggressive behaviour. The consequences of these findings challenge traditional violence mitigation strategies which seek to position the patient as both the focal point of initiation and the key to its extenuation.
Article
While, by default, people tend to believe communicated content, it is also possible that they become more vigilant when personal stakes increase. A lab ( N = 72) and an online ( N = 284) experiment show that people make judgements affected by explicitly tagged false information and that they misremember such information as true – a phenomenon dubbed the ‘truth bias’. However, both experiments show that this bias is significantly reduced when personal stakes – instantiated here as a financial incentive – become high. Experiment 2 also shows that personal stakes mitigate the truth bias when they are high at the moment of false information processing, but they cannot reduce belief in false information a posteriori, that is once participants have already processed false information. Experiment 2 also suggests that high stakes reduce belief in false information whether participants’ focus is directed towards making accurate judgements or correctly remembering information truthfulness. We discuss the implications of our findings for models of information validation and interventions against real‐world misinformation.
Chapter
This chapter takes an interdisciplinary approach to the study of deception from the critical perspectives of rhetoric, communication, and media studies. The primary objective is to interrogate the interrelationship of communication, identity, and technology relevant to social media in order to confront issues related to online deception. To that end, this case study is centrally focused on social media sensation Miquela Sosa, also known as Lil Miquela, and the implications of artificial intelligence (AI) technologies and social media influencers to contribute to a more robust critical consciousness regarding misinformation online.
Article
Full-text available
This piece was the first in history to posit the notion of "truth-bias," which has now become foundational within the field of deception. It also posits what has come to be known as The McCornack-Parks Model of Deception Detection; namely, that as relational intimacy increases, detection confidence increases, truth-bias increases, and detection accuracy decreases.
Article
Full-text available
The question of whether discernible differences exist between liars and truth tellers has interested professional lie detectors and laypersons for centuries. In this article we discuss whether people can detect lies when observing someone's nonverbal behavior or analyzing someone's speech. An article about detecting lies by observing nonverbal and verbal cues is overdue. Scientific journals regularly publish overviews of research articles regarding nonverbal and verbal cues to deception, but they offer no explicit guidance about what lie detectors should do and should avoid doing to catch liars. We present such guidance in this article. The article consists of two parts. The first section focuses on pitfalls to avoid and outlines the major factors that lead to failures in catching liars. Sixteen reasons are clustered into three categories: (a) a lack of motivation to detect lies (because accepting a fabrication might sometimes be more tolerable or pleasant than understanding the truth), (b) difficulties associated with lie detection, and (c) common errors made by lie detectors. We argue that the absence of nonverbal and verbal cues uniquely related to deceit (akin to Pinocchio's growing nose), the existence of typically small differences between truth tellers and liars, and the fact that liars actively try to appear credible all contribute to making lie detection a difficult task. Other factors that add to the difficulty are that lies are often embedded in truths, that lie detectors often do not receive adequate feedback about their judgments and therefore cannot learn from their mistakes, and that some methods to detect lies violate conversation rules and are therefore difficult to apply in real life. The final factor discussed in this category is that some people are simply very good liars.
The common errors we have identified among lie detectors are examining the wrong cues (in part because professionals are taught these wrong cues); placing too great an emphasis on nonverbal cues (in part because training encourages such emphasis); too readily interpreting certain behaviors, particularly signs of nervousness, as diagnostic of deception; placing too great an emphasis on simplistic rules of thumb; and neglecting inter- and intrapersonal differences. We also discuss two final errors: that many interview strategies advocated by police manuals can impair lie detection, and that professionals tend to overestimate their ability to detect deceit. The second section of this article discusses opportunities for maximizing one's chances of detecting lies and elaborates strategies for improving one's lie-detection skills. Within this section, we first provide five recommendations for avoiding the common errors in detecting lies identified earlier in the article. Next, we discuss a relatively recent wave of innovative lie-detection research that goes one step further and introduces novel interview styles aimed at eliciting and enhancing verbal and nonverbal differences between liars and truth tellers by exploiting their different psychological states. In this part of the article, we encourage lie detectors to use an information-gathering approach rather than an accusatory approach and to ask liars questions that they have not anticipated. We also encourage lie detectors to ask temporal questions, that is, questions related to the particular time the interviewee claims to have been at a certain location, when a scripted answer (e.g., "I went to the gym") is expected. For attempts to detect lying about opinions, we introduce the devil's advocate approach, in which investigators first ask interviewees to argue in favor of their personal view and then ask them to argue against it.
The technique is based on the principle that it is easier for people to come up with arguments in favor of than against their personal view. For situations in which investigators possess potentially incriminating information about a suspect, the "strategic use of evidence" technique is introduced. In this technique, interviewees are encouraged to discuss their activities, including those related to the incriminating information, while being unaware that the interviewer possesses this information. The final technique we discuss is the "imposing cognitive load" approach. Here, the assumption is that lying is often more difficult than truth telling. Investigators can increase the differences in cognitive load that truth tellers and liars experience by introducing mentally taxing interventions that impose additional cognitive demand. If people normally require more cognitive resources to lie than to tell the truth, they will have fewer cognitive resources left over to address these mentally taxing interventions when lying than when truth telling. We discuss two ways to impose cognitive load on interviewees during interviews: asking them to tell their stories in reverse order and asking them to maintain eye contact with the interviewer. We conclude the article by outlining future research directions. We argue that research is needed that examines (a) the differences between truth tellers and liars when they discuss their future activities (intentions) rather than their past activities, (b) lies told by actual suspects in high-stakes situations rather than by university students in laboratory settings, and (c) lies told by groups of suspects (networks) rather than individuals. An additional line of fruitful and important research is to examine the strategies used by truth tellers and liars when they are interviewed.
As we will argue in the present article, effective lie-detection interview techniques take advantage of the distinctive psychological processes of truth tellers and liars, and obtaining insight into these processes is thus vital for developing effective lie-detection interview tools.
Article
Full-text available
Information Manipulation Theory 2 (IMT2) is a propositional theory of deceptive discourse production that conceptually frames deception as involving the covert manipulation of information along multiple dimensions and as a contextual problem-solving activity driven by the desire for quick, efficient, and viable communicative solutions. IMT2 is rooted in linguistics, cognitive neuroscience, speech production, and artificial intelligence. Synthesizing these literatures, IMT2 posits a central premise with regard to deceptive discourse production and 11 empirically testable (that is, falsifiable) propositions deriving from this premise. These propositions are grouped into three propositional sets: intentional states (IS), cognitive load (CL), and information manipulation (IM). The IS propositions pertain to the nature and temporal placement of deceptive volition, in relation to speech production. The CL propositions clarify the interrelationship between load, discourse, and context. The IM propositions identify the specific conditions under which various forms of information manipulation will (and will not) occur.
Article
Full-text available
Although it is commonly believed that lying is ubiquitous, recent findings show large, individual differences in lying, and that the proclivity to lie varies by age. This research surveyed 58 high school students, who were asked how often they had lied in the past 24 hr. It was predicted that high school students would report lying with greater frequency than previous surveys with college student and adult samples, but that the distribution of reported lies by high school students would exhibit a strongly and positively skewed distribution similar to that observed with college student and adult samples. The data were consistent with both predictions. High school students in the sample reported telling, on average, 4.1 lies in the past 24 hr—a rate that is 75% higher than that reported by college students and 150% higher than that reported by a nationwide sample of adults. The data were also skewed, replicating the “few prolific liar” effect previously documented in college student and adult samples.
Article
Full-text available
It has been commonplace in the deception literature to assert the pervasive nature of deception in communication practices. Previous studies of lie prevalence find that lying is unusual compared to honest communication. Recent research, and reanalysis of previous studies reporting the frequency of lies, shows that most people are honest most of the time and the majority of lies are told by a few prolific liars. The current article reports a statistical method for distinguishing prolific liars from everyday liars and provides a test of the few prolific liars finding by examining lying behavior in the United Kingdom. Participants (N = 2,980) were surveyed and asked to report on how often they told both little white lies and big important lies. Not surprisingly, white lies were more common than big lies. Results support and refine previous findings about the distinction between everyday and prolific liars, and implications for theory are discussed.
Article
Full-text available
Consistent with Park and Levine's (PL) probability model of deception detection accuracy, previous research has shown that as the proportion of honest messages increases, there is a corresponding linear increase in correct truth–lie discrimination. Three experiments (N = 120, 205, and 243, respectively) varied the truth–lie base rate in an interactive deception detection task. Linear base-rate effects were observed in all 3 experiments (average effect r = .61) regardless of whether the judges were interactive participants or passive observers, previously acquainted or strangers, or previously exposed to truths or lies. The predictive power of the PL probability model appears robust and extends to interactive deception despite PL's logical incompatibility with interpersonal deception theory.
Article
Full-text available
Does everybody lie? A dominant view is that lying is part of everyday social interaction. Recent research, however, has claimed that robust individual differences exist, with most people reporting that they do not lie, and only a small minority reporting very frequent lying. In this study, we found most people to subjectively report little or no lying. Importantly, we found self-reports of frequent lying to positively correlate with real-life cheating and psychopathic tendencies. Our findings question whether lying is normative and common among most people, and instead suggest that most people are honest most of the time and that a small minority lies frequently.
Article
Full-text available
One way of thinking about how deceptive messages are generated is in terms of how the information that interactants possess is manipulated within the messages that they produce. Information Manipulation Theory suggests that deceptive messages function deceptively because they covertly violate the principles that govern conversational exchanges. Given that conversational interactants possess assumptions regarding the quantity, quality, manner, and relevance of information that should be presented, it is possible for speakers to exploit any or all of these assumptions by manipulating the information that they possess so as to mislead listeners. By examining various message examples, it is demonstrated that IMT helps to reconcile previous disagreement about the properties of deceptive messages.
Article
Full-text available
Although researchers of relational deception have recently become interested in the role that suspicion plays in the deception process, a more thorough examination of the relationship between suspicion and accuracy in detecting deception is warranted. Previous researchers have not found a significant relationship between suspicion and accuracy. In the current paper, we argue that the lack of findings in previous research can be attributed to methodological inadequacies, and that moderate levels of situationally‐aroused suspicion should substantially enhance accuracy in detecting deception. In addition, a predisposition toward being suspicious (i.e., generalized communicative suspicion, or "GCS") should moderate the relationship between aroused suspicion and accuracy. Three hypotheses were tested in a sample of 107 non‐marital romantically‐involved couples. Results suggest that both situationally‐aroused suspicion and GCS significantly influenced accuracy. Under certain conditions, aroused suspicion substantially improved the accuracy with which individuals could detect the deception of relational partners. Implications of these findings for future research in deception are discussed.
Article
Full-text available
Conditional social behaviours such as partner choice and reciprocity are held to be key mechanisms facilitating the evolution of cooperation, particularly in humans. Although how these mechanisms select for cooperation has been explored extensively, their potential to select simultaneously for complex cheating strategies has been largely overlooked. Tactical deception, the misrepresentation of the state of the world to another individual, may allow cheaters to exploit conditional cooperation by tactically misrepresenting their past actions and/or current intentions. Here we first use a simple game-theoretic model to show that the evolution of cooperation can create selection pressures favouring the evolution of tactical deception. This effect is driven by deception weakening cheater detection in conditional cooperators, allowing tactical deceivers to elicit cooperation at lower costs, while simple cheats are recognized and discriminated against. We then provide support for our theoretical predictions using a comparative analysis of deception across primate species. Our results suggest that the evolution of conditional strategies may, in addition to promoting cooperation, select for astute cheating and associated psychological abilities. Ultimately, our ability to convincingly lie to each other may have evolved as a direct result of our cooperative nature.
Article
Full-text available
Is there a difference between believing and merely understanding an idea? R. Descartes (e.g., 1641 [1984]) thought so. He considered the acceptance and rejection of an idea to be alternative outcomes of an effortful assessment process that occurs subsequent to the automatic comprehension of that idea. This article examined B. Spinoza's (1982) alternative suggestion that (1) the acceptance of an idea is part of the automatic comprehension of that idea and (2) the rejection of an idea occurs subsequent to, and more effortfully than, its acceptance. In this view, the mental representation of abstract ideas is quite similar to the mental representation of physical objects: People believe in the ideas they comprehend, as quickly and automatically as they believe in the objects they see. Research in social and cognitive psychology suggests that Spinoza's model may be a more accurate account of human belief than is that of Descartes.
Article
Full-text available
Sender demeanor is an individual difference in the believability of message senders that is conceptually independent of actual honesty. Recent research suggests that sender demeanor may be the most influential source of variation in deception detection judgments. Sender demeanor was varied in five experiments (N = 30, 113, 182, 30, and 35) to create demeanor-veracity matched and demeanor-veracity mismatched conditions. The sender demeanor induction explained as much as 98% of the variance in detection accuracy. Three additional studies (N = 30, 113, and 104) investigated the behavioral profiles of more and less believable senders. The results document the strong impact of sender effects in deception detection and provide an explanation of the low-accuracy ceiling in the previous findings.
Article
Full-text available
This study addresses the frequency and the distribution of reported lying in the adult population. A national survey asked 1,000 U.S. adults to report the number of lies told in a 24-hour period. Sixty percent of subjects report telling no lies at all, and almost half of all lies are told by only 5% of subjects; thus, prevalence varies widely and most reported lies are told by a few prolific liars. The pattern is replicated in a reanalysis of previously published research and with a student sample. Substantial individual differences in lying behavior have implications for the generality of truth-lie base rates in deception detection experiments. Explanations concerning the nature of lying and methods for detecting lies need to account for this variation.
Article
Full-text available
Interpersonal deception theory (IDT) represents a merger of interpersonal communication and deception principles designed to better account for deception in interactive contexts. At the same time, it has the potential to enlighten theories related to (a) credibility and truthful communication and (b) interpersonal communication. Presented here are key definitions, assumptions related to the critical attributes and key features of interpersonal communication and deception, and 18 general propositions from which specific testable hypotheses can be derived. Research findings relevant to the propositions are also summarized.
Article
Full-text available
We analyze the accuracy of deception judgments, synthesizing research results from 206 documents and 24,483 judges. In relevant studies, people attempt to discriminate lies from truths in real time with no special aids or training. In these circumstances, people achieve an average of 54% correct lie-truth judgments, correctly classifying 47% of lies as deceptive and 61% of truths as nondeceptive. Relative to cross-judge differences in accuracy, mean lie-truth discrimination abilities are nontrivial, with a mean accuracy d of roughly .40. This produces an effect that is at roughly the 60th percentile in size, relative to others that have been meta-analyzed by social psychologists. Alternative indexes of lie-truth discrimination accuracy correlate highly with percentage correct, and rates of lie detection vary little from study to study. Our meta-analyses reveal that people are more accurate in judging audible than visible lies, that people appear deceptive when motivated to be believed, and that individuals regard their interaction partners as honest. We propose that people judge others' deceptions more harshly than their own and that this double standard in evaluating deceit can explain much of the accumulated literature.
Article
Inconsistency is often considered an indication of deceit. The conceptualization of consistency used in deception research, however, has not made a clear distinction between two concepts long differentiated by philosophers: coherence and correspondence. The existing literature suggests that coherence is not generally useful for deception detection. Correspondence, however, appears to be quite useful. The present research developed a model of how correspondence is utilized to make judgments, and this article reports on four studies designed to elaborate on the model. The results suggest that judges attend strongly to correspondence and that they do so in an additive fashion. As noncorrespondent information accumulates, an increasingly smaller proportion of judges make truthful assessments of guilty suspects. This work provides a basic framework for examining how information is utilized to make deception judgments and forms the correspondence and coherence module of truth-default theory.
Article
Research relevant to psychotherapy regarding facial expression and body movement has shown that the kind of information which can be gleaned from the patient's words - information about affects, attitudes, interpersonal styles, psychodynamics - can also be derived from his concomitant nonverbal behavior. The study explores the interaction situation, and considers how within deception interactions differences in neuroanatomy and cultural influences combine to produce specific types of body movements and facial expressions which escape efforts to deceive and emerge as leakage or deception clues.
Article
In a proof-of-concept study, an expert obtained 100% deception-detection accuracy over 33 interviews. Tapes of the interactions were shown to N = 136 students who obtained 79.1% accuracy (Mdn = 83.3%, mode = 100%). The findings were replicated in a second experiment with 5 different experts who collectively conducted 89 interviews. The new experts were 97.8% accurate in cheating detection and 95.5% accurate at detecting who cheated. A sample of N = 34 students watched a random sample of 36 expert interviews and obtained 93.6% accuracy. The data suggest that experts can accurately distinguish truths from lies when they are allowed to actively question a potential liar, and nonexperts can obtain high accuracy when viewing expertly questioned senders.
Article
The concept of diagnostic utility was used to create questions that would differentially affect deception detection accuracy. Six deception detection studies show that subtle differences in questioning produced accuracy rates that were predictably, substantially, and reliably above and below chance. The first 3 detection studies demonstrate that diagnostically useful questioning can reliably achieve accuracy rates over 70% with student and experienced judges. The fourth and fifth experiments demonstrated negative diagnostic utility among federal investigators but not students. The final experiment crossed 3 sets of interview questions with experience. Strong question effects produced a swing in accuracy from 32% to 73%. A questioning by experience interaction was also obtained.
Article
The current paper reexamines how suspicion affects deception detection accuracy. McCornack and Levine's (1990) nonlinear "optimal level" hypothesis is contrasted with an "opposing effects" hypothesis. Three different levels of suspicion were experimentally induced and participants (N = 91) made veracity judgments of videotaped interviews involving denials of cheating. The results were more consistent with the opposing effects hypothesis than the optimal level hypothesis.
Article
Past research has shown that people are only slightly better than chance at distinguishing truths from lies. Higher accuracy rates, however, are possible when contextual knowledge is used to judge the veracity of situated message content. The utility of content in context was shown in a series of experiments with students (N = 26, 45, 51, 25, 127) and experts (N = 66). Across studies, average accuracy was 75% in the content in context groups compared with 57% in the controls. These results demonstrate the importance of situating judges within a meaningful context and have important implications for deception theory.
Article
A primary focus of research in the area of deceptive communication has been on people's ability to detect deception. The premise of the current paper is that participants in previous deception detection experiments may not have had access to the types of information people most often use to detect real-life lies. Further, deception detection experiments require that people make immediate judgments, although lie detection may occur over much longer spans of time. To test these speculations, respondents (N = 202) were asked to recall an instance in which they had detected that another person had lied to them. They then answered open-ended questions concerning what the lie was about, who lied to them, and how they discovered the lie. The results suggest people most often rely on information from third parties and physical evidence when detecting lies, and that the detection of a lie is often a process that takes days, weeks, months, or longer. These findings challenge some commonly held assumptions about deception detection and have important implications for deception theory and research.
Article
This essay extends the recent work of Levine, Park, and McCornack (1999) on the veracity effect in deception detection. The probabilistic nature of a receiver's accuracy in detecting deception is explained, and a receiver's detection of deception is analyzed in terms of set theory and conditional probability. Detection accuracy is defined as intersections of sets, and formulas are presented for truth accuracy, lie accuracy, and total accuracy in deception detection experiments. In each case, accuracy is shown to be a function of the relevant conditional probability and the truth-lie base rate. These formulas are applied to the Levine et al. results, and the implications for deception research are discussed.
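As a sketch of the kind of formula the essay describes (the notation here is assumed for illustration, not taken from the paper), total accuracy can be written as a base-rate-weighted sum of the two conditional accuracies:

```latex
% Let P(H) be the truth-lie base rate (the probability that a message is honest),
% a_T = P(\text{judge ``truth''} \mid \text{honest}) the truth accuracy, and
% a_L = P(\text{judge ``lie''} \mid \text{deceptive}) the lie accuracy. Then
P(\text{correct}) \;=\; P(H)\,a_T \;+\; \bigl(1 - P(H)\bigr)\,a_L
```

For fixed conditional accuracies, this makes overall accuracy a linear function of the base rate, which is the relationship the subsequent base-rate experiments test.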
Article
Lying and lie detection are the two components that, together, make up the exchange called the "communication of deception." Deception is an act that is intended to foster in another person a belief or understanding that the deceiver considers false. This chapter presents a primarily psychological point of view and a relatively microanalysis of the verbal and nonverbal exchange between the deceiver and the lie detector. The chapter discusses the definition of deception. It describes the deceiver's perspective in lie detection, including the strategies of deception and behaviors associated with lie telling. The lie detector's perspective is also discussed, describing behaviors associated with the judgments of deception and strategies of lie detection. The chapter discusses the outcomes of the deceptive communication process—that is, the accuracy of lie detection—and explores methodological issues, channel effects in the detection of deception, and other factors affecting the accuracy of lie detection.
Article
Deception research has consistently shown that accuracy rates tend to be just over fifty percent when accuracy rates are averaged across truthful and deceptive messages and when an equal number of truths and lies are judged. Breaking accuracy rates down by truths and lies, however, leads to a radically different conclusion. Across three studies, a large and consistent veracity effect was evident. Truths are most often correctly identified as honest, but errors predominate when lies are judged. Truth accuracy is substantially greater than chance, but the detection of lies was often significantly below chance. Also, consistent with the veracity effect, altering the truth‐lie base rate affected accuracy. Accuracy was a positive linear function of the ratio of truthful messages to total messages. The results show that this veracity effect stems from a truth‐bias, and suggest that the single best predictor of detection accuracy may be the veracity of the message being judged. The internal consistency and parallelism of overall accuracy scores are also questioned. These findings challenge widely held conclusions about human accuracy in deception detection.
Article
The principle of veracity specifies a moral asymmetry between honesty and deceit. Deception requires justification, whereas honesty does not. Three experiments provide evidence consistent with the principle of veracity. In Study 1, participants (N = 66) selected honest or deceptive messages in response to situations in which motive was varied. Study 2 (N = 66) replicated the first with written, open-ended responses coded for deceptive content. Participants in Study 3 (N = 126) were given an opportunity to cheat for monetary gain and were subsequently interrogated about cheating. As predicted, when honesty was sufficient to meet situational demands, honest messages were selected, generated, and observed 98.5% to 100% of the time. Alternatively, deception was observed 60.0% to 64.3% of the time when variations in the same situations made the truth problematic. It is concluded that people usually deceive for a reason, that motives producing deception are usually the same that guide honesty, and that people usually do not lie when goals are attainable through honest means.
Article
Absent a perceived motive for deception, people will infer that a message source is honest. As a consequence, confessions should be believed more often than denials, true confessions will be correctly judged as honest, and false confessions will be misjudged. In the first experiment, participants judged true and false confessions and denials. As predicted, confessions were judged as honest more frequently than denials. Subsequent experiments replicated these results with an independent groups design and with a sample of professional investigators. Together, these three experiments document an important exception to the 50%+ accuracy conclusion, provide evidence consistent with a projected motive explanation of deception detection, and highlight the importance of the content-in-context in judgmental processes.
Article
One explanation for the finding of slightly above-chance accuracy in detecting deception experiments is limited variance in sender transparency. The current study sought to increase accuracy by increasing variance in sender transparency with strategic interrogative questioning. Participants (total N = 128) observed cheaters and noncheaters who were questioned with either indirect background questions or strategic questioning. Accuracy was significantly below chance (44%) in the background questions condition and substantially above chance (68%) in the strategic interrogative questioning condition. The results suggest that transparency can be increased by strategic question asking and that accuracy rates well above chance are possible even for untrained judges exposed to only brief communications.
Article
This study provided the first empirical test of point predictions made by the Park-Levine probability model of deception detection accuracy. Participants viewed a series of interviews containing truthful answers, unsanctioned high-stakes lies, or some combination of both. One randomly selected set of participants (n = 50) made judgments where the probability that each message was honest was P(H) = .50. Accuracy judgments in this condition were used to generate point predictions from the model and tested against the results from a second set of data (n = 413). Participants were randomly assigned to one of eight base-rate conditions where the probability that a message was honest systematically varied from 0.00 to 1.00. Consistent with the veracity effect, participants in the P(H) = .50 condition were significantly more likely to judge messages as truths than as lies, and consequently truths (67%) were identified with greater accuracy than lies (34%). As predicted by the model, overall accuracy was a linear function of message veracity base-rate, the base-rate induction explained 24% of the variance in accuracy scores, and, on average, raw accuracy scores for specific conditions were predicted to within approximately ±2.6%. The findings show that specific deception detection accuracy scores can be precisely predicted with the Park-Levine model.
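Using the conditional accuracies reported in this abstract (67% for truths and 34% for lies in the P(H) = .50 condition), the model's linear base-rate prediction can be illustrated with a short, hypothetical sketch; the function name and linear form are assumptions for illustration, not the authors' code:

```python
def predicted_accuracy(p_honest, truth_accuracy, lie_accuracy):
    """Overall accuracy as a base-rate-weighted sum of conditional accuracies."""
    return p_honest * truth_accuracy + (1 - p_honest) * lie_accuracy

# Conditional accuracies observed in the P(H) = .50 calibration condition.
TRUTH_ACC = 0.67
LIE_ACC = 0.34

# Point predictions across base rates spanning 0.00 to 1.00.
for p in (0.0, 0.25, 0.5, 0.75, 1.0):
    acc = predicted_accuracy(p, TRUTH_ACC, LIE_ACC)
    print(f"P(H) = {p:.2f} -> predicted accuracy {acc:.3f}")
```

At P(H) = .50 this yields a predicted overall accuracy of about 50.5%, close to the slightly-above-chance figure typical of this literature; accuracy rises or falls linearly as the base rate shifts toward truths or lies.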
Book
Imre Lakatos' philosophical and scientific papers are published here in two volumes. Volume I brings together his very influential but scattered papers on the philosophy of the physical sciences, and includes one important unpublished essay on the effect of Newton's scientific achievement. Volume II presents his work on the philosophy of mathematics (much of it unpublished), together with some critical essays on contemporary philosophers of science and some famous polemical writings on political and educational issues. Imre Lakatos had an influence out of all proportion to the length of his philosophical career. This collection exhibits and confirms the originality, range and the essential unity of his work. It demonstrates too the force and spirit he brought to every issue with which he engaged, from his most abstract mathematical work to his passionate 'Letter to the director of the LSE'. Lakatos' ideas are now the focus of widespread and increasing interest, and these volumes should make possible for the first time their study as a whole and their proper assessment.
Spontaneous, unprompted deception detection judgments
Clare, D., & Levine, T. R. (2014). Spontaneous, unprompted deception detection judgments (Manuscript in preparation). East Lansing: Michigan State University.