Article

Nonverbal Leakage and Clues to Deception †


Abstract

Research on facial expression and body movement relevant to psychotherapy has shown that the kind of information which can be gleaned from the patient's words (information about affects, attitudes, interpersonal styles, and psychodynamics) can also be derived from his concomitant nonverbal behavior. The study explores the interaction situation and considers how, within deception interactions, differences in neuroanatomy and cultural influences combine to produce specific types of body movements and facial expressions which escape efforts to deceive and emerge as leakage or deception clues.




... This can be categorized by the predominant, but not exclusive, type of semiotic ground (Sonesson 2010) between expression and denoted object: iconic (resembling the object), indexical/deictic (bringing the object to attention), and symbolic (denoting the object on the basis of a socially shared convention). In addition, such expressions may also have non-denotational meaning, expressing emphasis, modality (uncertainty, rejection, etc.), and affect (surprise, repulsion, etc.) (Ekman and Friesen 1969; Kendon 2004; Streeck 2009). In such cases the non-denotational meaning contains information about the speaker's attitude. ...
... A class of sensory-motor movements that mostly fall on the signal side of the sign/signal divide are affect displays, expressing spontaneous information about the nature and the intensity of affect like surprise, indifference, or repulsion (Ekman and Friesen 1969). Another type of bodily signals are adaptors, which function as part of a total adaptive system (e.g., to satisfy bodily needs, manage emotions, learn instrumental activities, etc.). ...
... Unlike gestures, they are responses to environmental triggers that are not intended to communicate a message, and (generally) performed without (focal) awareness (Ekman and Friesen 1969: 84). Adaptors have been categorized differently according to their form and function (Ekman and Friesen 1969;Freedman 1972), but a major distinction is whether they are geared towards one's body (self-adaptors) or an external object. It has been claimed that in communicative settings where ambiguous, interfering, and conflicting cues are involved, "the speaker is likely to turn to soothing, grooming, rubbing, or scratching, as ways of confirming the boundaries of the self at the time when the sharing of thoughts is also required" (Freedman 1977: 114). ...
Article
Full-text available
Recent cognitive science research suggests that occasional “blindness” to choice manipulations indicates a lack of awareness in choice making. This claim is based on participants’ tendency not to detect choice manipulations and the similarity between their justifications for choices they made and those they were tricked into believing they made. Using a cognitive-semiotic framework, we argue that such conclusions underestimate the embodied, intersubjective nature of human meaning-making. We support this by investigating choice awareness beyond language to include non-verbal behavior. Forty-one participants were asked to choose from pairs of photographs of human faces the one they found most attractive and then to justify their choices, without knowing that for some of the trials they were asked to justify a choice that they had not made. Verbal responses were categorized as (i) non-manipulated, (ii) detected manipulated, and (iii) undetected manipulated trials. Bodily expressions, assessed using five different Categories of Bodily Expression (CBE): Adaptors, Torso, Head, Face and Hand expressions, revealed differences in: (a) duration, (b) rates of occurrence and (c) variety of the CBEs across trials. Thus, even when manipulations were not verbally detected, participants took longer to assess choices, showed increased bodily expressions, and engaged more body parts in undetected manipulations compared to non-manipulated choice trials. This suggests a degree of awareness to the choice manipulation, even if pre-reflective, manifested in participants’ bodily expressions.
... As a crucial aspect of human emotional expression, facial expressions are ubiquitous in daily life. These expressions can be categorized into macro-expressions and micro-expressions based on their duration and intensity [3]. In contrast to the conspicuous macro-expressions, micro-expressions are typically considered to last less than 0.5 seconds, with minimal facial muscle movement [27]. ...
... Facial expression detection aims to identify the onset and offset frames of expressions in long video sequences. Recently, with the rapid advancement of deep learning technology and the emergence of facial expression datasets such as CAS(ME)² [19], CAS(ME)³ [10], SAMM Long Videos (SAMM-LV) [29], SMIC-E-long [23], SAMM [1] and 4DME [12], data-driven deep learning methods have made some progress [15,21]. However, due to the scarcity of expression detection data samples, such methods have not achieved notable success [18]. ...
... The history of micro-expressions can be traced back to 1969, when Ekman [3] and his team, reviewing a video of a psychologist talking with a patient with depression, discovered that the patient attempted to conceal suicidal thoughts behind a positive emotional front. Through repeated study of the video, Ekman's team found several frames in which the patient's face showed very faint expressions of pain. ...
Article
Full-text available
With facial expression recognition gradually becoming a hot topic in the fields of image processing and artificial intelligence research, more and more scholars are paying attention to the fact that the micro-expressions instantly revealed on the face can better reflect human inner emotions and thoughts. In this paper, firstly, the research status of micro-expressions and the commonly used micro-expression datasets are described, and the advantages and disadvantages of each dataset are analyzed. Then micro-expression feature extraction is analyzed in terms of two classes of algorithms. Finally, the application fields of micro-expression research and the challenges facing future development are discussed.
... We believe that a multimodal approach, i.e., integrating behavioral cues from different behavioral modalities, represents a novel opportunity for the faking detection field, as it is considered difficult for applicants to control all aspects of their communication simultaneously. Ekman and Friesen (1969) suggest that deceptive cues are more likely to emerge from body parts with higher expressive capacity and more sensory feedback. Ekman (2001) later refined this, noting that while words and facial expressions are easier to control, it is more difficult to manage body movements and vocal cues. ...
Article
The aim of this study was to investigate the possibility of faking detection in a selection interview using a multimodal approach based on paraverbal, verbal/nonverbal cues, and facial expressions. In addition, we compared detection accuracies of simple linear and complex nonlinear machine learning algorithms. A sample of 102 participants were interviewed in two conditions—honest responding and simulated highly realistic selection. Results showed only several significant univariate effects of experimental condition for paraverbal, verbal, and facial expression cues. All the algorithms performed comparably and above chance levels, except for random forests, which overfitted on the training sets and underperformed on the testing sets. Still, considering the algorithms' accuracy was limited, usefulness of multimodal data for deception detection remains questionable.
... Macro-expressions are observable with the naked eye, though they can be deceptive [1], while micro-expressions [2,3] are short-lived, unconscious expressions [4,5] that are harder to spot and recognize. Micro-expressions are more reliable indicators of psychological states and are more important for understanding people's real emotions. ...
Article
Full-text available
Micro-expressions reveal underlying emotions and are widely applied in political psychology, lie detection, law enforcement and medical care. Micro-expression spotting aims to detect the temporal locations of facial expressions from video sequences and is a crucial task in micro-expression recognition. In this study, the problem of micro-expression spotting is formulated as micro-expression classification per frame. We propose an effective spotting model with sliding windows called the spatio-temporal spotting network. The method involves a sliding window detection mechanism, combines the spatial features from the local key frames and the global temporal features and performs micro-expression spotting. The experiments are conducted on the CAS(ME)2 database and the SAMM Long Videos database, and the results demonstrate that the proposed method outperforms the state-of-the-art method by 30.58% for the CAS(ME)2 and 23.98% for the SAMM Long Videos according to overall F-scores.
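The spotting-by-classification formulation described in this abstract (per-frame scoring followed by locating temporal intervals) can be sketched in a few lines. The score threshold, frame rate, and 0.5 s micro-expression duration cap below are illustrative assumptions, not parameters taken from the paper:

```python
def spot_intervals(frame_scores, threshold=0.5, fps=30, max_micro_s=0.5):
    """Merge consecutive above-threshold frames into (onset, offset) intervals,
    then keep only intervals short enough to count as micro-expressions."""
    intervals, start = [], None
    for i, s in enumerate(frame_scores):
        if s >= threshold and start is None:
            start = i
        elif s < threshold and start is not None:
            intervals.append((start, i - 1))
            start = None
    if start is not None:
        intervals.append((start, len(frame_scores) - 1))
    max_frames = int(max_micro_s * fps)  # duration cap separates micro from macro
    return [(on, off) for on, off in intervals if (off - on + 1) <= max_frames]

# One short burst (a micro-expression candidate) and one long run (too long to qualify):
scores = [0.1, 0.2, 0.9, 0.95, 0.8, 0.1, 0.6] + [0.7] * 40 + [0.1]
print(spot_intervals(scores))  # [(2, 4)]
```

Real systems score each frame with a trained spatio-temporal network rather than taking scores as given; the interval-merging and duration-filtering logic is the same.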
... Compared to macro-expressions, micro-expressions are subtle, involuntary facial movements that last no more than half a second [7,10,30]. When emotions are intentionally or unintentionally concealed, micro-expressions emerge, effectively revealing hidden emotional information [6]. Due to their brief and subtle nature, micro-expressions are usually difficult to detect but hold significant value in emotion research and behavior analysis. ...
... Scherer 1977, 181), which refers both to a discrepancy between verbal and nonverbal elements of speech and to the concept of "nonverbal leakage" (cf. Ekman and Friesen 1969). In the analysis of live poetry, one should of course distinguish between the real author and the fictional speaker, the latter being a textual function rather than a living subject. ...
... Micro-expressions (MEs) are involuntary and rapid facial movements that reveal emotions of people in a hidden manner [1,2]. In view of their short duration (1/25 to 1/3 second) and crypticity, MEs are quite difficult to identify and recognize [3][4][5][6][7][8][9][10][11][12]. ...
Article
Full-text available
Micro-expressions are spontaneous, rapid and subtle facial movements that can hardly be suppressed or fabricated. Micro-expression recognition (MER) is one of the most challenging topics in affective computing. It aims to recognize subtle facial movements which are quite difficult for humans to perceive in a fleeting period. Recently, many deep learning-based MER methods have been developed. However, how to effectively capture subtle temporal variations for robust MER still perplexes us. We propose a counterfactual discriminative micro-expression recognition (CoDER) method to effectively learn the slight temporal variations for video-based MER. To explicitly capture the causality from temporal dynamics hidden in the micro-expression (ME) sequence, we propose ME counterfactual reasoning by comparing the effects of the facts w.r.t. original ME sequences and the counterfactuals w.r.t. counterfactually-revised ME sequences, and then perform causality-aware prediction to encourage the model to learn those latent ME temporal cues. Extensive experiments on four widely-used ME databases demonstrate the effectiveness of CoDER, which results in comparable and superior MER performance compared with that of the state-of-the-art methods. The visualization results show that CoDER successfully perceives the meaningful temporal variations in sequential faces.
... What's more, it is especially interesting that the majority of Wright's chapter concerns how facial expressions betray the true quality of a person's affective state: the long anecdote about Alexander obviously centers this moral, and the observations on either side reach a similar conclusion, to the point of suggesting that analysis of facial expression can uncover hidden information about a person's behaviors and intentions. This, obviously, has been the premise of much research that has developed from the Ekman paradigm, including technology that purports to aid in lie-detection and counterterrorism (Ekman, 2009; Leys, 2017; Maguire, 2015); Wright, indeed, seems to anticipate Ekman and Friesen's (1969) concept of nonverbal leakage. ...
Article
Full-text available
“The study of emotional expression,” it has recently been said, “has long been the provenance of scientific discovery and heated controversy” (Keltner et al., 2016, p. 467)—and nothing has been more central to this inquiry than attempts to understand the precise connection between affective experience and human facial expression. But as science moves forward, it is also wise to consider where it has been. This Brief Report reproduces a pre-Darwinian account of the facial expression of emotion from Thomas Wright’s The Passions of the Minde in Generall (1604), one of the most interesting books on emotion from the English Renaissance. Before the modern scientific revolution, Wright’s theorization anticipates several key aspects of 21st Century thought on the facial expression of emotion, an intriguing reminder of the connection between historical folk understanding and modern research.
... According to Van Der Heijden [53] and Oliveira [29], we identify three principles -'Scarcity', 'Authority', and 'Consistency', that significantly amplify phishing effectiveness. Additionally, we explore frequently observed phishing cues such as 'Enticement', 'Urgency Tactics', and 'Personalized Greeting' ( [5,6,10,21,38]). These cues are prominent in deceptive communications, either tempting users into actions or arising from a lack of personalization. ...
... Examining the interaction between AUs might yield significant observations regarding an individual's psychological condition, facilitating prompt identification, tracking of treatment progress, and tailored healthcare. Table 1 shows the most commonly used action units for emotion recognition with their related muscle movement areas on the face [9]. By deciphering facial expressions, one can transcend subjective interpretations and gain profound insight into the connection between the internal state and external manifestations of an individual [10]. The identification of micro-expressions (MEs) may contribute across a wide range of domains, including mental health, lie detection, law enforcement, political psychology, medical care, and human-computer interaction [11]. ...
Article
Full-text available
Mental health is indispensable for effective daily functioning and stress management. Facial expressions may provide vital clues about the mental state of a person as they are universally consistent across cultures. This study intends to detect the emotional variances through facial micro-expressions using facial action units (AUs) to identify probable mental health issues. In addition, convolutional neural networks (CNN) were used to detect and classify the micro-expressions. Further, combinations of AUs were identified for the segmentation of micro-expression classes using K-means clustering. Two benchmark datasets, CASME II and SAMM, were employed for the training and evaluation of the model. The model achieved an accuracy of 95.62% on CASME II and 93.21% on the SAMM dataset, respectively. Subsequently, a case analysis was done to identify depressive patients using the proposed framework and it attained an accuracy of 92.99%. This experiment revealed that emotions like disgust, sadness, anger, and surprise are the prominent emotions experienced by depressive patients during communication. The findings suggest that leveraging facial action units for micro-expression detection offers a promising approach to mental health diagnostics.
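The idea of mapping action-unit (AU) combinations to emotion classes can be illustrated with a simple lookup. The AU sets below follow common FACS conventions (e.g., AU6+AU12 for happiness) and are assumptions for illustration; the paper's actual AU combinations, CNN, and clustering pipeline are not reproduced here:

```python
# Illustrative FACS-style mapping from AU combinations to basic emotions.
# These sets are conventional examples, not the combinations used in the paper.
AU_TO_EMOTION = {
    frozenset({6, 12}): "happiness",       # cheek raiser + lip corner puller
    frozenset({1, 4, 15}): "sadness",      # inner brow raiser + brow lowerer + lip corner depressor
    frozenset({4, 5, 7, 23}): "anger",     # brow lowerer + upper lid raiser + lid tightener + lip tightener
    frozenset({1, 2, 5, 26}): "surprise",  # brow raisers + upper lid raiser + jaw drop
    frozenset({9, 15}): "disgust",         # nose wrinkler + lip corner depressor
}

def classify_aus(active_aus):
    """Return the emotion whose full AU set is present among the detected AUs,
    preferring the largest matching combination; 'neutral' if none matches."""
    active = set(active_aus)
    best, best_overlap = "neutral", 0
    for aus, emotion in AU_TO_EMOTION.items():
        if aus <= active and len(aus) > best_overlap:
            best, best_overlap = emotion, len(aus)
    return best

print(classify_aus([6, 12]))     # happiness
print(classify_aus([1, 4, 15]))  # sadness
```

In practice AU activations would come from an AU detector on video frames rather than being supplied by hand, and soft clustering (as the paper's K-means step suggests) replaces exact set matching.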
... We also found that, while children with DLD manipulated objects during UPs, TD children produced more self-adaptors. Thus, although both groups relied on adaptors during UPs, the way they used them, potentially to suppress discomfort during hindered speech planning (as stated by Ekman and Friesen 1969), was qualitatively different. Since self-adaptors are more nondeliberate than object-adaptors, it could be possible that TD children used them when thinking about their following narrative sequence, whereas children with DLD used more object-adaptors when they could not go on with their narrative because of their language deficit. ...
Article
Full-text available
This study aims at observing the co-occurrence of filled (FP) and unfilled pauses (UP) and gestures in the narratives of children with and without Developmental Language Disorder (DLD). Although children with DLD are known to be more “disfluent” than typically developing children (TD), little is known about the role of pauses in children’s speech and their interaction with gestures. 22 French-speaking children with DLD and 22 age- and gender-matched controls, between 7 and 10, recounted a cartoon excerpt. We annotated pauses and their position in utterances, and we coded gestures according to their function. Despite a similar pausing rate across groups, results show that TD children produced more utterance-beginning FPs and more mid-utterance UPs, while children with DLD produced more standalone FPs and mid-utterance UPs. Furthermore, multimodal patterns of co-occurrence, specific to pause type, emerged. While both groups had similar gesture rates and produced mostly referential gestures, TD children produced slightly more beat gestures during FPs and more self-adaptors and pragmatic gestures during UPs. Children with DLD produced more referential gestures and object-adaptors during UPs. These differences point to the temporal relationship between gestures and pauses and the multiple ways these two phenomena may interact according to the child’s profile.
... Although facial expressions are crucial elements of nonverbal behavior, enough information related to the teacher's (and children's) nonverbal (or verbal) behavior seems to be conveyed, resulting in accurate thin slices ratings. Aspects of a person's state, personality or characteristics of an interaction seem to chronically "leak through" in behavior ("nonverbal leakage") and provide additional information that is not available in the verbal channel (Ekman and Friesen, 1969). The concept of nonverbal leakage might be involved in the explanation of the accuracy of the thin slices raters in the present study. ...
Article
Full-text available
There are a variety of instruments for measuring interaction quality of Early Childhood Education and Care (ECEC) teachers. However, these instruments are extremely resource-demanding in terms of time and money. Hence, a more economical and yet accurate method for measuring interaction quality of ECEC teachers would be desirable. The so-called thin slices technique has been applied to observe, measure and predict human behavior with only minimal amounts of information. In a wide array of research domains, thin slices ratings (i.e., ratings based on first impressions) proved to be accurate. The present study explores the accuracy of thin slices ratings of interaction quality in toddler classrooms along two CLASS Toddler domains (Emotional and Behavioral Support and Engaged Support for Learning). Eight CLASS-certified raters assessed interaction quality based on 30-s classroom videos. The findings suggest predominantly good reliabilities of these ratings. Confirmatory factor analysis yielded evidence for construct validity, meaning that thin slices raters could differentiate between two domains of interaction quality. Further, thin slices ratings correlated, at least partly, with ratings based on full-length videos, indicating that thin slices raters and raters watching the full-length videos had a similar impression of interaction quality of ECEC teachers.
... emotions. Ekman and Friesen [4] also discovered micro-expressions in 1969. ...
Article
Full-text available
Micro-expressions are instantaneous flashes of facial expressions that reveal a person's true feelings and emotions. Micro-expression recognition (MER) is challenging due to its low motion intensity, short duration, and the limited number of publicly available samples. Although the present MER methods have achieved great progress, they face the problems of a large number of training parameters and insufficient feature extraction ability. In this paper, we propose a lightweight network MFE-Net with Res-blocks to extract multi-scale features for MER. To extract more valuable features, we incorporate Squeeze-and-Excitation attention and multi-headed self-attention mechanisms in our MFE-Net. The proposed network is used for learning features from three optical flow features (i.e. optical strain, horizontal and vertical optical flow images) which are calculated from the onset and apex frames. We employ the LOSO cross-validation strategy to conduct experiments on CASME II and the composite dataset selected by MEGC2019, respectively. The extensive experimental results demonstrate the viability and effectiveness of our method.
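Of the three optical-flow inputs this abstract mentions, optical strain is derived from the horizontal and vertical flow fields by spatial differentiation. Below is a minimal pure-Python sketch using toy list-of-lists flow fields and unnormalized finite differences; a real pipeline would first estimate the flow between the onset and apex frames:

```python
import math

def optical_strain(u, v):
    """Optical strain magnitude per pixel from horizontal (u) and vertical (v)
    flow fields: eps = sqrt(ux^2 + vy^2 + 0.5*(uy + vx)^2), using unnormalized
    finite differences clamped at the image borders."""
    h, w = len(u), len(u[0])
    def dx(f, y, x):
        return f[y][min(x + 1, w - 1)] - f[y][max(x - 1, 0)]
    def dy(f, y, x):
        return f[min(y + 1, h - 1)][x] - f[max(y - 1, 0)][x]
    strain = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            ux, vy = dx(u, y, x), dy(v, y, x)
            uy, vx = dy(u, y, x), dx(v, y, x)
            strain[y][x] = math.sqrt(ux * ux + vy * vy + 0.5 * (uy + vx) ** 2)
    return strain

# A rigid translation (constant flow) deforms nothing, so its strain is zero;
# non-rigid facial muscle motion produces nonzero strain.
u = [[1.0] * 4 for _ in range(4)]
v = [[0.0] * 4 for _ in range(4)]
print(optical_strain(u, v)[1][1])  # 0.0
```

This is why strain is a useful companion to raw flow for micro-expressions: it responds to local deformation (muscle movement) while ignoring global head translation.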
... For example, while most deception cues are objectively faint, unreliable, or substantially moderated by contextual factors (DePaulo et al., 2003;Hauch et al., 2015;Sporer & Schwandt, 2007), people often believe that eye gaze aversion reveals a liar despite it having little-to-no diagnostic value (Global Deception Research Team, 2006). More generally, cue theories describe a host of communicative tells that can reveal a liar from a truth-teller (Blandón-Gitlin et al., 2014;Buller & Burgoon, 1996;Ekman & Friesen, 1969;Vrij et al., 2008;Zuckerman et al., 1981); at best, however, the evidence is mixed to suggest the social, psychological, or behavioral cues that reveal dishonesty. Therefore, subjective beliefs about the cues that reveal deception, and objective evidence about the cues that reliably reveal deception, are typically inconsistent. ...
Preprint
Full-text available
Subjective lying rates are often strongly and positively correlated. Called the deception consensus effect, people who lie often tend to believe others lie often, too. The present paper evaluated how this cognitive bias also extends to deception detection. Two studies (Study 1: N = 180 students; Study 2: N = 250 people from the general public) had participants make 10 veracity judgments based on videotaped interviews, and also indicate subjective detection abilities (self and other). Subjective detection abilities were significantly linked, supporting a detection consensus effect, yet they were unassociated with objective detection accuracy. More subjectively overconfident detectors (e.g., their subjective detection accuracy was greater than their objective detection accuracy) reported telling more white and big lies, cheated more on a behavioral task, and were more ideologically conservative than less subjectively overconfident detectors. This evidence supports and extends truth-default theory, highlighting possible asymmetries in subjective and objective veracity assessments.
... • Micro-expressions. These occur when an emotion appears across all regions of the face in less than 0.5 seconds (Haggard and Isaacs, 1966; Ekman and Friesen, 1969a). In other words, the expression of the emotion is visible for less than a second. ...
Article
Full-text available
Abstract: One of the important aspects of criminal interviews is the information obtained from the person being interrogated. This information can be obtained nonverbally as well as verbally. The aim of this narrative review is to outline what should be considered when analyzing nonverbal behavior in criminal interviews. In this context, the basic principles of analyzing nonverbal behavior are discussed in detail and their role in interrogations is emphasized. In addressing these points, the sub-dimensions of nonverbal behavior are categorized and explained as facial expressions, voice, gestures, and other body movements. Furthermore, the profile of individuals who lie is discussed. Finally, a Nonverbal Behavior Training Program is proposed to develop the nonverbal-behavior skills of professionals involved in criminal interviews. It is expected that this program will contribute to the investigative work of the relevant professionals. Keywords: criminal interview, nonverbal behavior analysis, nonverbal behavior training program.
Article
Full-text available
Automatic spotting and classification of facial Micro-Expressions (MEs) in ’in-the-wild’ videos is a topic of great interest in different fields involving sentiment analysis. Unfortunately, automatic spotting also represents a great challenge due to MEs’ quick temporal evolution and the lack of correctly annotated videos captured in the wild. In fact, the former makes MEs difficult to grasp, while the latter results in the scarcity of real examples of spontaneous expressions in uncontrolled contexts. This paper proposes a novel but very simple spotting method that mainly exploits MEs’ perceptual characteristics. Specifically, the contribution is twofold: i) a distinguishing feature is defined for MEs in a domain that can capture and represent the perceptual stimuli of MEs, thus representing a suitable input for a standard binary classifier; ii) a proper numerical strategy is developed to augment the training set used to define the classification model. The rationale is that since MEs are visible by a human observer almost regardless of the specific context, it stands to reason that they have some sort of perceptual signature that activates pre-attentive vision. In this work this fingerprint is called Perceptual Emotional Signature (PES) and is modelled using the well-known Structural SIMilarity index (SSIM), which is a measure based on visual perception. A machine learning based classifier is then appropriately trained to recognize PESs. For this purpose, a suitable numerical strategy is applied to augment the training set; it mainly exploits error propagation rules in accordance with perceptual sensitivity to noise. The whole procedure is called PESMESS - Perceptual Emotional Signature of Micro-Expressions via SSIM and SVM. Preliminary studies show that SSIM can effectively guide the detection of MEs by identifying frames that contain PESs.
Localization of PESs is accomplished using a properly trained Support Vector Machine (SVM) classifier that benefits from very short input feature vectors. Various tests on different benchmarking databases, containing both ’simulated’ and ’in-the-wild’ videos, confirm the potential and promising effectiveness of PESMESS when trained on appropriately perception-based augmented feature vectors.
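The SSIM signal underlying PESMESS can be illustrated with a single-window SSIM computed between frames: identical frames score 1, and a sudden local intensity change (a candidate micro-expression onset) lowers the score. This toy version, using flat grayscale lists with no Gaussian windowing, SVM stage, or augmentation, is a sketch of the general SSIM formula rather than the paper's actual feature construction:

```python
def ssim(x, y, c1=(0.01 * 255) ** 2, c2=(0.03 * 255) ** 2):
    """Single-window SSIM between two equal-size grayscale images given as
    flat lists of 0-255 values: luminance, contrast, and structure terms
    combined as ((2*mx*my + c1)(2*cov + c2)) / ((mx^2 + my^2 + c1)(vx + vy + c2))."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / (n - 1)
    vy = sum((b - my) ** 2 for b in y) / (n - 1)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (n - 1)
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx * mx + my * my + c1) * (vx + vy + c2))

frame_a = [100, 102, 98, 101, 99, 100, 103, 97]
frame_b = [100, 102, 98, 180, 99, 100, 103, 97]  # one pixel jumps: local change
print(ssim(frame_a, frame_a))  # 1.0 for identical frames
print(ssim(frame_a, frame_b))  # < 1: the dip flags a candidate frame
```

Frame-to-frame SSIM dips like this one would then form the short feature vectors fed to the SVM classifier the abstract describes.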
Article
This study, grounded in the interpersonal deception theory (IDT), aims to analyze the new form of digital deception known as “visual poverty” in livestreaming rooms. Through a multimodal discourse analysis of the collected data, this study found three distinct linguistic strategies employed in “visual poverty” livestream: illocutionary strategy, discourse strategy, and nonverbal strategy. These strategies are designed to achieve three interactional goals: constructing a false identity, reinforcing credibility, and gaining economic benefits. Based on these findings, this study further analyzed why it is difficult to detect digital deception and proposed suggestions to prevent it. This study theoretically proposes online IDT and offers practical suggestions for preventing digital deception, bearing both theoretical and practical significance.
Chapter
In recent years, the understanding of human emotions has deepened considerably, advancing how emotion is perceived across diverse age groups and contexts. Researchers have progressed significantly from theorizing emotions to applying them across various physical senses. However, many questions regarding the functionality of emotions remain unanswered, and the need to comprehend emotion recognition opens up the possibility of deeper conceptual analysis. Computers are capable of identifying and interpreting human emotions accurately by examining facial expressions, verbal intonations, physiological signals, and other cues. The fundamental layout of human emotion is based on seven primary emotions, i.e., fear, anger, joy, sadness, contempt, disgust, and surprise, through which emotion analysis in audio signals, linguistic patterns, textual patterns, skin conductance, and physiological signals is carried out. In this issue of Emotion Recognition and Analysis, we offer a deeper interpretation of emotion psychology, affect elicitation, and sensor modalities. This chapter helps us to understand the fundamental nature of emotion, the identification of affect mechanisms, emotion regulation in diverse social contexts, and the application of emotion mechanisms for overall social well-being.
Article
This essay examines the detailed process of isolating facial data from the context of its emergence through the early work of psychologist Paul Ekman in the 1960s. It explores how Ekman's data practices have been developed, criticized, and compromised by situating them within the political and intellectual landscape of his early career. This essay follows Ekman's journey from the Langley Porter Neuropsychiatric Institute to New Guinea, highlighting his brief but notable collaborations with psychologist Charles E. Osgood and NIH researchers D. Carleton Gajdusek and E. Richard Sorenson. It argues that the different meanings assigned to the human face resulted in how each group developed their studies – examining facial expressions either in interaction, where they shape reciprocal actions in interpersonal communication, or in isolation, where faces surface from the individual's unconscious interior.
Article
We express our personality through verbal and nonverbal behavior. While verbal cues are mostly related to the semantics of what we say, nonverbal cues include our posture, gestures, and facial expressions. Appropriate expression of these behavioral elements improves conversational virtual agents’ communication capabilities and realism. Although previous studies focus on co-speech gesture generation, they do not consider the personality aspect of the synthesized animations. We show that automatically generated co-speech gestures naturally express personality traits, and heuristics-based adjustments for such animations can further improve personality expression. To this end, we present a framework for enhancing co-speech gestures with the different personalities of the Five-Factor model. Our experiments suggest that users perceive increased realism and improved personality expression when combining heuristics-based motion adjustments with co-speech gestures.
Article
Full-text available
Macro-expression spotting is an important prior step in many dynamic facial expression analysis applications. It automatically detects the onset and offset image frames of a macro-expression in the video. The state-of-the-art methods of macro-expression spotting characterize the movement of facial muscle through explicit analysis of the optical flow map and have achieved promising results. However, optical flow map estimation and expression spotting in these methods are performed in two separate and successive stages. In this paper, we propose a new dual-branch network to achieve unified optimization for expression spotting and optical flow estimation tasks. The proposed dual-branch network implicitly learns optical flow during training and enriches the feature representation with motion information. During inference, we use only the encoder of the optical flow estimation network for motion feature extraction and integrate it with expression spotting into a one-stage framework. The proposed method eliminates the need to construct optical flow maps explicitly during inference and significantly reduces the computational cost. We also apply a consistency constraint on the global- and local-level semantic features of the clip to guide the model to focus on the category-consistent regions of the video. We evaluate the proposed methods extensively on two popular facial expression spotting datasets, CAS(ME)² and SAMM Long Videos. The experimental results show that compared with the state-of-the-art methods, the proposed method improves the F1-scores for MaE spotting by 5.81% and 1.57% on the CAS(ME)² and SAMM Long Videos datasets respectively.
Article
Full-text available
Micro-expression is a special kind of human emotional display. Because micro-expressions are brief, low in intensity, and confined to local facial regions, recognizing them is a difficult task. At the same time, they are natural, spontaneous, and hard to conceal, so they convey a person's actual psychological state well and therefore have clear research value and practical significance. This paper focuses on micro-expression recognition in the field of deep learning, surveying existing research and its trends. Previous reviews have overlooked handcrafted features as an important part of the micro-expression recognition framework and have lacked analysis of the various enhancement techniques, so a new deep-learning-based recognition framework is proposed, designed from the perspective of modularity and streaming data. Unlike prior work that feeds data directly into a network for training and recognition, the framework uses handcrafted features as an initial encoding of the micro-expression data, trains a deep model on them, incorporates a feature-enhancement module through modular embedding, and finally performs classification and recognition. The article summarizes and analyzes each part of the framework and comprehensively introduces current problems, experimental protocols, evaluation metrics, and application areas, closing with possible future research directions. This survey thus offers readers a fresh understanding of the field's development, while the proposed recognition framework provides a reference for later research.
Article
Full-text available
Subjective lying rates are often strongly and positively correlated: people who lie often tend to believe that others lie often, too, a bias called the deception consensus effect. The present paper evaluated how this cognitive bias also extends to deception detection. Two studies (Study 1: N = 180 students; Study 2: N = 250 people from the general public) had participants make 10 veracity judgments based on videotaped interviews, and also indicate subjective detection abilities (self and other). Subjective, perceived detection abilities were significantly linked, supporting a detection consensus effect, yet they were unassociated with objective detection accuracy. More overconfident detectors—those whose subjective detection accuracy was greater than their objective detection accuracy—reported telling more white and big lies, cheated more on a behavioral task, and were more ideologically conservative than less overconfident detectors. This evidence supports and extends contextual models of deception (e.g., the COLD model), highlighting possible (a)symmetries in subjective and objective veracity assessments.
Conference Paper
Numerous studies have demonstrated the impact of emotional scenes on the interpretation of facial expressions. Nevertheless, limited research has explored the extent to which the exposure time of facial expressions influences the effect of emotional scenes on the categorization of facial expressions that convey congruent or incongruent emotional valence with the scene. To address this gap, the current study employed the Micro Expression Training Tool (METT) to examine the influence of emotional scenes on the perception of facial expressions presented for varying durations, including 120 ms or 200 ms (i.e., microexpressions) and 600 ms or 1000 ms (i.e., macroexpressions). Forty-seven participants were asked to categorize facial expressions displayed on the scene. The results showed that recognition of fear was more accurate in negative scenes than in positive scenes, while recognition of surprise was more accurate in positive scenes than in negative scenes. Nevertheless, the exposure time of facial expression did not impact the effect of emotional scenes. Therefore, this study indicates that the perception of both microexpressions and macroexpressions of fear and surprise is influenced by emotional scenes, and the minimal condition under which the effect of emotional scenes manifests is 120 ms of exposure.
Article
Full-text available
This preregistered study replicates and extends studies concerning emotional response to wartime rally speeches and applies it to U.S. President Donald Trump’s first national address regarding the COVID-19 pandemic on March 11, 2020. We experimentally test the effect of a micro-expression (ME) by Trump associated with appraised threat on change in participant self-reported distress, sadness, anger, affinity, and reassurance while controlling for followership. We find that polarization is perpetuated in emotional response to the address, which focused on portraying the COVID-19 threat as being of Chinese provenance. We also find a significant, albeit slight, effect of Trump’s ME on self-reported sadness, suggesting that this facial behavior did not diminish his speech but instead served as a form of nonverbal punctuation. Further exploration of participant response using the Linguistic Inquiry and Word Count software reinforces and extends these findings.
Article
Two contemporary theoretical perspectives explain when and how people make lie–truth judgments. The adaptive lie detector account (ALIED) and truth-default theory (TDT) are described, compared, and contrasted. ALIED and TDT come from different scholarly traditions and propose very different processes and mechanisms, yet they converge on many behavioral predictions. Both views presume adaptive processes. ALIED presumes that humans are adaptive by using available information while TDT presumes that the adaptive value of efficient communication outweighs the value of real-time deception detection. ALIED proposes a Bayesian reasoning approach to lie–truth judgments that weighs information based on its perceived diagnosticity, making no distinction in the processes between reaching a lie and truth judgment. TDT alternatively proposes that the passive presumption of the truth is the default, and the presence of triggers is required to reach a lie judgment. Suggestions for future research are provided.
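ALIED's diagnosticity-weighted, Bayesian account of lie–truth judgments can be illustrated with likelihood ratios: each observed cue multiplies the prior odds of a lie by P(cue | lie) / P(cue | truth), so a barely diagnostic cue (ratio near 1) hardly moves the judgment while a strongly diagnostic cue dominates. The priors and ratios below are illustrative numbers, not values from the article:

```python
def posterior_lie(prior_lie, likelihood_ratios):
    """Update P(lie) by multiplying the prior odds by each cue's
    likelihood ratio P(cue | lie) / P(cue | truth)."""
    odds = prior_lie / (1 - prior_lie)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1 + odds)

# A weakly diagnostic cue (LR near 1) barely shifts the judgment;
# adding a strongly diagnostic cue (LR = 4) dominates the update.
print(posterior_lie(0.5, [1.1]))       # ~0.524
print(posterior_lie(0.5, [1.1, 4.0]))  # ~0.815
```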
Article
Full-text available
Micro‐expressions are spontaneous and unconscious facial movements that reveal individuals’ genuine inner emotions. They hold significant potential in various psychological testing fields. As the face is a 3D deformation object, the emergence of facial expression leads to spatial deformation of the face. However, existing databases primarily offer 2D video sequences, limiting descriptions of 3D spatial information related to micro‐expressions. Here, a new micro‐expression database is proposed, which contains 2D image sequences and corresponding 3D point cloud sequences. These samples were classified using both an objective method based on the facial action coding system and a non‐objective emotion classification method that considers video contents and participants’ self‐reports. A variety of feature extraction techniques are applied to 2D data, including traditional algorithms and deep learning methods. Additionally, a novel local curvature‐based algorithm is developed to extract 3D spatio‐temporal deformation features from the 3D data. The authors evaluated the classification accuracies of these two features individually and their fusion results under leave‐one‐subject‐out (LOSO) and tenfold cross‐validation. The results demonstrate that fusing 3D features with 2D features results in improved recognition performance compared to using 2D features alone.
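The leave-one-subject-out protocol mentioned above holds out every sample from one participant per fold, so a classifier is never tested on a subject it was trained on. A minimal sketch of the fold generation (the subject labels are invented for illustration, not drawn from the database):

```python
def loso_splits(subject_ids):
    """Yield (train_idx, test_idx) pairs, holding out one subject at a time."""
    subjects = sorted(set(subject_ids))
    for held_out in subjects:
        test = [i for i, s in enumerate(subject_ids) if s == held_out]
        train = [i for i, s in enumerate(subject_ids) if s != held_out]
        yield train, test

# Six samples from three subjects yield three folds.
ids = ["s1", "s1", "s2", "s2", "s3", "s3"]
for train, test in loso_splits(ids):
    print(train, test)
```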
Chapter
Microexpressions are involuntary facial movements that often reflect a person’s true emotions. Their fleeting nature and subtle shifts, however, make them challenging to detect. Our earlier work, the Facial Dynamics Map, represented a microexpression by estimating dense optical flow. Although it achieved high prediction accuracy, it was inefficient in feature extraction and lacked magnitude information. In this chapter, we address these issues by proposing ExpressionFlow, a novel descriptor which directly captures the dominant motion patterns in microexpression image sequences. Geometrically intuitive and relatively easy to implement, ExpressionFlow reflects the nature of microexpressions while preserving complete information. Comparative experiments on four benchmark datasets suggest that our method attains the best performance in real-time compared with other state-of-the-art algorithms.
Article
Facial emotion expressions play a central role in interpersonal interactions; these displays are used to predict and influence the behavior of others. Despite their importance, quantifying and analyzing the dynamics of brief facial emotion expressions remains an understudied methodological challenge. Here, we present a method that leverages machine learning and network modeling to assess the dynamics of facial expressions. Using video recordings of clinical interviews, we demonstrate the utility of this approach in a sample of 96 people diagnosed with psychotic disorders and 116 never-psychotic adults. Participants diagnosed with schizophrenia tended to move from neutral expressions to uncommon expressions (e.g., fear, surprise), whereas participants diagnosed with other psychoses (e.g., mood disorders with psychosis) moved toward expressions of sadness. This method has broad applications to the study of normal and altered expressions of emotion and can be integrated with telemedicine to improve psychiatric assessment and treatment.
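The expression dynamics described (e.g., participants with schizophrenia moving from neutral toward uncommon expressions) can be summarized as a first-order transition matrix estimated from a sequence of per-frame expression labels. The sketch below uses invented labels and is not the authors' network-modeling pipeline:

```python
from collections import Counter

def transition_probs(labels):
    """Estimate P(next | current) from a sequence of expression labels."""
    counts = Counter(zip(labels, labels[1:]))   # count consecutive pairs
    totals = Counter(labels[:-1])               # count outgoing transitions
    return {pair: n / totals[pair[0]] for pair, n in counts.items()}

seq = ["neutral", "neutral", "fear", "neutral", "sad"]
probs = transition_probs(seq)
print(probs[("neutral", "fear")])  # one third of transitions out of neutral
```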
Article
Full-text available
The growth of machine learning and artificial intelligence has made it possible for automatic lie detection systems to emerge. These can be based on a variety of cues, such as facial features. However, there is a lack of knowledge about both the development and the accuracy of such systems. To address this lack, we conducted a review of studies that have investigated automatic lie detection systems by using facial features. Our analysis of twenty-eight eligible studies focused on four main categories: dataset features, facial features used, classifier features and publication features. Overall, the findings showed that automatic lie detection systems rely on diverse technologies, facial features, and measurements. They are mainly based on factual lies, regardless of the stakes involved. On average, these automatic systems were based on a dataset of 52 individuals and achieved an average accuracy ranging from 61.87% to 72.93% in distinguishing between truth-tellers and liars, depending on the types of classifiers used. However, although the leakage hypothesis was the most used explanatory framework, many studies did not provide sufficient theoretical justification for the choice of facial features and their measurements. Bridging the gap between psychology and the computational-engineering field should help to combine theoretical frameworks with technical advancements in this area.
Article
Full-text available
The surge of online scams is taking a considerable financial and emotional toll. This is partially because humans are poor at detecting lies. In a series of three online experiments (N = 102, 108, and 100 in Experiments 1–3, respectively) where participants are given the opportunity to lie as well as to assess the potential lies of others, we show that poor lie detection is related to the suboptimal computations people engage in when assessing lies. Participants used their own lying behaviour to predict whether other people lied, despite this cue being uninformative, while under-using more predictive statistical cues. This was observed by comparing the weights participants assigned to different cues with those of a model trained on the ground truth. Moreover, across individuals, reliance on statistical cues was associated with better discernment, while reliance on one’s own behaviour was not. These findings suggest scam detection may be improved by using tools that augment relevant statistical cues.
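The cue-weighting result described above, that judges lean on an uninformative cue (their own lying behaviour) while under-using predictive statistical cues, can be illustrated with a plain diagnosticity check: a cue that does not vary with the ground truth cannot correlate with it. The data below are synthetic, not from the experiments:

```python
def pearson(x, y):
    """Pearson correlation; returns 0.0 when either variable is constant."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x) ** 0.5
    vy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (vx * vy) if vx and vy else 0.0

# Synthetic data: 1 = lie, 0 = truth, for six assessed statements.
truth_labels  = [1, 1, 0, 0, 1, 0]
stat_cue      = [0.9, 0.8, 0.2, 0.1, 0.7, 0.3]  # a predictive statistical cue
own_behaviour = [0.5, 0.5, 0.5, 0.5, 0.5, 0.5]  # judge's own lying rate: constant
print(pearson(truth_labels, stat_cue))       # strongly positive -> diagnostic
print(pearson(truth_labels, own_behaviour))  # 0.0 -> uninformative
```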
Chapter
In psychotherapy research, one is often hard pressed to make sense out of the many behaviors, processes, and other phenomena which can be observed in the therapy situation. The present report is concerned with one class of behaviors and processes which cannot be observed—namely, facial expressions which are so short-lived that they seem to be quicker-than-the-eye. These rapid expressions can be seen when motion picture films are run at about one-sixth of their normal speed. The film and projector thus become a sort of temporal microscope, in that they expand time sufficiently to enable the investigator to observe events not otherwise apparent to him.
Article
The purposes of this study were: (a) to obtain a clearer idea of individual differences in nonverbal interview behavior than is provided by everyday clinical experience; (b) to explore intraindividual variability in nonverbal behavior during the interview; and (c) to explore the relationships between individual differences and intraindividual variations in nonverbal behavior, on the one hand, and personality variables or other aspects of the individual's behavior during interviews, on the other hand. Nonverbal behavior occurs during psychotherapy and is apparently relevant to factors and processes of concern to psychotherapists. The implications of the findings on nonverbal interview behavior are discussed. (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Article
This paper presents a scheme, derived both rationally and empirically, for the analysis of body movements occurring spontaneously in psychotherapeutic interviews. Focussing on hand movements, a distinction is made between two broad, conceptually different, and independent classes of movements: those accompanying speech (object-focussed), and those involving some form of self-stimulation but not speech-related (body-focussed). Furthermore, different kinds of object-focussed movements are identified according to their integration with and primacy vis-a-vis speech. Observations on two paranoid patients, each at two different points in his treatment, suggest that the coding scheme can reflect the patient's altered clinical states.