Article

Evidence for training the ability to read microexpressions of emotion

Authors: David Matsumoto and Hyi Sung Hwang

Abstract

Microexpressions are extremely quick facial expressions of emotion that appear on the face for less than half a second. To date, no study has demonstrated that the ability to read them can be trained. We present two studies that do so, as well as evidence for the retention of the training effects. In Study 1, department store employees were randomly assigned to a training or comparison group. The training group had significantly higher scores than the comparison group in microexpression reading accuracy at the end of the training; 2 weeks later, the training group also had better third-party ratings of social and communicative skills on the job. Study 2 demonstrated that individuals trained in reading microexpressions retained their ability to read them better than a comparison group tested 2–3 weeks after initial training. These results indicate that the ability to read microexpressions can be trained and that the training effects are retained.

Keywords: Emotion, Facial expressions, Microexpressions, Training, Nonverbal behavior


... Due to their involuntary nature and their close association with emotion hiding and deception (Ekman and Friesen, 1969; Ekman, 2003, 2009; Matsumoto and Hwang, 2011, 2018; Yan et al., 2013), microexpressions have great application potential in fields that require face-to-face interpersonal skills, such as national security, law enforcement, medical treatment, education, and politics (e.g., Russell et al., 2008; Endres and Laidlaw, 2009; Frank et al., 2009; Stewart et al., 2009; Weinberger, 2010; Matsumoto and Hwang, 2011, 2018; Frank and Svetieva, 2015; Zhu et al., 2017, 2019; Stewart and Svetieva, 2021). However, because the durations of micro-expressions are so short, it is very difficult for observers to accurately detect and recognize these fleeting involuntary facial expressions (e.g., Ekman and Friesen, 1969; Ekman, 2009; Matsumoto and Hwang, 2011; Shen et al., 2012, 2016; Zeng et al., 2018). ...
... The short duration of micro-expressions is not the only obstacle encountered in micro-expression recognition. To the best of our knowledge, only a few studies have tried to investigate the psychological or brain mechanisms of micro-expression recognition, which makes it difficult for scientists to design efficient micro-expression recognition training programs (e.g., Ekman, 2002; Frank et al., 2009; Matsumoto and Hwang, 2011; Döllinger et al., 2021). For example, researchers have found that factors like emotional context, age, childhood family environment, personality, and profession (e.g., Frank et al., 2009; Hurley et al., 2014; Zhang et al., 2014, 2020b; Svetieva and Frank, 2016; Demetrioff et al., 2017; Felisberti, 2018) may affect the recognition accuracy of micro-expressions. ...
... Therefore, in the present research, we utilized a double-blind, placebo-controlled, mixed-model experimental design to investigate the effects of intranasal oxytocin administration on the recognition of micro-expressions. More specifically, considering that previous studies have suggested that oxytocin may have an emotion-specific enhancement effect on the recognition of macro-expressions (e.g., Di Simplicio et al., 2009; Marsh et al., 2010; Leknes et al., 2012; Shahrestani et al., 2013; Fang et al., 2014; Leppanen et al., 2017; Shin et al., 2018; Schwaiger et al., 2019), we investigated in three behavioral studies whether intranasal oxytocin has differential effects on the recognition of different categories of micro-expressions (i.e., micro-expressions of the six basic emotions: sadness, surprise, anger, disgust, fear, and happiness; e.g., Matsumoto and Hwang, 2011; Shen et al., 2012; Hurley et al., 2014; Svetieva and Frank, 2016; Wu et al., 2016; Demetrioff et al., 2017; Zeng et al., 2018; Zhang et al., 2020a). Specifically, in Studies 1 and 2, we tested the effects of oxytocin on the recognition of standardized intense (Study 1) and subtle (Study 2) micro-expressions. In Study 3, we further examined the effects of oxytocin on the recognition of natural micro-expressions. ...
Article
Full-text available
As fleeting facial expressions which reveal the emotion that a person tries to conceal, micro-expressions have great application potential in fields like security, national defense, and medical treatment. However, the physiological basis for the recognition of these facial expressions is poorly understood. In the present research, we utilized a double-blind, placebo-controlled, mixed-model experimental design to investigate the effects of oxytocin on the recognition of micro-expressions in three behavioral studies. Specifically, in Studies 1 and 2, participants were asked to perform a laboratory-based standardized micro-expression recognition task after self-administration of a single dose of intranasal oxytocin (40 IU) or placebo (containing all ingredients except for the neuropeptide). In Study 3, we further examined the effects of oxytocin on the recognition of natural micro-expressions. The results showed that intranasal oxytocin decreased the recognition speed for standardized intense micro-expressions of surprise (Study 1) and decreased the recognition accuracy for standardized subtle micro-expressions of disgust (Study 2). The results of Study 3 further revealed that intranasal oxytocin administration significantly reduced the recognition accuracy for natural micro-expressions of surprise and disgust. The present research is the first to investigate the effects of oxytocin on micro-expression recognition. It suggests that oxytocin mainly plays an inhibitory role in the recognition of micro-expressions and that there are fundamental differences in the neurophysiological basis for the recognition of micro-expressions and macro-expressions.
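To make the mixed-model design above concrete, here is a minimal sketch of how such data could be analyzed in Python. This is not the authors' analysis code: the long-format data layout, the column names, and the use of the pingouin library are all assumptions for illustration.

```python
import pandas as pd
import pingouin as pg

# Hypothetical long-format data: one row per participant x emotion category.
df = pd.DataFrame({
    "subject":   [1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6],
    "treatment": ["oxytocin"] * 6 + ["placebo"] * 6,   # between-subjects factor
    "emotion":   ["surprise", "disgust"] * 6,          # within-subjects factor
    "accuracy":  [0.41, 0.38, 0.44, 0.35, 0.52, 0.47,
                  0.49, 0.50, 0.46, 0.55, 0.51, 0.48],
})

# Mixed ANOVA: treatment (between) x emotion (within) on recognition accuracy.
aov = pg.mixed_anova(data=df, dv="accuracy", within="emotion",
                     subject="subject", between="treatment")
print(aov[["Source", "F", "p-unc"]])
```

An emotion-specific oxytocin effect of the kind the studies test for would show up here as a significant treatment-by-emotion interaction term.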
... Due to the multitude of positive outcomes of ERA, several training programs to improve this skill in healthy adults have been developed (for an overview, see Blanch-Hartigan et al., 2012). These programs range in duration from one session (e.g., Training of Emotion Recognition Ability, TERA, Schlegel, Vicaria, et al., 2017; Microexpression Recognition Training Tool, MiX, Matsumoto & Hwang, 2011) to repeated sessions over several weeks (e.g., Microexpression Training Tool, METT, Hurley, 2012), and are either self-administered (e.g., Schlegel, Vicaria, et al., 2017) or involve an instructor (e.g., Hurley, 2012; Herpertz et al., 2016). Most trainings use a combination of practice and feedback; that is, participants are presented with emotional expressions, guess which emotion was being shown in each, and are informed about the correctness of their guesses (e.g., Hurley, 2012; Matsumoto & Hwang, 2011; Schlegel, Vicaria, et al., 2017). Some trainings additionally involve discussions between participants (e.g., Ruben et al., 2015) or instructions raising their awareness of the importance of ERA (Blanch-Hartigan, 2012). ...
... Although existing trainings typically lead to increased performance on standard ERA tests, very little is known about whether this improvement translates into higher effectiveness in real-life social interactions (Blanch-Hartigan et al., 2016). Matsumoto and Hwang (2011) found that department store employees received higher ratings of social and communicative skills on the job after completing a micro-expression recognition training, but to our knowledge no study has examined the effects of ERA training in a face-to-face interaction. The present study aims to close this gap in order to, first, gain a better understanding of the utility of ERA trainings and, second, investigate the assumed causal relationship between ERA and interpersonal outcomes. ...
Article
Emotion recognition ability (ERA) predicts more successful interpersonal interactions. However, it remains unknown whether ERA training can affect behaviors and improve social outcomes in such interactions. Here, 83 dyads of same-gender students completed either a self-administered 45 min ERA training based on audio-visual clips of 14 different emotions, or a control training about cloud types. All dyads then engaged in a face-to-face employee-recruiter negotiation about a job contract. Dyads trained in ERA reached more egalitarian economic outcomes, rated themselves and their partners as less competitive after the negotiation, and received more positive affect ratings as well as lower ratings on forcing from independent observers. Applications of the training in the context of work, education, and therapy are discussed.
... In addition to ordinary expression recognition, individuals' social functioning is also related to micro-expression (ME) recognition (Matsumoto & Hwang, 2011; Yin et al., 2016). ...
... MEs are defined as very brief (1/25 to 1/5 s) and involuntary facial expressions that express the true emotions that humans attempt to suppress or hide (Ekman & Friesen, 1975; Haggard & Isaacs, 1966). MEs can neither be forged nor controlled by consciousness, and thus reflect the processing of inner emotional information better than ordinary facial expressions (Ekman & Friesen, 1976; Ekman & O'Sullivan, 2006; Matsumoto & Hwang, 2011; Wu et al., 2010; Yin et al., 2016). Therefore, exploring ME recognition in individuals with IGD may more effectively reveal the underlying mechanism of their impaired social functioning. ...
... The JACBART not only eliminates the influence of the visual aftereffect of the target stimulus, but also simulates the appearance of MEs in real life, where MEs are usually concealed by ordinary expressions. Moreover, several studies have reported that the test has good validity and reliability (Matsumoto & Hwang, 2011; Zhu et al., 2019). ...
Article
Full-text available
Previous studies have found that poor social functioning is related to impaired facial expression recognition in individuals with Internet gaming disorder (IGD). However, these studies have focused on ordinary facial expression recognition and have not investigated micro-expression (ME) recognition. Thus, this study aimed to explore whether individuals with IGD have impairments in ME recognition and its psychological mechanism. In this study, 60 individuals with IGD and 60 healthy controls (HCs) were recruited to test their ME recognition ability using the Japanese and Caucasian Brief Affect Recognition Test (JACBART). Furthermore, their levels of IGD, depression, anxiety, and social anxiety were measured. The results were as follows: (1) the accuracy of recognizing MEs in individuals with IGD was significantly lower than that in HCs, and the reaction time (RT) in individuals with IGD was significantly longer than that in HCs; (2) the accuracy of recognizing happy MEs was significantly lower than that of recognizing angry MEs in individuals with IGD; (3) the score in the Interaction Anxiousness Scale was negatively correlated with the accuracy of recognizing happy MEs but positively correlated with the accuracy of recognizing angry MEs in individuals with IGD. These results implied that individuals with IGD had an overall impairment in ME recognition and a more significant impairment in the recognition of happy MEs; meanwhile, impairment in recognizing happy MEs was associated with social anxiety.
... Subtle expressions are partial expressions of suppressed or masked affect, displayed with only fragments of the prototypical expression musculature. Unlike microexpressions, their presentation is longer in duration, but they are also more ambiguous (Ekman, 2003a; Matsumoto & Hwang, 2011). While few studies have researched subtle expressions, EBA proponents have suggested that their recognition does relate to veracity judgements (e.g., Matsumoto et al., 2014; Warren et al., 2009). ...
... If emotional cues generalise to all deceptive situations, training decoders to detect them should improve their overall lie-catching ability (Ekman, 2009). This assertion has been bolstered by findings showing that micro- and subtle expression identification can improve with training (Ekman & Friesen, 1974; Hurley, 2012; Matsumoto & Hwang, 2011). Furthermore, deception training providing information about how to classify emotions shows positive effects on accuracy (Ekman et al., 1991; Frank & Ekman, 1997; Shaw et al., 2013). ...
... As such, we cannot assess the impact of the ERT on recognition rates; this also prohibits us from analysing the relationship between ERT and lie-types (e.g., how SETT scores correlate with emotional lie detection as in Warren et al., 2009). Nonetheless, our protocols mirror the standards in the field (e.g., Jordan et al., 2019; Warren et al., 2009), employing tools which produce reliable effects (e.g., Hurley, 2012; Matsumoto & Hwang, 2011; McDonald et al., 2018). Second, the videos were not coded for emotional cues, nor did we question decoders about whether they had relied on such information, given that self-reports rarely provide accurate insights into judgement processes. ...
Article
Full-text available
People hold strong beliefs about the role of emotional cues in detecting deception. While research on the diagnostic value of such cues has been mixed, their influence on human veracity judgments is yet to be fully explored. Here, we address the relationship between emotional information and veracity judgments. In Study 1, the role of emotion recognition in the process of detecting naturalistic lies was investigated. Decoders’ veracity judgments were compared based on differences in trait empathy and their ability to recognize micro-expressions and subtle expressions. Accuracy was found to be unrelated to facial cue recognition and negatively related to empathy. In Study 2, we manipulated decoders’ emotion recognition ability and the type of lies they saw: experiential or affective (emotional and unemotional). Decoders either received emotion recognition training, bogus training, or no training. In all scenarios, training did not affect veracity judgments. Experiential lies were easier to detect than affective lies; however, affective unemotional lies were overall the hardest to judge. The findings illustrate the complex relationship between emotion recognition and veracity judgments, with abilities for facial cue detection being high yet unrelated to deception accuracy.
... The neutral images presented before and after the microexpression eliminate its visual aftereffects. Using the JACBART, researchers found that participants could easily recognize ordinary expressions, but recognizing the microexpressions was very difficult, with accuracies usually of 45–59% [4, 5]. ...
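The trial structure described above, a brief target expression sandwiched between longer neutral images of the same poser, can be sketched in a few lines. This is a hedged illustration rather than the official JACBART implementation: the image file names, the 2 s neutral durations, and the 0.2 s target duration are assumptions for demonstration.

```python
from psychopy import visual, core, event

win = visual.Window(size=(800, 600), color="grey", units="pix")

neutral = visual.ImageStim(win, image="poser01_neutral.png")  # hypothetical file
target  = visual.ImageStim(win, image="poser01_anger.png")    # hypothetical file

# Neutral -> brief target expression -> neutral, masking visual aftereffects.
for stim, duration in [(neutral, 2.0), (target, 0.2), (neutral, 2.0)]:
    stim.draw()
    win.flip()
    core.wait(duration)

win.flip()  # clear the screen before the response prompt
# Forced-choice emotion judgment, one key per emotion category.
response = event.waitKeys(keyList=["1", "2", "3", "4", "5", "6", "7"])
print("emotion judged:", response[0])
win.close()
```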
... Depression: Participants with expression recognition disorders (e.g., alexithymia and schizophrenia) also show deficits in microexpression recognition [5, 11]. Liu, Huang, Wang, Gong and Han [13] found that depressive patients tend to misjudge neutral or even positive expressions as negative, suggesting that depressive patients have difficulty recognizing ordinary expressions. ...
... Micro-expressions have great research significance and have been studied in the field of psychology for many years [1,2,3,4,5]. Micro-expressions have been proven to be effective in reflecting people's real emotions, and recognizing micro-expressions is valuable for many applications, including lie detection [6,7], medical diagnosis [5], public safety [8], and so on [9,10]. ...
... The number of frames was unified to 10 using TIM, and each frame was resized to 168 × 136 (grayscale); thus, each video was normalized to 168 × 136 × 10. The quantization number N was set to 2, which is equivalent to binarization according to Equation 8; other parameter settings were 3 and 4, with Alpha = 8, 9, 10, 11, 12. The CASME II database is the largest spontaneous micro-expression database, published by the Institute of Psychology, Chinese Academy of Sciences, and its participants come from China. ...
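As a rough illustration of the normalization step described above, the sketch below resizes each frame to 168 × 136 grayscale and interpolates the clip to 10 frames along the time axis. Note that the real TIM (Temporal Interpolation Model) uses a graph-embedding interpolation; simple per-pixel linear interpolation is substituted here as an approximation.

```python
import numpy as np
import cv2

def normalize_clip(frames, n_out=10, size=(136, 168)):  # (width, height) for cv2
    """frames: list of BGR images (at least 2); returns (n_out, 168, 136) array."""
    gray = [cv2.cvtColor(f, cv2.COLOR_BGR2GRAY) for f in frames]
    resized = np.stack([cv2.resize(g, size) for g in gray])  # (T, 168, 136)
    t_in = np.linspace(0.0, 1.0, num=len(resized))
    t_out = np.linspace(0.0, 1.0, num=n_out)
    out = np.empty((n_out,) + resized.shape[1:], dtype=np.float32)
    for i, t in enumerate(t_out):
        # Locate the surrounding input frames and blend them linearly.
        j = min(np.searchsorted(t_in, t, side="right") - 1, len(t_in) - 2)
        w = (t - t_in[j]) / (t_in[j + 1] - t_in[j])
        out[i] = (1 - w) * resized[j] + w * resized[j + 1]
    return out
```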
Preprint
Micro-expressions can reflect people's real emotions. Recognizing micro-expressions is difficult because they involve small motions and have a short duration. As research into micro-expression recognition has deepened, many effective features and methods have been proposed. To determine which direction of movement feature makes micro-expressions easier to distinguish, this paper selects 18 directions (including three types of horizontal, vertical and oblique movements) and proposes a new low-dimensional feature called the Histogram of Single Direction Gradient (HSDG) to study this topic. HSDG in every direction is concatenated with LBP-TOP to obtain LBP with Single Direction Gradient (LBP-SDG), which is used to analyze which direction of movement feature is more discriminative for micro-expression recognition. As in some existing work, Euler Video Magnification (EVM) is employed as a preprocessing step. The experiments on the CASME II and SMIC-HS databases identify the effective and optimal directions and demonstrate that HSDG in an optimal direction is discriminative, with the corresponding LBP-SDG achieving state-of-the-art performance using EVM.
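The core idea of a single-direction gradient histogram can be sketched as follows. The paper's exact gradient definition, binning, and normalization are not reproduced here, so treat this as an interpretation of the concept: project each frame's spatial gradient onto one chosen movement direction and histogram the result.

```python
import numpy as np

def single_direction_gradient_hist(clip, angle_deg, n_bins=8):
    """clip: (T, H, W) grayscale array; angle_deg: candidate movement direction."""
    theta = np.deg2rad(angle_deg)
    dx, dy = np.cos(theta), np.sin(theta)
    hists = []
    for frame in clip.astype(np.float32):
        gy, gx = np.gradient(frame)          # spatial gradients (rows, cols)
        proj = gx * dx + gy * dy             # component along the single direction
        hist, _ = np.histogram(proj, bins=n_bins)
        hists.append(hist / proj.size)       # normalized per-frame histogram
    return np.concatenate(hists)             # low-dimensional directional feature
```

Sweeping `angle_deg` over the 18 candidate directions and comparing downstream recognition accuracy mirrors, in spirit, how an optimal direction would be identified.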
... Microexpressions are extremely quick facial expressions of emotion that appear on the face. Individuals can be trained to better recognize these expressions, and such skills can benefit law enforcement, medicine, security, and other professions that must read people (Frank & Svetieva, 2015; Matsumoto & Hwang, 2011; Hurley, 2012; Svetieva & Frank, 2016). ...
... Although MEs occur on a person's face for only a fraction of a second, according to the emotion being experienced, this emotional leakage exposes enough cues to understand the person's true feelings. MEs thus span only a few frames, as they last between 1/25 and 1/5 of a second [2] [3]. Micro expression recognition (MER) is a challenging task due to the low intensity and rapid movement of facial muscles. ...
Preprint
Full-text available
Micro expression recognition (MER) is a very challenging task, as the expression is very short-lived and demands feature modeling involving both spatial and temporal dynamics. Existing MER systems exploit CNN networks to spot the significant features of minor muscle movements and subtle changes. However, existing networks fail to establish a relationship between spatial features of facial appearance and temporal variations of facial dynamics. Thus, these networks cannot effectively capture minute variations and subtle changes in expressive regions. To address these issues, we introduce an active imaging concept to aggregate the active changes in the expressive regions of a video into a single frame while preserving facial appearance information. Moreover, we propose a shallow CNN network: a hybrid local receptive field based augmented learning network (OrigiNet) that efficiently learns significant features of the micro-expressions in a video. We also propose a new refined rectified linear unit (RReLU), which overcomes the problems of vanishing gradients and dying ReLU. RReLU extends the range of derivatives compared to existing activation functions; it not only injects a nonlinearity but also captures the true edges by imposing additive and multiplicative properties. Furthermore, we present an augmented feature learning block to improve the learning capabilities of the network by embedding two parallel fully connected layers. The performance of the proposed OrigiNet is evaluated by conducting leave-one-subject-out experiments on four comprehensive ME datasets. The experimental results demonstrate that OrigiNet outperforms state-of-the-art techniques with lower computational complexity.
... has not yet been systematized or standardized as a lie detection tool, although it is used as the basis for a training program on deception [83]. The lack of standardization in the use of EDT predictions to detect lies can be seen as a weakness of the method. ...
Article
Full-text available
Lying is ubiquitous in every society. However, in forensic contexts lies must be revealed so that investigations and judgments can be fair and effective. For this reason, distinct verbal and nonverbal tools of lie detection were examined. CBCA and RM showed the best performance among verbal tools in distinguishing between truth and lies. A lack of empirical support means that SCAN is not recommended for lie detection applications. Moreover, studies have shown that people guided by BAI are less accurate in detecting lies than untrained people. Ekman's Deception Theory (EDT) yielded more effective predictions about nonverbal deception cues than BAI. However, the lack of standardization in the use of EDT predictions to detect lies can be seen as a weakness of the method. Future efforts may be aimed at developing a tool that uses both verbal and nonverbal predictions to obtain greater accuracy in detecting lies than currently available methods.
... Synergology takes several characteristics of emotions into account, since many phenomena influence their appearance on the face. Chimeras (1998). The concept of the chimera refers to the notion of the microexpression (Matsumoto, Keltner, Shiota, O'Sullivan, & Frank, 2008; Hurley, 2012; Hurley, Anker, Frank, Matsumoto, & Hwang, 2014; Matsumoto & Hwang, 2011). It corresponds to the facial expression of an explicit but fleeting emotion (1/16th of a second). ...
Experiment Findings
Meet with researchers in synergology to ask them about their research methodology. Compare the synergological research methodology with the scientific method.
... The duration of MEs is very short, generally less than 500 milliseconds (ms) [23], [10]. The close connection between MEs and deception gives research in this area great significance for many applications, such as medical care [3] and law enforcement [4]. ...
Conference Paper
Full-text available
This paper presents baseline results for the Third Facial Micro-Expression Grand Challenge (MEGC 2020). Both macro- and micro-expression intervals in CAS(ME)² and SAMM Long Videos are spotted by employing the method of Main Directional Maximal Difference Analysis (MDMD). The MDMD method uses the magnitude maximal difference in the main direction of optical flow features to spot facial movements. The single-frame prediction results of the original MDMD method are post-processed into reasonable video intervals. The metric F1-scores of the baseline results are as follows: for CAS(ME)², the F1-scores are 0.1196 and 0.0082 for macro- and micro-expressions respectively, and the overall F1-score is 0.0376; for SAMM Long Videos, the F1-scores are 0.0629 and 0.0364 for macro- and micro-expressions respectively, and the overall F1-score is 0.0445. The baseline project code is publicly available at https://github.com/HeyingGithub/Baseline-project-for-MEGC2020_spotting.
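For readers unfamiliar with how spotting F1-scores like those above are produced, here is a small sketch under the common MEGC convention that a predicted interval counts as a true positive when its intersection-over-union (IoU) with a ground-truth interval is at least 0.5. The challenge's official evaluation code may differ in details.

```python
def iou(a, b):
    """a, b: (onset, offset) frame intervals, inclusive."""
    inter = max(0, min(a[1], b[1]) - max(a[0], b[0]) + 1)
    union = (a[1] - a[0] + 1) + (b[1] - b[0] + 1) - inter
    return inter / union

def spotting_f1(predicted, ground_truth, thr=0.5):
    matched, tp = set(), 0
    for p in predicted:
        for i, g in enumerate(ground_truth):
            if i not in matched and iou(p, g) >= thr:
                matched.add(i)   # each ground-truth interval matches once
                tp += 1
                break
    fp = len(predicted) - tp
    fn = len(ground_truth) - tp
    return 2 * tp / (2 * tp + fp + fn) if tp else 0.0

# One hit, one spurious prediction, one miss -> F1 = 0.5
print(spotting_f1([(10, 30), (100, 120)], [(12, 28), (400, 420)]))
```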
... However, it is also time-consuming, much like classroom observation (Ubben, Salisbury, & Daniel, 2019). Nevertheless, in recent years, students' facial microexpression states (FMES) have been revealed as a viable real-time predictor of students' performance in a conceptual conflict-based science learning scenario (Chiu, Chou, Wu, & Liaw, 2014; Chiu, Liaw, Yu, & Chou, 2019; Matsumoto & Hwang, 2011). Such a revelation offers researchers a much-needed alternative approach when they wish to gather real-time and direct reactions from learners. ...
Article
Full-text available
Kinematics is an important but challenging area in physics. In previously published works of the current research project, it was revealed that there is a significant relationship between facial microexpression states (FMES) changes and conceptual conflict-induced conceptual change. Consequently, the current study integrated FMES into a kinematics multiple representation instructional scenario to investigate if FMES could be used to help construct students’ conceptual paths, and help predict students’ learning outcome. Analysis revealed that types of students’ FMES (neutral, surprised, positive, and negative) were important in helping instructors predict students’ learning outcomes. Findings showed that exhibiting negative FMES through all three major representation segments of the instructional process (i.e., scientific demonstration, textual instruction, and animated instruction) suggests a higher probability of conceptual change among students with sufficient background knowledge on the topic. For students with insufficient prior knowledge, the result was the opposite. Moreover, animated representation was found to be critical to the prediction of student conceptual change. In sum, the results showed FMES as a viable indicator for conceptual change in kinematics, and also reaffirmed the importance of prior knowledge and representations of scientific concepts.
... Micro expressions are very brief facial displays in response to emotions (Ekman, 2009). Evidence supports the claim that people can learn to recognize micro expressions through training (Matsumoto & Hwang, 2011). It has been argued that through micro expression recognition people can detect deception by discerning between genuine and fake emotional displays (Ekman, 2009). ...
Article
Full-text available
Objective: Investigation of deception within psychotherapy has recently gained attention. Micro expression training software has been suggested to improve deception detection and enhance emotion recognition. The current study examined the effects of micro expression training software on deception detection and emotion recognition. Method: The current study recruited 23 counseling psychology graduate students and 32 undergraduate students and randomly assigned them to either a training group or a control group. The training and control groups received the same materials and measures pre- and post-test, with the training group differing only by receiving the micro expression training. Results: Findings revealed no significant difference in deception detection between the control group and the training group. The training did reveal significant improvement in emotion recognition, specifically for contempt, anger, and fear. State and trait anxiety did not predict deception detection, nor did they mediate emotion recognition. No significant difference was found between graduate trainees and undergraduate students. Conclusion: The use of the F.A.C.E. software was not effective for increasing deception detection but did serve to increase emotion recognition. Implications for training, practice, and research are discussed.
... Oddly, the few empirical investigations of microexpressions focus on training methods to improve their recognition, which do find positive results (Hurley, 2012; Matsumoto & Hwang, 2011), while studies arguing for a link between microexpressions and deception detection are only correlational in nature. From such studies, in specific scenarios, individual differences in microexpression recognition accuracy have shown positive correlations with deception detection, such as for emotional lies (Ekman & O'Sullivan, 1991) and mock crimes (Frank & Ekman, 1997). ...
Chapter
Full-text available
The function of facial expressions of emotions in detecting deception has been a hotly debated topic. One side argues that liars and truth-tellers display different facial expressions, which can be used as diagnostic cues of deceit. The other argues that such cues are rare, unpredictable, and ambiguous, and as such are unreliable for detecting deception. This chapter overviews facial expressions in deception detection, separating their alleged diagnostic value as cues to deception from their role as strategic affective signals in human communication. Building upon our current understanding and research in the deception and emotion fields, I elaborate on relevant but underdeveloped concepts and address how the process of detecting lies can be influenced by facial expressions of emotions. I critically evaluate several assumptions of the emotion-based approach to detecting deception, illustrating the limitations of this view. A strong emphasis is placed on expanding the role of facial expressions in deception by considering both the encoder-decoder and the affective-signaling perspectives. I propose a careful distinction between genuine cues and deceptive cues, considering the importance of emotional authenticity and sender intent. Finally, I consider the role of facial expressions of emotion in human veracity judgment and future directions for the field of emotion and deception in light of the current propositions, including recent proposals to use automated lie detection tools based on facial expressions of emotion. I argue that caution must be applied to such techniques, elaborating on the flawed underpinnings guiding their decisions, and offer considerations for the future of this research.
... bluffing in poker or business interactions. Evidence has shown that human beings are not good at classifying emotion [3][4][5]. An image that one person classifies as showing a particular emotion might be classified as showing a different emotion by another person. ...
Chapter
Full-text available
Emotion categorization is the process of identifying different emotions in humans based on their facial expressions. It takes time, and it is sometimes hard for human classifiers to agree with each other about the emotion category of a facial expression. However, machine learning classifiers have done well in classifying different emotions and have been widely used in recent years to facilitate the task of emotion categorization. Much research on emotion video databases uses a few frames from when the emotion is expressed at its peak to classify emotion, which might not give good classification accuracy when predicting frames where the emotion is less intense. In this paper, using the CK+ emotion dataset as an example, we use more frames to analyze emotion, drawing on both mid and peak frame images, and compare our results to a method using fewer peak frames. Furthermore, we propose an approach based on sequential voting and apply it to more frames of the CK+ database. Our approach resulted in up to 85.9% accuracy for the mid frames and an overall accuracy of 96.5% for the CK+ database, compared with accuracies of 73.4% and 93.8% from existing techniques.
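A minimal reading of the sequential-voting idea: classify every frame of a clip, then let the clip label be the most frequent frame-level prediction. The paper's exact voting scheme may differ; this sketch only shows the basic pattern.

```python
from collections import Counter

def vote_sequence(frame_predictions):
    """frame_predictions: per-frame emotion labels for one video clip."""
    counts = Counter(frame_predictions)
    label, _ = counts.most_common(1)[0]  # majority (plurality) vote
    return label

print(vote_sequence(["happy", "happy", "neutral", "happy", "surprise"]))
# -> "happy": the clip label is the emotion predicted most often
```

Voting over many mid-intensity frames, rather than relying on a few peak frames, is what lets a per-frame classifier's occasional mistakes on subtle frames be outvoted.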
... While AHAA provides new avenues for more fine-grained and subtle expression analysis, certain use cases might fail to translate to future research. For example, it is unlikely that micro-expressions (Ekman, 2009; Matsumoto and Hwang, 2011) offer a promising theoretical approach towards a better understanding of expression dynamics in consumer research. Micro-expressions refer to brief displays (20–500 ms) argued to "leak" an individual's true emotional state before the expression can be actively controlled (Ekman and Friesen, 1969). ...
Preprint
The ability to automatically assess emotional responses via contact-free video recording taps into a rapidly growing market aimed at predicting consumer choices. If consumer attention and engagement are measurable in a reliable and accessible manner, relevant marketing decisions could be informed by objective data. Although significant advances have been made in automatic affect recognition, several practical and theoretical issues remain largely unresolved. These concern the lack of cross-system validation, a historical emphasis of posed over spontaneous expressions, as well as more fundamental issues regarding the weak association between subjective experience and facial expressions. To address these limitations, the present paper argues that extant commercial and free facial expression classifiers should be rigorously validated in cross-system research. Furthermore, academics and practitioners must better leverage fine-grained emotional response dynamics, with stronger emphasis on understanding naturally occurring spontaneous expressions. We posit that applied consumer research might be better situated to examine facial behavior in socio-emotional contexts rather than decontextualized, laboratory studies, and highlight how AHAA can be successfully employed in this context. Also, facial activity should be considered less as a single outcome variable, and more as promising input for two-step machine learning in combination with other (multimodal) features. We illustrate this point in a case study using facial activity as input features to predict crying behavior in response to sad movies. Implications of this approach and potential obstacles that need to be overcome are discussed within the context of consumer research.
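The "two-step machine learning" idea in the closing case study can be sketched as follows: step one (assumed here, not shown) produces facial-activity features such as per-participant action-unit intensities from any expression classifier; step two learns to predict the behavioral outcome from them. All feature names and data below are illustrative assumptions, not the paper's pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Step 1 output (assumed): mean intensities of a few facial action units
# per participant, e.g. AU1, AU4, AU15, AU17, while watching a sad film.
X = rng.random((60, 4))
y = rng.integers(0, 2, size=60)   # hypothetical label: 1 = participant cried

# Step 2: learn the mapping from facial activity to the behavioral outcome.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("CV ROC-AUC:", cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean())
```

The point of the two-step design is that facial activity serves as an input feature set for an outcome model, rather than being treated as the outcome variable itself.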
... The duration of MEs is very short, generally less than 500 milliseconds (ms) [23], [10]. The close connection between MEs and deception gives research in this area great significance for many applications, such as medical care [3] and law enforcement [4]. ...
Preprint
Full-text available
This paper presents baseline results for the Third Facial Micro-Expression Grand Challenge (MEGC 2020). Both macro- and micro-expression intervals in CAS(ME)$^2$ and SAMM Long Videos are spotted by employing the method of Main Directional Maximal Difference Analysis (MDMD). The MDMD method uses the magnitude maximal difference in the main direction of optical flow features to spot facial movements. The single-frame prediction results of the original MDMD method are post-processed into reasonable video intervals. The metric F1-scores of baseline results are evaluated: for CAS(ME)$^2$, the F1-scores are 0.1196 and 0.0082 for macro- and micro-expressions respectively, and the overall F1-score is 0.0376; for SAMM Long Videos, the F1-scores are 0.0629 and 0.0364 for macro- and micro-expressions respectively, and the overall F1-score is 0.0445. The baseline project code is publicly available at https://github.com/HeyingGithub/Baseline-project-for-MEGC2020_spotting.
... Synergology takes several characteristics of emotions into account, since many phenomena influence their appearance on the face. Chimeras (1998). The concept of the chimera refers to the notion of the microexpression (Matsumoto, Keltner, Shiota, O'Sullivan, & Frank, 2008; Hurley, 2012; Hurley, Anker, Frank, Matsumoto, & Hwang, 2014; Matsumoto & Hwang, 2011). It corresponds to the facial expression of an explicit but fleeting emotion (1/16th of a second). ...
Thesis
Full-text available
Presentation of synergology; analysis of the scientific status of synergology; professional application of synergology. This essay answers the following question: what contributions can we expect from synergology in psychotherapy? In addition, it situates synergology within the field of science and non-verbal communication. The author is a synergologist and psychologist. http://depot-e.uqtr.ca/id/eprint/9384/
... Due to the low intensity and short duration of micro-expressions [23,43], several factors need to be taken into account when analyzing micro-expression datasets: frame rate, resolution, emotional induction, category labeling, and sample distribution. Since the duration of a micro-expression is very short, usually only 1/25 s to 1/3 s, the SMIC, CASME II, and SAMM databases collect data with high-speed cameras. ...
Article
Full-text available
The sample category distribution of spontaneous facial micro-expression datasets is unbalanced, due to the experimental environment, collection equipment, and individual differences among subjects, which brings great challenges to micro-expression recognition. Therefore, this paper introduces a micro-expression recognition model based on the Hierarchical Support Vector Machine (H-SVM) to reduce the interference of sample category distribution imbalance. First, we calculate the position of the apex frame in the micro-expression image sequence. To keep micro-expression frames balanced, we sparsely sample the image sequence according to the apex frame. Then, low-level descriptors of the region of interest of the micro-expression image sequence and high-level descriptors of the apex frame are extracted. Finally, the H-SVM model is used to classify the fused features of the different levels. The experimental results on SMIC, CASME II, SAMM, and their composite datasets show that our method can achieve superior performance in micro-expression recognition.
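The cascading idea behind a hierarchical SVM can be illustrated with two levels of scikit-learn classifiers: a first SVM separates coarse emotion groups, then a second SVM refines the decision within each group. The coarse grouping, features, and hierarchy of the paper's H-SVM are not specified here, so this is a generic sketch, not the authors' model.

```python
import numpy as np
from sklearn.svm import SVC

# Assumed coarse grouping of fine-grained labels (illustrative only).
coarse_of = {"happiness": "positive", "disgust": "negative",
             "repression": "negative", "surprise": "surprise"}

def train_hsvm(X, fine_labels):
    coarse = [coarse_of[l] for l in fine_labels]
    top = SVC(kernel="linear").fit(X, coarse)        # level 1: coarse groups
    sub = {}
    for group in set(coarse):
        idx = [i for i, c in enumerate(coarse) if c == group]
        fines = [fine_labels[i] for i in idx]
        if len(set(fines)) > 1:                      # level 2: refine in-group
            sub[group] = SVC(kernel="linear").fit(X[idx], fines)
        else:
            sub[group] = fines[0]                    # degenerate single-class leaf
    return top, sub

def predict_hsvm(top, sub, x):
    group = top.predict(x.reshape(1, -1))[0]
    leaf = sub[group]
    return leaf if isinstance(leaf, str) else leaf.predict(x.reshape(1, -1))[0]

X = np.array([[0., 1.], [1., 0.], [0., .9], [5., 5.], [.9, .1], [5.2, 4.8]])
labels = ["happiness", "disgust", "happiness", "surprise", "repression", "surprise"]
top, sub = train_hsvm(X, labels)
print(predict_hsvm(top, sub, np.array([0.1, 0.95])))
```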
... The frame rate was taken from each video instead of being fixed. Generally, humans complete an entire process of emotional change and reach its peak within 10 s [14,38]. Consequently, to reduce redundant information and computational burden, we re-edited the videos longer than 10 s. ...
Article
Full-text available
The study of affective computing in the wild setting is underpinned by databases. Existing multimodal emotion databases in the real-world conditions are few and small, with a limited number of subjects and expressed in a single language. To meet this requirement, we collected, annotated, and prepared to release a new natural state video database (called HEU Emotion). HEU Emotion contains a total of 19,004 video clips, which is divided into two parts according to the data source. The first part contains videos downloaded from Tumblr, Google, and Giphy, including 10 emotions and two modalities (facial expression and body posture). The second part includes corpus taken manually from movies, TV series, and variety shows, consisting of 10 emotions and three modalities (facial expression, body posture, and emotional speech). HEU Emotion is by far the most extensive multimodal emotional database with 9951 subjects. In order to provide a benchmark for emotion recognition, we used many conventional machine learning and deep learning methods to evaluate HEU Emotion. We proposed a multimodal attention module to fuse multimodal features adaptively. After multimodal fusion, the recognition accuracies for the two parts increased by 2.19% and 4.01%, respectively, over those of single-modal facial expression recognition.
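A hedged PyTorch sketch of attention-based multimodal fusion in the spirit of the module described above: each modality's feature vector receives a learned weight, and the fused representation is the weighted sum. The actual HEU Emotion module is likely more elaborate; the feature dimension and modality set below are assumptions.

```python
import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(dim, 1)   # one attention score per modality

    def forward(self, feats):            # feats: (batch, n_modalities, dim)
        weights = torch.softmax(self.score(feats), dim=1)  # (B, M, 1)
        return (weights * feats).sum(dim=1)                # fused (B, dim)

fusion = AttentionFusion(dim=256)
# Hypothetical 256-d features from face, body posture, and speech encoders.
face, pose, speech = (torch.randn(8, 256) for _ in range(3))
fused = fusion(torch.stack([face, pose, speech], dim=1))
print(fused.shape)  # torch.Size([8, 256])
```

Because the weights are computed per sample, the fusion can adaptively lean on whichever modality is most informative for a given clip, which is the intuition behind the reported accuracy gains.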
... Understanding MiEs helps to identify deception and the true mental condition of a person. Unlike macro facial expressions (MaEs), which typically last for 0.5–4 seconds [Matsumoto and Hwang 2011] and thus can be immediately recognized by humans, MiEs generally last less than 0.2 seconds and are very subtle [Ekman 2009, Warren et al. 2009], which makes them difficult to spot and recognize. To improve people's capacity to identify and recognize MiEs, researchers in psychology developed training for specialists using the Micro Expression Training Tools [Ekman 2002]. ...
Thesis
Full-text available
Facial expression analysis is an important problem in many biometric tasks, such as face recognition, face animation, affective computing and human computer interfaces. In this thesis, we aim at analyzing facial expressions using images and video sequences. We divided the problem into three leading parts. First, we study Macro Facial Expressions for Emotion Recognition and propose three different levels of feature representations: low-level features through a Bag of Visual Words model, mid-level features through Sparse Representation, and hierarchical features through a Deep Learning based method. The objective of doing this is to find the most effective and efficient representation that contains distinctive information of expressions and that overcomes various challenges coming from: 1) intrinsic factors such as appearance and expressiveness variability and 2) extrinsic factors such as illumination, pose, scale and imaging parameters, e.g., resolution, focus, and imaging noise. Then, we incorporate the temporal dimension to extract spatio-temporal features, with the objective of describing subtle feature deformations to discriminate ambiguous classes. Second, we direct our research toward transfer learning, where we aim at Adapting Facial Expression Models to New Domains and Tasks. Thus we study domain adaptation and zero-shot learning for developing a method that solves the two tasks jointly. Our method is suitable for unlabelled target datasets coming from different data distributions than the source domain and for unlabelled target datasets with different label distributions but sharing the same context as the source domain. Therefore, to permit knowledge transfer between domains and tasks, we use Euclidean learning and Convolutional Neural Networks to design a mapping function that maps the visual information coming from facial expressions into a semantic space coming from a Natural Language model that encodes the visual attribute description or uses the label information. The consistency between the two subspaces is maximized by aligning them using the visual feature distribution. Third, we study Micro Facial Expression Detection. We propose an algorithm to spot micro-expression segments, including the onset and offset frames, and to spatially pinpoint in each image the regions involved in the micro-facial muscle movements. The problem is formulated as Anomaly Detection, due to the fact that micro-expressions occur infrequently and thus yield little data compared to natural facial behaviours. In this manner, first, we propose a deep Recurrent Convolutional Auto-Encoder to capture spatial and motion feature changes of natural facial behaviours. Then, a statistical model for estimating the probability density function of normal facial behaviours, while associating a discriminating score to spot micro-expressions, is learned based on a Gaussian Mixture Model. Finally, an adaptive thresholding technique for identifying micro-expressions from natural facial behaviours is proposed. Our algorithms are tested on deliberate and spontaneous facial expression benchmarks.
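The anomaly-detection formulation in the third part can be sketched with scikit-learn: fit a Gaussian Mixture Model to features of ordinary facial behaviour, then flag low-likelihood samples as candidate micro-expressions. The features, component count, and threshold rule below are illustrative assumptions, not the thesis's actual auto-encoder features or adaptive thresholding scheme.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Hypothetical 16-d feature vectors of ordinary (normal) facial behaviour.
normal_feats = rng.normal(0.0, 1.0, size=(500, 16))
# Test set: 95 ordinary segments plus 5 outlying (anomalous) segments.
test_feats = np.vstack([rng.normal(0.0, 1.0, (95, 16)),
                        rng.normal(4.0, 1.0, (5, 16))])

gmm = GaussianMixture(n_components=4, random_state=0).fit(normal_feats)
scores = gmm.score_samples(test_feats)        # per-sample log-likelihoods
# Simple stand-in for adaptive thresholding: 5th percentile of normal scores.
threshold = np.percentile(gmm.score_samples(normal_feats), 5)
print("flagged segments:", np.where(scores < threshold)[0])
```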
... Despite its broad commercial use, there are only a few peer-reviewed studies about Ekman et al.'s training or micro expression trainings in general. Matsumoto and Hwang (2011) were the first to systematically research whether micro expressions could be trained. In two randomized controlled studies, they found significantly higher micro expression ERA immediately after Ekman et al.'s (Paul Ekman Group, 2020a) training as well as 2-3 weeks after training. ...
Article
Full-text available
Nonverbal emotion recognition accuracy (ERA) is a central feature of successful communication and interaction, and is of importance for many professions. We developed and evaluated two ERA training programs—one focusing on dynamic multimodal expressions (audio, video, audio-video) and one focusing on facial micro expressions. Sixty-seven subjects were randomized to one of two experimental groups (multimodal, micro expression) or an active control group (emotional working memory task). Participants trained once weekly with a brief computerized training program for three consecutive weeks. Pre-post outcome measures consisted of a multimodal ERA task, a micro expression recognition task, and a task about patients' emotional cues. Post measurement took place approximately a week after the last training session. Non-parametric mixed analyses of variance using the Aligned Rank Transform were used to evaluate the effectiveness of the training programs. Results showed that multimodal training was significantly more effective in improving multimodal ERA compared to micro expression training or the control training; and the micro expression training was significantly more effective in improving micro expression ERA compared to the other two training conditions. Both pre-post effects can be interpreted as large. No group differences were found for the outcome measure about recognizing patients' emotion cues. There were no transfer effects of the training programs, meaning that participants only improved significantly for the specific facet of ERA that they had trained on. Further, low baseline ERA was associated with larger ERA improvements. Results are discussed with regard to methodological and conceptual aspects, and practical implications and future directions are explored.
... According to Professor Matsumoto, there are two nerve pathways that originate from different parts of the brain (Matsumoto & Hwang, 2011). The pyramidal pathway controls voluntary facial movements, while the extrapyramidal pathway directs involuntary emotional activities. ...
Preprint
Full-text available
Artificial intelligence (AI) has become a part of everyday conversation and our lives. It is considered the new electricity that is revolutionizing the world. Both industry and academia invest heavily in AI. However, there is also a lot of hype in the current AI debate. AI based on so-called deep learning has achieved impressive results in many problems, but its limits are already visible. AI has been under research since the 1940s, and the industry has seen many ups and downs due to over-expectations and the related disappointments that have followed. The purpose of this book is to give a realistic picture of AI, its history, its potential and its limitations. We believe that AI is a helper, not a ruler of humans. We begin by describing what AI is and how it has evolved over the decades. After the fundamentals, we explain the importance of massive data for the current mainstream of artificial intelligence. The most common representations for AI, methods, and machine learning are covered. In addition, the main application areas are introduced. Computer vision has been central to the development of AI. The book provides a general introduction to computer vision and includes an exposure to the results and applications of our own research. Emotions are central to human intelligence, but little use has been made of them in AI. We present the basics of emotional intelligence and our own research on the topic. We discuss super-intelligence that transcends human understanding, explaining why such an achievement seems impossible on the basis of present knowledge, and how AI could be improved. Finally, a summary is made of the current state of AI and what to do in the future. In the appendix, we look at the development of AI education, especially from the perspective of content at our own university.
... Moreover, the subcortical circuit, which is located in the subcortical areas of the brain, is primarily responsible for spontaneous facial expressions (i.e., involuntary emotion). When people attempt to conceal or restrain their expressions in an intensely emotional situation, both systems are likely to be activated, resulting in the fleeting leakage of genuine emotions in the form of micro-expressions [15] (Figure 2). Throughout this paper, we will focus on spontaneous micro-expressions. ...
Preprint
Unlike the conventional facial expressions, micro-expressions are involuntary and transient facial expressions capable of revealing the genuine emotions that people attempt to hide. Therefore, they can provide important information in a broad range of applications such as lie detection, criminal detection, etc. Since micro-expressions are transient and of low intensity, however, their detection and recognition is difficult and relies heavily on expert experiences. Due to its intrinsic particularity and complexity, video-based micro-expression analysis is attractive but challenging, and has recently become an active area of research. Although there have been numerous developments in this area, thus far there has been no comprehensive survey that provides researchers with a systematic overview of these developments with a unified evaluation. Accordingly, in this survey paper, we first highlight the key differences between macro- and micro-expressions, then use these differences to guide our research survey of video-based micro-expression analysis in a cascaded structure, encompassing the neuropsychological basis, datasets, features, spotting algorithms, recognition algorithms, applications and evaluation of state-of-the-art approaches. For each aspect, the basic techniques, advanced developments and major challenges are addressed and discussed. Furthermore, after considering the limitations of existing micro-expression datasets, we present and release a new dataset - called micro-and-macro expression warehouse (MMEW) - containing more video samples and more labeled emotion types. We then perform a unified comparison of representative methods on CAS(ME)2 for spotting, and on MMEW and SAMM for recognition, respectively. Finally, some potential future research directions are explored and outlined.
... Emotion perception accuracy can be improved by training with feedback about the correct answer (Elfenbein, 2006; Blanch-Hartigan, 2012; Ruben et al., 2015). This can also be applied to the recognition of micro-expressions, which result from efforts to control the expression of emotions and last only a very short time (Matsumoto & Hwang, 2011). Earlier findings (Costanzo, 1992; Grinspan, Hemphill, & Nowicki, 2003) even suggest that improvement occurs through solving recognition tasks alone, without knowing the correct answer. ...
Book
Full-text available
This book contains peer-reviewed papers presented at the various conferences organized by the Athens Institute for Education and Research (ATINER), especially its Social Sciences Research Division. Social science is currently at a crossroads. The subjects covered are so diverse and the methodologies so different that it is very difficult to compare results in order to advance knowledge about society. If we add the diversity of societies, then communication between social scientists becomes a thorny issue. The same social issue or problem is viewed differently depending on the country of origin of the principal investigator. However, bringing these social scientists together makes communication easier, and one can only hope that this is for the best of social science research, useful to all societies of the modern turbulent world that we live in. This is exactly the mission of ATINER, i.e., to bring social scientists together in the historic city of Athens in order to discuss the current developments and the future prospects of social science research. This book includes 17 essays written by social scientists coming from 12 different countries and five continents (Russia, USA, Brazil, Turkey, India, Georgia, South Africa, Slovakia, Italy, Latvia, UK, and Germany). The same dispersion is noted in the topics covered. It is an anthology of essays determined only by the specific interests of the authors. The papers are organized into two sections: the first on society, which includes five papers, and the second on behavior and attitude, with eleven papers. Chapter one can be considered an introductory chapter on issues concerning society. Kenneth Smith examines the concept of the collective consciousness of society based on the sociology of Durkheim, one of the three modern founders of the discipline of social science (the other two being Marx and Weber). The term "modern" is used to highlight that social science was a favoured subject of the ancient Greek philosophers, especially Plato and Aristotle; the former can be considered the father of the social sciences. Many ideas developed in the following chapters of this book depend on foundations of social analysis such as the one described in this chapter. Chapter two deals with Russia's disabled people, with an emphasis on vocational training. Olga Borodkina, based on empirical research, argues that inclusive education is lagging behind in Russia. Most Russian universities cannot accommodate disabled people. The author concludes that there have been improvements but that further changes are needed, requiring the collaboration of the state, educational institutions, society, and people with special needs. Chapter three examines Latvian society through the prism of its ethnic and religious diversity. Julija Stare states that Latvia is one of the most diverse countries in the Baltics and Europe. The concepts of ethnicity and religion are examined by the author in terms of their interaction to produce a hybrid identity. The conclusion is that cultural diversity contributes to the creation of new, hybrid, and different forms of identity. Chapter four investigates Turkish society. Ayça Yılmaz Deniz looks at Turkey's working conditions, using qualitative research based on interviews with 44 workers. The conclusions show that although employees do not take part in collective resistance, they develop individual resistance strategies, which she analyses by referring to Bhabha's conception of mimicry.
Chapter five looks at an important issue relating to the spin-off between university research and the business sector. Sabrina Moretti & Francesco Sacchetti investigate the Italian case using interviews with academics, examining how these researchers have reconciled the market demand for research with the academic objective of producing new knowledge. In chapter six, Deborah Zuercher, Jon Yoshioka & Teresa Rishel deal with teacher quality using an experiment from two islands: Guam and American Samoa. Content Area Specialized Teacher (CAST) facilitators are used to promote professional development. The authors discuss the results of their study and provide recommendations. Chapter seven is the first of the second part of the book, which deals with behavior and attitudes. Sonia Sirtoli Färber looks at thanatology and mourning, which includes different types of losses. Chapter eight is an application of applied behavior analysis. Ishhita Gupta, Shefali Thaman & S. P. K. Jena used two case studies to evaluate the intervention of differential reinforcement of other behavior. A number of important conclusions are drawn and suggestions are made for future research. Chapter nine examines the aggressive behavior of 262 hospital employees. Susan M. Stewart finds that dispositional aggressiveness was related to all forms of organizational injustice and workplace deviance. Chapter ten looks at the behaviour of Oedipus through the lens of Loewald's "Waning of the Oedipus Complex". The author, Zelda G. Knight, claims that going back to the original interpretation opens up different psychoanalytic perspectives regarding the process of "growing up, growing old, and in between the two". Chapter eleven investigates the psychometric properties of the Death Obsession Scale (DOS) using a sample of South African university students. Solomon Mashegoane & Simon Moripe conclude that further studies are needed to test the basic hypotheses and the scale of measurement. Chapter twelve examines emotions in human relations. Tomáš Sollár, Jana Turzáková, Martina Romanová & Andrea Solgajová review the literature on reading emotions through facial expressions. A sample of psychology and nursing students in Slovakia was tested, and the results were analyzed according to standardized manifestations of basic emotions and neutral expressions. Various implications for education and training are discussed. Chapter thirteen measures passive discrimination using a lost-letter technique, which assesses a community's attitudes towards groups and institutions. William Phillips, Afshin Gharib and Matt Davis sampled people from the USA, Poland, Italy and Germany. The basic hypothesis is that people are more likely to mail a letter addressed to an individual or organization that they feel neutral or positive about than one addressed to an organization or person they feel negatively about. They showed that there was no difference between the two names. Chapter fourteen uses a scale incorporating the three components of an attitude: cognitive, affective, and behavioral. Nino Javakhishvili, Johann F. Schneider, Ana Makashvili and Natia Kochlashvili discuss this scale in terms of empirical evidence and conclude that "social distance and tolerance scales are good measures of ethnic attitudes and values". Chapter fifteen studies integrated personality in a nontherapeutic setting.
Using a sample of public employees, Eva Sollárová and Tomáš Sollár show that more integrated persons choose proactively oriented strategies and that highly integrated individuals are more likely to act proactively and create life opportunities. Chapter sixteen examines social workers: Ergun Hasgul and Ayse Sezen Serpen study empathy among a sample of Turkish social workers. They found that female social workers possess more empathic skills than their male counterparts. Chapter seventeen uses evidence from Brazil to examine the rights of persons with disabilities. Raclene Ataide de Faria found that the group of people with intellectual disabilities is heterogeneous and that their self-representation creates a positive self-image of themselves as students in regular schools.
... Furthermore, the intensity of the facial muscle movements of microexpressions is very low, because they are elicited when people try to hide their emotions and suppress their expressions. Moreover, only part of the facial muscle movements of a typical facial expression is found in microexpressions (Ito, Murano, & Gomi, 2004; Matsumoto & Hwang, 2011; Yan, Wu, Liang, Chen, & Fu, 2013). According to the results obtained by Frank, Herbasz, Sinuk, Keller, and Nolan (2009), even highly trained individuals can distinguish five types of microexpressions with an accuracy of as low as 47%. ...
Article
Facial microexpressions are defined as brief, subtle, and involuntary movements of facial muscles reflecting genuine emotions that a person tries to conceal. Because microexpressions are involuntary and uncontrollable, automatic detection of microexpressions and recognition of emotions reflected in the microexpressions can be used in various applications. With the advancement of artificial-intelligence-based non-face-to-face interviews and computer-assisted treatment of mood disorders, the need for developing a technique to precisely detect microexpressions is gradually increasing. In this study, we developed facial electromyography (fEMG)- and electroencephalography (EEG)-based methods for the detection of microexpressions and recognition of emotions reflected in microexpressions as a potential alternative to computer vision-based methods. We first assessed the performance of microexpression detection, and then evaluated the performance of classification of the emotions reflected in the microexpressions. In our experiments with 16 participants, six discrete emotions could be classified using support vector machine with the best F1 score of 0.971 when optimal fEMG and EEG channels were selected, demonstrating the potential usability of the fEMG- and EEG-based emotion recognition method in practical scenarios. It is noteworthy that EEG was more useful for classifying discrete emotions compared to fEMG (best F1 scores: EEG–0.962; fEMG–0.797). To the best of our knowledge, this is the first study to estimate emotions reflected in facial microexpressions using EEG.
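The pipeline this abstract describes lends itself to a compact illustration. Below is a minimal Python sketch of that kind of workflow: select informative channels/features, train a support vector machine, and score it with macro F1. The synthetic feature matrix, the channel and class counts, and the selection step are all placeholder assumptions, not the authors' data or code.

```python
# Minimal sketch of an fEMG/EEG emotion-classification pipeline of the kind
# described above, assuming precomputed per-trial feature vectors. The data,
# feature counts, and selection step are placeholders, not the authors' code.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(96, 64))    # 96 trials x 64 fEMG/EEG features (synthetic)
y = rng.integers(0, 6, size=96)  # six discrete emotion labels

# Feature (channel) selection followed by an RBF-kernel SVM, scored with macro F1
clf = make_pipeline(StandardScaler(), SelectKBest(f_classif, k=16), SVC(kernel="rbf"))
scores = cross_val_score(clf, X, y, cv=4, scoring="f1_macro")
print(f"macro F1: {scores.mean():.3f} +/- {scores.std():.3f}")
```

On real recordings the selection step would operate on physiologically meaningful channel features rather than synthetic columns, but the structure of the pipeline is the same.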
... Ekman [5] pointed out that human beings unconsciously show their true emotions for between 1/25 and 1/5 s, and that these emotions can be judged from micro-expressions. Because it can reflect people's true emotions, micro-expression recognition has great application value in medical diagnosis, public safety, international exchanges, negotiations, and so on [3,26,29,31,39,43]. However, micro-expression recognition still presents many challenges. ...
Article
Full-text available
Micro-expression recognition has important research value and poses great research difficulties. Local Binary Pattern from Three Orthogonal Planes (LBP-TOP) is a common and effective feature in micro-expression recognition. However, LBP-TOP extracts only the dynamic texture features in the horizontal and vertical directions and does not consider muscle movement in the oblique direction. In this paper, features in oblique directions are studied, and a new feature called Local Binary Pattern from Five Intersecting Planes (LBP-FIP) is proposed by analyzing the movement direction of facial muscles in micro-expression videos. LBP-FIP concatenates the proposed Eight Vertices LBP (EVLBP), extracted from two planes in the oblique direction, with LBP-TOP extracted from three planes. In this way, the dynamic texture features in the oblique direction are extracted more directly. On the CASME II and SMIC databases, we evaluated the proposed feature and the effectiveness of features in the oblique direction. Extensive experiments prove that LBP-FIP provides more effective feature information than LBP-TOP and that extracting features in oblique directions is discriminative for recognizing micro-expressions. LBP-FIP also has advantages compared with other LBP-based features and achieves satisfactory performance, especially on CASME II.
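For readers unfamiliar with the LBP-TOP baseline that LBP-FIP extends, the sketch below computes basic 8-neighbour LBP histograms on the XY, XT and YT planes of a grey-level video cube and concatenates them. The single mid-plane slices, the 3x3 sampling, and the random video are simplifying assumptions; real implementations aggregate over many slices and spatial blocks.

```python
# Compact sketch of the LBP-TOP idea that LBP-FIP extends: 8-neighbour LBP
# histograms on the XY, XT and YT planes of a video cube, concatenated into
# one descriptor. Toy data and single mid-plane slices, for illustration only.
import numpy as np

def lbp_hist(plane):
    """Basic 3x3 LBP histogram (256 bins) for one 2-D plane."""
    c = plane[1:-1, 1:-1]
    neighbours = [plane[0:-2, 0:-2], plane[0:-2, 1:-1], plane[0:-2, 2:],
                  plane[1:-1, 2:],   plane[2:,   2:],   plane[2:,   1:-1],
                  plane[2:,   0:-2], plane[1:-1, 0:-2]]
    codes = np.zeros(c.shape, dtype=np.uint8)
    for bit, n in enumerate(neighbours):
        codes |= (n >= c).astype(np.uint8) << bit  # set one bit per neighbour
    return np.bincount(codes.ravel(), minlength=256)

def lbp_top(video):
    """video: (T, H, W) grey-level cube -> concatenated XY/XT/YT histograms."""
    t, h, w = video.shape
    xy = lbp_hist(video[t // 2])        # spatial texture, middle frame
    xt = lbp_hist(video[:, h // 2, :])  # horizontal motion texture
    yt = lbp_hist(video[:, :, w // 2])  # vertical motion texture
    return np.concatenate([xy, xt, yt])

video = np.random.default_rng(1).integers(0, 256, size=(30, 64, 64))
print(lbp_top(video).shape)  # (768,)
```

LBP-FIP's contribution, per the abstract, is to add planes cut in oblique directions to this three-plane scheme.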
... Previous studies have highlighted the impact of these types of elements [i.e., type of task, nature of the stimuli (e.g., static vs. dynamic), intensity of expressions] on facial expression processing in individuals with ASD (Speer et al., 2007; Guillon et al., 2014; Mouga et al., 2021; Nagy et al., 2021). Future studies on similar issues should carefully consider these elements in order to produce enough variation to optimize the evaluation of an effect, for example by increasing the presentation speed (Matsumoto and Hwang, 2011). ...
Article
Full-text available
Processing and recognizing facial expressions are key factors in human social interaction. Past research suggests that individuals with autism spectrum disorder (ASD) have difficulty decoding facial expressions. Those difficulties are notably attributed to altered strategies in the visual scanning of expressive faces. Numerous studies have demonstrated the multiple benefits of exposure to pet dogs and service dogs on the interaction skills and psychosocial development of children with ASD. However, no study has investigated whether those benefits also extend to the processing of facial expressions. The aim of this study was to investigate whether having a service dog influences the facial expression processing skills of children with ASD. Two groups of 15 children with ASD, with and without a service dog, were compared using a facial expression recognition computer task while their ocular movements were measured with an eye-tracker. While the two groups did not differ in accuracy and reaction time, results highlighted that children with ASD owning a service dog directed less attention toward areas that were not relevant to facial expression processing. They also displayed a more differentiated scanning of relevant facial features according to the displayed emotion (i.e., they spent more time on the mouth for joy than for anger, and vice versa for the eye area). Results from the present study suggest that having a service dog and interacting with it on a daily basis may promote the development of specific visual exploration strategies for the processing of human faces.
... The short duration of micro-expressions is the main feature distinguishing them from general facial expressions (also referred to as macro-expressions), and research suggests that the generally accepted upper limit on duration is 1/2 s [3], [6]. In addition, the occurrence of micro-expressions is characterized by low intensity and localization, so it is generally difficult for people to detect or notice them with the naked eye. ...
Preprint
Micro-expressions are spontaneous, unconscious facial movements that reveal people's true inner emotions and have great potential in related fields of psychological testing. Since the face is a 3D deformable object, an expression causes spatial deformation of the face; however, the available databases are limited to 2D videos, which lack a description of the 3D spatial information of micro-expressions. Therefore, we propose a new micro-expression database containing 2D video sequences and 3D point-cloud sequences. The database includes 259 micro-expression sequences, and these samples were classified using an objective method based on the facial action coding system, as well as a non-objective method that combines video content and participants' self-reports. We extracted facial 2D and 3D features using local binary patterns on three orthogonal planes and curvature descriptors, respectively, and performed baseline evaluations of the two features and their fusion with leave-one-subject-out (LOSO) and 10-fold cross-validation methods. The best fusion performances were 58.84% and 73.03% for non-objective classification and 66.36% and 77.42% for objective classification, both improvements over using LBP-TOP features alone. The database offers original and cropped micro-expression samples, which will facilitate the exploration of and research on 3D spatio-temporal features of micro-expressions.
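The leave-one-subject-out (LOSO) protocol mentioned in this and several other abstracts is straightforward to express with scikit-learn's grouped cross-validation, as in the hedged sketch below. The features, labels, subject assignment, and choice of a linear SVM are synthetic stand-ins, not the database's actual baseline.

```python
# Sketch of the leave-one-subject-out (LOSO) protocol: each fold holds out
# every sample from one subject. Data and classifier are illustrative only.
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.svm import LinearSVC

rng = np.random.default_rng(2)
X = rng.normal(size=(120, 50))           # fused 2-D + 3-D descriptors (synthetic)
y = rng.integers(0, 4, size=120)         # micro-expression class labels
subjects = np.repeat(np.arange(12), 10)  # 12 subjects, 10 samples each

logo = LeaveOneGroupOut()
acc = cross_val_score(LinearSVC(), X, y, cv=logo, groups=subjects)
print(f"LOSO accuracy: {acc.mean():.3f}")
```

LOSO is preferred in micro-expression work precisely because samples from the same subject are highly correlated; random folds would leak subject identity into the test set.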
... Facial micromovement is also called microexpression. Micromovements last less than a second, with vibration lasting between 0.04 and 0.5 s [12][13][14]. Meanwhile, in a typical interaction, an emotional expression begins and ends with a macroexpression that occurs in less than 4 s [15]. The degree of movement or the vibration of the facial muscles can differ significantly between real and fake expressions [11]. ...
Article
Full-text available
People tend to display fake expressions to conceal their true feelings. False expressions are observable by facial micromovements that occur for less than a second. Systems designed to recognize facial expressions (e.g., social robots, recognition systems for the blind, monitoring systems for drivers) may better understand the user’s intent by identifying the authenticity of the expression. The present study investigated the characteristics of real and fake facial expressions of representative emotions (happiness, contentment, anger, and sadness) in a two-dimensional emotion model. Participants viewed a series of visual stimuli designed to induce real or fake emotions and were signaled to produce a facial expression at a set time. From the participant’s expression data, feature variables (i.e., the degree and variance of movement, and vibration level) involving the facial micromovements at the onset of the expression were analyzed. The results indicated significant differences in the feature variables between the real and fake expression conditions. The differences varied according to facial regions as a function of emotions. This study provides appraisal criteria for identifying the authenticity of facial expressions that are applicable to future research and the design of emotion recognition systems.
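The feature variables named in this abstract (degree of movement, its variance, and vibration level) can be illustrated on a single landmark trajectory. The sketch below is one plausible operationalization, assuming a 30 fps trajectory and a 2-12.5 Hz vibration band; the paper's exact definitions may differ.

```python
# Hedged sketch of three micromovement feature variables (degree of movement,
# its variance, and vibration level) computed from one facial landmark
# trajectory; the trajectory, frame rate, and frequency band are assumptions.
import numpy as np

def micromovement_features(xy, fps=30.0):
    """xy: (T, 2) landmark positions over time."""
    step = np.linalg.norm(np.diff(xy, axis=0), axis=1)  # frame-to-frame displacement
    degree = step.sum()                                 # total amount of movement
    variance = step.var()                               # variability of movement
    # vibration: energy in the 2-12.5 Hz band of the displacement signal
    spec = np.abs(np.fft.rfft(step - step.mean())) ** 2
    freqs = np.fft.rfftfreq(step.size, d=1.0 / fps)
    vibration = spec[(freqs >= 2.0) & (freqs <= 12.5)].sum()
    return degree, variance, vibration

t = np.linspace(0, 1, 30)
xy = np.column_stack([np.sin(8 * np.pi * t), np.cos(8 * np.pi * t)]) * 0.5
print(micromovement_features(xy))
```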
... The FERP test consisted of 48 trials where participants viewed neutral and emotional photographic images taken from the NimStim set of normed, multicultural male and female facial emotion expressions (Tottenham et al., 2009). We designed the FERP test using a neutral-emotion-neutral presentation of faces (Matsumoto & Hwang, 2011) with each trial first presenting a face showing a neutral expression for 1000 ms, followed by an emotional image of the same face presented for 1000 ms showing one of the six facial emotion expressions of sadness, happiness, fear, surprise, disgust, or anger (Ekman, 2003), followed by another 1000 ms of the same face in a neutral expression. The participant was then asked to identify from a multiple-choice list which of the six emotions had been presented. ...
Article
Empathy is critical for human interactions to become shared and meaningful, and it is facilitated by the expression and processing of facial emotions. Deficits in empathy and facial emotion recognition are associated with individuals with autism spectrum disorder (ASD), with specific concerns over inaccurate recognition of facial emotion expressions conveying a threat. Yet, the number of evidenced interventions for facial emotion recognition and processing (FERP), emotion, and empathy remains limited, particularly for adults with ASD. Transcranial direct current stimulation (tDCS), a noninvasive brain stimulation, may be a promising treatment modality to safely accelerate or enhance treatment interventions to increase their efficacy. Methods: This study investigates the effectiveness of FERP, emotion, and empathy treatment interventions paired with tDCS for adults with ASD. Verum or sham tDCS was randomly assigned in a within-subjects, double-blinded design with seven adults with ASD without intellectual disability. Outcomes were measured using scores from the Empathy Quotient (EQ) and a FERP test for both verum and sham tDCS. Results: Verum tDCS significantly improved EQ scores and FERP scores for emotions that conveyed threat. Conclusions: These results suggest the potential for increasing the efficacy of treatment interventions by pairing them with tDCS for individuals with ASD.
... This has proven useful as a tool in pain assessment in non-verbal humans such as infants [29]. Even facial expressions of durations less than 0.5 s may be interpreted [30]. Social ungulates, such as sheep and horses, also use facial visual cues for recognition of identity and emotional state of conspecifics [31]. ...
Article
Full-text available
Automated recognition of human facial expressions of pain and emotions is to a certain degree a solved problem, using approaches based on computer vision and machine learning. However, the application of such methods to horses has proven difficult. Major barriers are the lack of sufficiently large, annotated databases for horses and difficulties in obtaining correct classifications of pain because horses are non-verbal. This review describes our work to overcome these barriers, using two different approaches. One involves the use of a manual, but relatively objective, classification system for facial activity (Facial Action Coding System), where data are analyzed for pain expressions after coding using machine learning principles. We have devised tools that can aid manual labeling by identifying the faces and facial keypoints of horses. This approach provides promising results in the automated recognition of facial action units from images. The second approach, recurrent neural network end-to-end learning, requires less extraction of features and representations from the video but instead depends on large volumes of video data with ground truth. Our preliminary results suggest clearly that dynamics are important for pain recognition and show that combinations of recurrent neural networks can classify experimental pain in a small number of horses better than human raters.
Article
As a branch of affective computing and machine learning, recognizing micro-expressions is more difficult than recognizing macro-expressions because a micro-expression has small motion and short duration. A large number of features and methods have been proposed, and feature extraction is a critical focus of research. For improving performance, feature fusion is an effective strategy that involves two groups of features, and the two groups usually differ in properties such as discriminability, distribution, and dimension. In addition, the extracted features usually carry redundant or misleading information. Thus, before feature fusion, an algorithm is needed that can automatically learn and select discriminative features from two groups of different features. In this paper, we propose a kernelized two-group sparse learning (KTGSL) model to automatically learn more discriminative features from two groups of features. We propose two strategies for learning the weights: in the first, the weights of one group of features are fixed and not learned; in the second, both groups of weights are learned and given different penalty coefficients, which allows the interrelation between the two groups of features to be adjusted flexibly. This is the first work to select discriminative features from two groups of features in micro-expression recognition. The experiments are conducted on three datasets (CASME II, SMIC and SAMM). The experimental results show that our method can automatically select discriminative features from two groups of features and achieve state-of-the-art performance.
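The core idea of penalizing two feature groups differently can be sketched without the kernelization: the toy below runs ISTA (proximal gradient) on a least-squares loss with per-group L1 coefficients lam1 and lam2. It illustrates the two-group penalty idea only and is not a reproduction of the KTGSL model.

```python
# Toy two-group sparse selection: ISTA on 0.5*||Xw - y||^2 + sum_i lam_i*|w_i|,
# where features in group 1 get penalty lam1 and group 2 gets lam2. A sketch
# of the differing-penalty idea, not the paper's kernelized model.
import numpy as np

def two_group_lasso(X1, X2, y, lam1=0.1, lam2=0.5, iters=500):
    X = np.hstack([X1, X2])
    lam = np.concatenate([np.full(X1.shape[1], lam1), np.full(X2.shape[1], lam2)])
    lr = 1.0 / np.linalg.norm(X, 2) ** 2   # step size from the Lipschitz constant
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        grad = X.T @ (X @ w - y)
        z = w - lr * grad
        w = np.sign(z) * np.maximum(np.abs(z) - lr * lam, 0.0)  # soft threshold
    return w

rng = np.random.default_rng(3)
X1, X2 = rng.normal(size=(80, 10)), rng.normal(size=(80, 15))
y = X1[:, 0] - 2 * X2[:, 3] + 0.1 * rng.normal(size=80)
w = two_group_lasso(X1, X2, y)
print("selected features:", np.nonzero(np.abs(w) > 1e-6)[0])
```

Raising lam2 relative to lam1 shifts the selection pressure onto the second group, which is the adjustable interrelation the abstract refers to.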
Chapter
Social interaction involves the exchange of feelings, and the human face, emitting facial signals, is essential in this process. Our skill at identifying truth from falsehood in facial appearance allows us to interact intelligently and to adapt to our constantly changing social environment. The question this chapter raises concerns the versatility of social intelligence in deceitfulness and in the control of body language, including facial features. On the other hand, do people have the ability and social skills to interpret observed expressions accurately, deciphering truthfulness from falsehood? Interaction is dynamic, and as socially intelligent humans, we often occupy the roles of both deceiver and truth seeker.
Article
In cognitive science, the real-time recognition of a human's emotional state is pertinent for machine emotional intelligence and human-machine interaction. Conventional emotion recognition systems use subjective feedback questionnaires, analysis of facial features from videos, and online sentiment analysis. This research proposes a system for real-time detection of emotions in response to emotional movie clips. The clips elicit emotions in humans, and while they are viewed we record brain signals using an electroencephalogram (EEG) device and analyze the viewer's emotional state. This work considered four classes of emotions (happy, calm, fear, and sadness). The method leverages the Fast Fourier Transform (FFT) for feature extraction and Genetic Programming (GP) for classification of EEG data. Experiments were conducted on EEG data acquired with a single dry-electrode device, the NeuroSky MindWave 2. To collect data, a standardized database of 23 emotional Hindi film clips was used; each clip individually induces a different emotion, and data collection was based on the emotions elicited, as the clips contain emotionally inductive scenes. Twenty participants took part in this study and volunteered for data collection. The system classifies the four discrete emotions (happy, calm, fear, and sadness) with an average accuracy of 89.14%. These results demonstrate improvements over state-of-the-art methods and affirm the potential of our method for recognizing these emotions.
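The FFT feature-extraction step is the most transferable part of this pipeline and is sketched below: compute band powers (theta, alpha, beta, gamma) from each EEG epoch and feed them to a classifier. For brevity a random forest stands in for the paper's genetic-programming stage, and the sampling rate, band limits, and synthetic single-channel epochs are assumptions.

```python
# Sketch of FFT band-power extraction from single-channel EEG epochs; a
# random forest replaces the paper's genetic-programming classifier here,
# and all data and constants are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

FS = 512  # sampling rate in Hz (an assumption for this sketch)
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}

def band_powers(epoch):
    spec = np.abs(np.fft.rfft(epoch)) ** 2
    freqs = np.fft.rfftfreq(epoch.size, d=1.0 / FS)
    return [spec[(freqs >= lo) & (freqs < hi)].mean() for lo, hi in BANDS.values()]

rng = np.random.default_rng(4)
epochs = rng.normal(size=(80, 4 * FS))  # 80 four-second single-channel epochs
y = rng.integers(0, 4, size=80)         # happy / calm / fear / sad labels
X = np.array([band_powers(e) for e in epochs])
print(cross_val_score(RandomForestClassifier(random_state=0), X, y, cv=4).mean())
```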
Article
Blunted facial affect is a transdiagnostic component of Serious Mental Illness (SMI) and is associated with a host of negative outcomes. However, blunted facial affect is a poorly understood phenomenon, with no known cures or treatments. A critical step in better understanding its phenotypic expression involves clarifying which facial expressions are altered in specific ways and under what contexts. The current literature suggests that individuals with SMI show decreased positive facial expressions, but typical, or even increased, negative facial expressions during laboratory tasks. While this literature has coalesced around general trends, significantly more nuance is available regarding which components of facial expressions are atypical and how those components are associated with increased severity of clinical ratings. The present project leveraged computerized facial analysis to test whether clinician-rated blunted affect is driven by decreases in duration, intensity, or frequency of positive versus other facial expressions during a structured clinical interview. Stable outpatients meeting criteria for SMI (N = 59) were examined. Facial expression did not generally vary as a function of clinical diagnosis. Overall, clinically rated blunted affect was not associated with positive expressions, but was associated with decreased surprise and increased anger, sadness, and fear expressions. Blunted affect is not a monolithic lack of expressivity, and increased precision in operationally defining it is critical for uncovering its causes and maintaining factors. Our discussion focuses on this effort, and on advancing digital phenotyping of blunted facial affect more generally.
Article
Full-text available
Micro-expressions are deliberate or unconscious movements arising from people's psychological activities, reflecting transient true facial expressions. Previous works focus on the whole face for micro-expression recognition. These methods can extract many feature vectors that are either relevant or irrelevant to micro-expression recognition. Besides, high-dimensional feature vectors result in longer computation time and increased computational complexity. To address these problems, we propose a new framework that combines local-region division and feature selection. Based on the proposed framework, the original images retain the most useful regions and the invalid components of the feature vectors are filtered out. Specifically, with the joint efforts of the facial deformation identification model and the facial action coding system, the global region is divided into seven local regions with their corresponding action units. The ReliefF algorithm is used to select effective components of the feature vectors and reduce their dimension. To evaluate the proposed framework, we conduct experiments on both the Chinese Academy of Sciences Micro-expression II Database and the Spontaneous Micro-expression Database with the Leave-One-Subject-Out Cross Validation method. The results show that performance in the combined local regions outperforms its counterpart in the global region, and that recognition accuracy is further improved with the addition of feature selection. Index terms: micro-expression recognition, local region division, feature selection, ReliefF algorithm.
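ReliefF, the selection step used in this framework, weights features by how well they separate each sampled instance from its nearest neighbour of a different class relative to its nearest neighbour of the same class. The sketch below is a simplified textbook variant (one nearest hit and one nearest miss, Manhattan distance), not the paper's exact implementation.

```python
# Simplified ReliefF-style feature weighting: features that differ more from
# the nearest other-class sample than from the nearest same-class sample get
# higher weights. A textbook sketch, not the paper's implementation.
import numpy as np

def relieff(X, y, n_iters=100, seed=5):
    rng = np.random.default_rng(seed)
    X = (X - X.min(axis=0)) / (np.ptp(X, axis=0) + 1e-12)  # scale to [0, 1]
    w = np.zeros(X.shape[1])
    for _ in range(n_iters):
        i = rng.integers(len(X))
        dists = np.abs(X - X[i]).sum(axis=1)
        dists[i] = np.inf                                     # exclude the instance itself
        hit = np.argmin(np.where(y == y[i], dists, np.inf))   # nearest same-class sample
        miss = np.argmin(np.where(y != y[i], dists, np.inf))  # nearest other-class sample
        w += np.abs(X[i] - X[miss]) - np.abs(X[i] - X[hit])
    return w / n_iters

rng = np.random.default_rng(5)
X = rng.normal(size=(200, 8))
y = (X[:, 2] + X[:, 5] > 0).astype(int)     # only features 2 and 5 carry signal
print(np.argsort(relieff(X, y))[::-1][:2])  # expect features 2 and 5 to rank highest
```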
Article
Micro-expression can reflect people’s real emotions. Recognizing micro-expressions is difficult because they are small motions and have a short duration. As the research is deepening into micro-expression recognition, many effective features and methods have been proposed. To determine which direction of movement feature is easier for distinguishing micro-expressions, this paper selects 18 directions (including three types of horizontal, vertical and oblique movements) and proposes a new low-dimensional feature called the Histogram of Single Direction Gradient (HSDG) to study this topic. In this paper, HSDG in every direction is concatenated with LBP-TOP to obtain the LBP with Single Direction Gradient (LBP-SDG) and analyze which direction of movement feature is more discriminative for micro-expression recognition. As with some existing work, Euler Video Magnification (EVM) is employed as a preprocessing step. The experiments on the CASME II and SMIC-HS databases summarize the effective and optimal directions and demonstrate that HSDG in an optimal direction is discriminative, and the corresponding LBP-SDG achieves state-of-the-art performance using EVM.
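The single-direction gradient idea behind HSDG can be illustrated by projecting a frame's intensity gradient onto one of 18 directions and histogramming the projections, as in the sketch below. The bin count, the per-frame (rather than per-video) computation, and the random frame are assumptions; the paper pairs the descriptor with LBP-TOP and EVM preprocessing, which are omitted here.

```python
# Sketch of a single-direction gradient histogram: project each pixel's
# intensity gradient onto one direction and histogram the projections.
# Illustrates the HSDG idea only; bin count and data are assumptions.
import numpy as np

def direction_gradient_hist(frame, theta_deg, bins=8):
    gy, gx = np.gradient(frame.astype(float))   # row (y) and column (x) gradients
    t = np.deg2rad(theta_deg)
    proj = gx * np.cos(t) + gy * np.sin(t)      # gradient component along one direction
    hist, _ = np.histogram(proj, bins=bins, range=(-255, 255))
    return hist / max(hist.sum(), 1)

frame = np.random.default_rng(6).integers(0, 256, size=(64, 64))
features = np.concatenate(
    [direction_gradient_hist(frame, d) for d in range(0, 180, 10)])  # 18 directions
print(features.shape)  # (144,)
```

In the paper's setup, each direction yields its own low-dimensional histogram, which is then concatenated with LBP-TOP to test which direction is most discriminative.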
Article
Lay abstract: Children and adults with autism spectrum disorder show difficulty recognizing facial emotions in others, which makes social interaction challenging. While there are many treatments developed to improve facial emotion recognition, there is no agreement on the best way to measure such abilities in individuals with autism spectrum disorder. The purpose of this review is to examine studies that were published between January 1998 and November 2019 and have measured change in facial emotion recognition to evaluate the effectiveness of different treatments. Our search yielded 65 studies, and within these studies, 36 different measures were used to evaluate facial emotion recognition in individuals with autism spectrum disorder. Only six of these measures, however, were used in different studies and by different investigators. In this review, we summarize the different measures and outcomes of the studies, in order to identify promising assessment tools and inform future research.
Article
Full-text available
Facial micro-expressions are short and imperceptible expressions that involuntarily reveal the true emotions a person may be attempting to suppress, hide, disguise, or conceal. Such expressions can reflect a person's real emotions and have a wide range of applications in public safety and clinical diagnosis. The analysis of facial micro-expressions in video sequences through computer vision is still relatively recent. In this research, a comprehensive review of the databases and methods used in micro-expression spotting and recognition is conducted, and advanced technologies in this area are summarized. In addition, we discuss challenges that remain unresolved, alongside future work to be completed in the field of micro-expression analysis.
Article
This paper proposes a novel approach for privacy-preserving facial recognition based on a new feature computation technique, Local Binary Pattern from Temporal Planes (LBP-TP), that extracts information from only the XT or YT planes of a video sequence, in contrast to previous work that depends significantly on spatial information within the video frames. To our knowledge, this is the first facial recognition work that neither relies on the spatial plane nor requires processing a facial input. The removal of this spatial reliance withholds facial appearance information from public view, as only one-dimensional spatial information varying across time is extracted for recognition. Privacy is thus assured, yet without impeding the facial recognition task, which is vital for many security applications such as street surveillance and perimeter access control. Experimental results indicate that the proposed method achieves accuracy of 99.56%, 98.19% and 100% on the recent CASME II, CAS(ME)² and Honda/UCSD databases, respectively. In addition, a 66% reduction in the number of bytes required for storage and recognition was observed in these experiments. The outcomes of this research demonstrate that privacy in face recognition can be preserved without compromising its security (i.e., recognition accuracy) and efficiency.
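The plane extraction that LBP-TP relies on is simple to show: from a (T, H, W) video cube, keep only temporal slices and discard the spatial XY frames that would reveal facial appearance. The sketch below does just that on toy data; the subsequent LBP coding and classifier are omitted.

```python
# Sketch of the temporal-plane extraction behind LBP-TP: keep XT or YT
# slices of a video cube and never expose an XY (appearance) frame.
# Toy data; LBP coding of the planes is omitted.
import numpy as np

def temporal_planes(video, axis="XT"):
    """Return all XT planes (one per row y) or YT planes (one per column x)."""
    t, h, w = video.shape
    if axis == "XT":
        return [video[:, y, :] for y in range(h)]  # each plane is (T, W)
    return [video[:, :, x] for x in range(w)]      # each plane is (T, H)

video = np.random.default_rng(7).integers(0, 256, size=(20, 32, 32))
planes = temporal_planes(video, "XT")
print(len(planes), planes[0].shape)  # 32 planes of shape (20, 32)
```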
Article
Full-text available
Automatic story generation systems usually deliver suspense by including an adverse outcome in the narrative, in the assumption that the adversity will trigger a certain set of emotions that can be categorized as suspenseful. However, existing systems do not implement solutions relying on predictive models of the impact of the outcome on readers. A formulation of the emotional effects of the outcome would allow storytelling systems to perform a better measure of suspense and discriminate among potential outcomes based on the emotional impact. This paper reports on a computational model of the effect of different outcomes on the perceived suspense. A preliminary analysis to identify and evaluate the affective responses to a set of outcomes commonly used in suspense was carried out. Then, a study was run to quantify and compare suspense and affective responses evoked by the set of outcomes. Next, a predictive model relying on the analyzed data was computed, and an evolutionary algorithm for automatically choosing the best outcome was implemented. The system was tested against human subjects' reported suspense and electromyography responses to the addition of the generated outcomes to narrative passages. The results show a high correlation between the predicted impact of the computed outcome and the reported suspense.
Chapter
Facial micro-expressions are fast, subtle, and involuntary facial movements that reveal a person's underlying emotions. These expressions last for only a fraction of a second, so it is difficult to fake them. Their subtleness poses a significant challenge to the naked eye; hence, much work and research has been devoted to detecting and recognizing facial micro-expressions. One of the challenges for the detection of micro-expressions is the scarcity of well-defined datasets; for now, three datasets are mainly used for their detection, i.e., SAMM, SMIC, and CASME II. In recent years much research has been done, and different machine learning and deep learning algorithms have been introduced for the detection of these micro-expressions. First, researchers extracted temporal features from a video for recognition; the optical flow method was also used with neural networks for detection. Nowadays, the apex frame within a video is used for spatial-temporal feature extraction, and the 3DCNN model is also being used. This chapter explains the Main Directional Maximal Differential Analysis (MDMD) technique. The MDMD method uses significant differences in the main direction of optical flow to detect facial motion. Instead of a single frame, three frames are used in the MDMD technique, and the optical flow method is applied between these sets of frames. The F1 scores obtained for micro-expression detection were 0.0376 for CAS(ME)2 and 0.04445 for SAMM.
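A minimal sketch of the main-direction computation at the heart of MDMD follows: estimate dense optical flow between two frames with OpenCV's Farneback method and take the magnitude-weighted dominant flow direction. The synthetic frames and the two-frame (rather than the chapter's three-frame) differencing are simplifications.

```python
# Sketch of a main-direction estimate from dense optical flow, the core
# quantity MDMD thresholds. Two synthetic frames stand in for real video,
# and the full three-frame MDMD scheme is not reproduced.
import cv2
import numpy as np

rng = np.random.default_rng(8)
prev_f = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
next_f = np.roll(prev_f, shift=2, axis=1)  # fake 2-px rightward motion

flow = cv2.calcOpticalFlowFarneback(prev_f, next_f, None,
                                    0.5, 3, 15, 3, 5, 1.2, 0)
mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
hist, edges = np.histogram(ang, bins=8, range=(0, 2 * np.pi), weights=mag)
main_dir = edges[np.argmax(hist)]          # dominant (magnitude-weighted) direction
print(f"main direction ~ {np.degrees(main_dir):.0f} degrees")
```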
Article
Full-text available
The credibility of testimony is a fundamental pillar in cases of child sexual abuse, where the only available evidence is frequently the testimony of the alleged victim. In some cases the victim has not developed sufficient verbal language to offer a testimony elaborate enough to be assessed by the expert psychologist. The objective of this review was therefore to describe and analyze the emotional cues, or microexpressions, used to make credibility judgments. A systematic literature review was carried out, including only empirical studies. It was concluded that the exclusive use of these cues to assess the credibility of testimony is not advisable, given their low frequency and the low accuracy rates found. Together with the other factors surrounding microexpressions, this represents a major limitation to the admissibility of these cues in forensic contexts as evidence for assessing the credibility of testimony.
Article
Full-text available
This study investigated the effectiveness of a social skills intervention targeting nonverbal communication for 8 adolescents with Asperger syndrome (AS) and related pervasive developmental delays. The Diagnostic Analysis of Nonverbal Accuracy 2 (DANVA2; Nowicki, 1997) was used as a pre- and posttest measure to assess participants' nonverbal language skills. During the 8-week social skills intervention, lessons were adapted from those presented in Teaching Your Child the Language of Social Success (Duke, Nowicki, & Martin, 1996). Training during the first 4 weeks targeted paralanguage (deciphering varying tones of voice and rates of speech, understanding nonverbal sound patterns, and gaining meaning from others' marked emphases in speech). The remaining 4 sessions focused on identifying and responding to the facial expressions of others. The following teaching strategies were employed throughout the social skills intervention: role-playing, modeling, and reinforcement through feedback. Results are discussed relative to social growth among participants.
Article
Full-text available
This research examined the role of personality, nonverbal skills, and gender as moderators of judging and being judged accurately in zero-acquaintance situations. Unacquainted participants, assembled in groups, completed a battery of personality tests, took 2 audiovisual tests (the Profile of Nonverbal Sensitivity [PONS] and the Interpersonal Perception Task [IPT]) intended to assess decoding skills and then rated themselves and every other person in the group on a set of personality dimensions. Results indicated that more sociable and extraverted participants tended to be more legible, that is, were judged more accurately. Participants who were more accurate judges tended to be less sociable and performed better on tests of decoding accuracy. Performance on the PONS predicted accuracy of judgment for men, whereas performance on the IPT predicted accuracy of judgment for women. On the whole, results suggest that some important and theoretically relevant moderators of accuracy in the zero-acquaintance situation have been identified. (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Article
Full-text available
Using meta-analysis, we find a consistent positive correlation between emotion recognition accuracy (ERA) and goal-oriented performance. However, this existing research relies primarily on subjective perceptions of performance. The current study tested the impact of ERA on objective performance in a mixed-motive buyer-seller negotiation exercise. Greater recognition of posed facial expressions predicted better objective outcomes for participants from Singapore playing the role of seller, both in terms of creating value and claiming a greater share for themselves. The present study is distinct from past research on the effects of individual differences on negotiation outcomes in that it uses a performance-based test rather than self-reported measure. These results add to evidence for the predictive validity of emotion recognition measures on practical outcomes.
Article
Full-text available
The Diagnostic Analysis of Nonverbal Accuracy (DANVA) was designed to measure individual differences in the accurate sending and receiving of nonverbal social information. The DANVA consists of four receptive and three expressive subtests that measure nonverbal processing accuracy in children from 6 to 10 years of age. Four propositions were offered to guide the gathering of construct validity data for the DANVA. In support of the propositions, researchers found that DANVA accuracy scores increased with age, were internally consistent and reliable over time, and showed significant relationships with indices of personal and social adjustment and academic achievement but were not related to IQ. Evidence for construct validity was stronger for receptive, as compared to expressive, subtests. Future research should include additional populations of subjects and study of the impact of intensity of emotion being sent or received.
Article
Full-text available
The purpose of the present study was to investigate the relation between nonverbal decoding skills and relationship well-being. Sixty college students were administered tests of their abilities to identify the affective meanings in facial expressions and tones of voice. The students also completed self-report measures of relationship well-being and depression. Correlational analyses indicated that errors in decoding facial expressions and tones of voice were associated with less relationship well-being and greater depression. Hierarchical regression revealed that nonverbal decoding accuracy was significantly related to relationship well-being even after controlling for depression.
Article
Full-text available
This preliminary study presents data on training to improve the accuracy of judging facial expressions of emotion, a core component of emotional intelligence. Feedback following judgments of angry, fearful, sad, and surprised states indicated the correct answers as well as difficulty level of stimuli. Improvement was greater for emotional expressions originating from a cultural group more distant from participants’ own family background, for which feedback likely provides greater novel information. These results suggest that training via feedback can improve emotion perception skill. Thus, the current study also provides suggestive evidence for cultural learning in emotion, for which previous research has been cross-sectional and subject to selection biases.
Article
Full-text available
Encoders were video recorded giving either truthful or deceptive descriptions of video footage designed to generate either emotional or unemotional responses. Decoders were asked to indicate the truthfulness of each item, what cues they used in making their judgements, and then to complete both the Micro Expression Training Tool (METT) and Subtle Expression Training Tool (SETT). Although overall performance on the deception detection task was no better than chance, performance for emotional lie detection was significantly above chance, while that for unemotional lie detection was significantly below chance. Emotional lie detection accuracy was also significantly positively correlated with reported use of facial expressions and with performance on the SETT, but not on the METT. The study highlights the importance of taking the type of lie into account when assessing skill in deception detection.
Article
Full-text available
Previous studies have consistently shown emotion regulation to be an important predictor of intercultural adjustment. Emotional intelligence theory suggests that before people can regulate emotions they need to recognize them; thus emotion recognition ability should also predict intercultural adjustment. The present study tested this hypothesis in international students at three times during the school year. Recognition of anger and emotion regulation predicted positive adjustment; recognition of contempt, fear and sadness predicted negative adjustment. Emotion regulation did not mediate the relationship between emotion recognition and adjustment, and recognition and regulation jointly predicted adjustment. These results suggest recognition of specific emotions may have special functions in intercultural adjustment, and that emotion recognition and emotion regulation play independent roles in adjustment.
Article
Full-text available
In this article, we attempt to distinguish between the properties of moderator and mediator variables at a number of levels. First, we seek to make theorists and researchers aware of the importance of not using the terms moderator and mediator interchangeably by carefully elaborating, both conceptually and strategically, the many ways in which moderators and mediators differ. We then go beyond this largely pedagogical function and delineate the conceptual and strategic implications of making use of such distinctions with regard to a wide range of phenomena, including control and stress, attitudes, and personality traits. We also provide a specific compendium of analytic procedures appropriate for making the most effective use of the moderator and mediator distinction, both separately and in terms of a broader causal system that includes both moderators and mediators.
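Because the moderator/mediator distinction in this classic paper is procedural, a worked example helps: the sketch below runs the three regressions commonly used to probe mediation (X on Y, X on M, then X and M together on Y) on synthetic data with statsmodels. Variable names and effect sizes are illustrative assumptions only.

```python
# Sketch of the three-regression mediation probe associated with this paper:
# (1) X -> Y, (2) X -> M, (3) X + M -> Y. Synthetic data; a shrunken direct
# effect of X in step 3 is consistent with mediation through M.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(9)
x = rng.normal(size=300)                      # predictor
m = 0.6 * x + rng.normal(size=300)            # mediator
y = 0.5 * m + 0.1 * x + rng.normal(size=300)  # outcome

step1 = sm.OLS(y, sm.add_constant(x)).fit()                        # X -> Y
step2 = sm.OLS(m, sm.add_constant(x)).fit()                        # X -> M
step3 = sm.OLS(y, sm.add_constant(np.column_stack([x, m]))).fit()  # X + M -> Y

print(f"total effect of X: {step1.params[1]:.2f}")
print(f"direct effect of X after controlling M: {step3.params[1]:.2f}")
```

A moderator, by contrast, would enter as an interaction term (X * W) rather than as an intermediate variable on the causal path.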
Article
Full-text available
Describes the facial musculature and its lower motor neuron innervation. Upper motor neuron innervation from pyramidal and extrapyramidal circuits is explored, with special attention to the respective roles of these systems in voluntary vs emotional facial movements. Also discussed are the evolution of volitional and emotional motor systems, the behavioral and neurological differences between the upper and lower face, the mechanisms of proprioceptive feedback from the face, and asymmetry in facial expression. (117 ref) (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Article
Full-text available
The ability to recognize and respond appropriately to facial expressions of emotion is essential for interpersonal interaction. Individuals with mental retardation have problems not only in recognizing but also in accurately producing facial expressions of emotion. In Experiment 1, directed rehearsal was used to teach six boys with mild and moderate mental retardation to increase their ability to recognize facial expressions of emotion. In addition, their ability to produce the six basic facial expressions of emotion was periodically assessed throughout the study. The results showed that the boys' accuracy in recognizing facial expressions of emotion increased rapidly with instruction and that their increased accuracy was maintained at 8- and 12-week assessments following the termination of instruction. However, their increased levels of recognition did not generalize to the production of these emotions. In Experiment 2, four boys who had participated in the first study were provided with directed rehearsal training in the production of the six basic facial expressions of emotion. Their ability to produce facial expressions of emotion increased with instruction and was maintained following the termination of instruction. In addition, independent raters judged that the boys' production of these emotions matched the emotions that they were required to produce, suggesting a socially valid behavior change. These studies showed that the ability of children with mental retardation to recognize and produce facial expressions of emotion can be enhanced through instruction.
Article
Full-text available
The authors investigated whether accuracy in identifying deception from demeanor in high-stake lies is specific to those lies or generalizes to other high-stake lies. In Experiment 1, 48 observers judged whether 2 different groups of men were telling lies about a mock theft (crime scenario) or about their opinion (opinion scenario). The authors found that observers' accuracy in judging deception in the crime scenario was positively correlated with their accuracy in judging deception in the opinion scenario. Experiment 2 replicated the results of Experiment 1, as well as P. Ekman and M. O'Sullivan's (1991) finding of a positive correlation between the ability to detect deceit and the ability to identify micromomentary facial expressions of emotion. These results show that the ability to detect high-stake lies generalizes across high-stake situations and is most likely due to the presence of emotional clues that betray deception in high-stake lies.
Article
Full-text available
Emotion recognition, the most reliably validated component within the construct of emotional intelligence, is a complicated skill. Although emotion recognition skill is generally valued in the workplace, "eavesdropping," or relatively better recognition ability with emotions expressed through the less controllable "leaky" nonverbal channels, can have detrimental social and workplace consequences. In light of theory regarding positive emotion in organizations, as well as research on the consequences of perceiving negative information, the authors hypothesized and found an interaction between nonverbal channel and emotional valence in predicting workplace ratings from colleagues and supervisors. Ratings were higher for eavesdropping ability with positive emotion and lower for eavesdropping ability with negative emotion. The authors discuss implications for the complexity of interventions associated with emotional intelligence in workplace settings.
Article
The view that certain facial expressions of emotion are universally agreed on has been challenged by studies showing that the forced-choice paradigm may have artificially forced agreement. This article addressed this methodological criticism by offering participants the opportunity to select a "none of these terms are correct" option from a list of emotion labels in a modified forced-choice paradigm. The results show that agreement on the emotion label for particular facial expressions is still greater than chance, that artifactual agreement on incorrect emotion labels is obviated, that participants select the "none" option when asked to judge a novel expression, and that adding 4 more emotion labels does not change the pattern of agreement reported in universality studies. Although the original forced-choice format may have been prone to artifactual agreement, the modified forced-choice format appears to remedy that problem.
Chapter
In psychotherapy research, one is often hard pressed to make sense out of the many behaviors, processes, and other phenomena which can be observed in the therapy situation. The present report is concerned with one class of behaviors and processes which cannot be observed—namely, facial expressions which are so short-lived that they seem to be quicker-than-the-eye. These rapid expressions can be seen when motion picture films are run at about one-sixth of their normal speed. The film and projector thus become a sort of temporal microscope, in that they expand time sufficiently to enable the investigator to observe events not otherwise apparent to him.
Article
Children and adults with mental retardation were tested on their ability to recognize facial expressions of emotion. The sample consisted of 80 children and adults with mental retardation and a control group of 80 nonhandicapped children matched on mental age and gender. Ekman and Friesen's normed photographs of the six basic emotions (anger, disgust, fear, happiness, sadness, and surprise) were used in a recognition task of facial expressions. Subjects were individually read two-sentence stories identifying a specific emotion, presented with a randomized array of the six photographs of the basic facial expressions of emotion, and then asked to select the photograph that depicted the emotion identified in the story. This procedure was repeated with 24 different stories, with each of the six basic emotions being represented four times. Results showed that, as a group, individuals with mental retardation were not as proficient as their mental-age-matched nonhandicapped control subjects at recognizing facial expressions of emotion. Although adults with mild mental retardation were more proficient at this task than those with moderate mental retardation, this finding was not true for children. There was a modest difference between the children with moderate mental retardation and their nonhandicapped matched controls in their ability to recognize facial expressions of disgust.
Article
This second edition of The Expression of the Emotions in Man and Animals was edited by his son Francis Darwin and published in 1890. As Sir Francis notes in his brief preface, because the first edition did not sell out in Charles Darwin’s lifetime, ‘he had no opportunity of publishing the material collected with a view to a second edition.’ This material, in the form of ‘a mass of letters, extracts from and references to books’ was utilised in the second edition, as were Darwin’s pencilled corrections in his own volume of the first. The book is a study of the muscular movements of the face (both human and animal) triggered by the emotions being felt - a ‘physical’ response to a ‘mental’ sensation. Darwin’s detailed analysis of what actually happens to a body in a state of fear, or joy, or anger is illustrated by photographic images.
Article
Acknowledgements. Contributors. Editor's preface.
Part I. The Mechanism of Human Facial Expression, or an Electrophysiological Analysis of the Expression of the Emotions: Preface.
Section 1. Introduction: 1. A review of previous work on muscle action in facial expression. 2. Principal facts that emerge from my electrophysiological experiments. 3. The reliability of these experiments. 4. The purpose of my research.
Section 2. Scientific Section: Foreword. 5. Anatomical preparations, and portraits of the subjects who underwent electrophysiological experiments. 6. The muscle of attention (m. frontalis). 7. The muscle of reflection (superior part of m. orbicularis oculi, that part of the muscle called the sphincter of the eyelids). 8. The muscle of aggression (m. procerus). 9. The muscle of pain (m. corrugator supercilii). 10. The muscles of joy and benevolence (m. zygomaticus major and the inferior part of m. orbicularis oculi). 11. The muscle of lasciviousness (transverse part of m. nasalis). 12. The muscle of sadness (m. depressor anguli oris). 13. The muscles of weeping and whimpering (m. zygomaticus minor and m. levator labii superioris). 14. Muscles complementary to surprise (muscles that lower the mandible). 15. The muscle of fright, of terror (m. platysma). 16. A critical study of several antiquities from the point of view of m. corrugator supercilii and m. frontalis.
Section 3. Aesthetic Section: Foreword. 17. Aesthetic electrophysiological studies on the mechanism of human facial expression. 18. Further aesthetic electrophysiological studies. 19. Synoptic table on the plates of the Album.
Part II. Commentary Chapters: 20. The highly original Dr Duchenne (R. Andrew Cuthbertson). 21. The Duchenne de Boulogne collection in the Department of Morphology, L'Ecole Nationale Superieure des Beaux Arts (Jean-Francois Debord). 22. Duchenne today: facial expression and facial surgery (John T. Hueston). 23. Duchenne and facial expression of emotion (Paul Ekman). Index.
Article
Examined the developmental acquisition of females' superiority in decoding nonverbal cues. Three age groups (121 male and 129 female 9–15 yr olds, 46 male and 63 female high school students, and 32 male and 49 female undergraduates) were examined cross-sectionally, and 24 male and 24 female 11–24 yr olds were examined longitudinally. Decoding of 4 types of nonverbal cues (face, body, tone, and discrepancies), arranged from the most to the least controllable (most "leaky") channel, was examined. ANOVA and the appropriate contrast showed that as age increased, females lost more and more of their advantage for the more leaky or more covert channels but that they gained more and more of their advantage for the less leaky channels. Results of the longitudinal 1-yr study support those of the cross-sectional study—during the year, women lost more and more of their advantage in more leaky channels. Results are consistent with a socialization interpretation—that as females grow older, they may learn to be more nonverbally courteous or accommodating. (25 ref) (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Article
Verbal and kinesic behavior can be minutely described and transcribed from the micro level of 1/48 sec (or faster) up to and including much wider behavioral sequences. The use of high-speed cameras, including the appropriate analytic instrumentation, thus may provide a method for the microscopic analysis and "organizational description" of both normal and pathological behavior and permit a comparison of them across many levels. (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Article
Three series of studies with 832 high school students, college students, and adults investigated the hypothesis that nonverbally, women are more interpersonally accommodating than men. The 1st series of studies showed that women lost much of their advantage in decoding visual cues when the cues were based on displays too brief to be under good sender control. The 2nd series of studies showed that as nonverbal cues became less intended (more "leaky"), women showed decreasing advantage over men in accuracy of decoding nonverbal cues. There was also a trend for women who were more skilled at eavesdropping on nonverbal cues to be seen as having less successful social outcomes. Women were also more biased to use (the more controllable) visual cues than tone of voice cues and especially so when the video cues were of the face rather than of the "leakier" body. The 3rd series of studies showed that women were more polite in their ascription of characteristics to others, more accurate in decoding of nondeceptive behavior, but substantially more likely to interpret deceptive cues as the deceiver wanted them to be interpreted. Finally, it was shown that women's nonverbal cues were more easily read than men's. (28 ref) (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Article
With subjects who are given training in analyzing facial expression through a period of ten days with a test every other day, the average gain in ability is 51% over the original ability. A group which is given the same kind of training in the reading of expressions will become more uniform in ability as the training progresses. There is a significant negative correlation between initial ability to judge faces and improvement in that ability. This negative correlation is probably due to a difference of attitude; the better judges have habitually a less analytical attitude. There was an average advantage of 23% for the subjects in a sixty-second exposure as opposed to a fifteen-second exposure. The poorer judges have a greater advantage in the longer exposures than the better judges. There were no sex differences in original ability, in variation of ability, in degree of improvement, in advantage of a longer exposure over a shorter one, or in improvement during either a longer or shorter exposure period. (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Article
Discusses research on facial expressions of emotion and presents suggestions for recognizing and interpreting various expressions. Using many photographs of faces that reflect surprise, fear, disgust, anger, happiness, and sadness, methods of correctly identifying these basic emotions and of understanding when people try to mask or simulate them are outlined. Practice exercises are also included. (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Article
The present paper outlines the impact of group training on the emotion recognition of six individuals with a moderate learning disability. The accuracy of identifying emotions depicted by line drawings and photographs with and without an emotional context was examined before and after group training. The results indicated that there was a significant overall increase in accuracy in identifying emotions following group training. In addition, a significant increase was found in the ability to correctly label emotions depicted by line drawings typically used in symbol-based communication systems. The implications of the results are discussed.
Article
The structure of skill at decoding nonverbal cues was examined for 150 high school students and 95 college students. An overall principal components analysis yielded four factors differing in the complexity of the message (pure versus mixed) and in the relative importance of the video versus the audio modality. Factor 1 (pure video) was defined by accuracy at face and body cues of ordinary (2 second) and very brief exposure length. Factor 2 (mixed video) was defined by accuracy at face and body cues with a “noisy” background. Factor 3 (mixed audio) was defined by accuracy at decoding discrepant cues and “noisy” audio cues. Factor 4 (pure audio) was defined by accuracy at pure tone of voice cues. The overall evidence suggested that despite a nontrivial degree of relationship among all measures of skill at decoding nonverbal cues (Armor's Theta = .62), it would increase our theoretical and empirical precision to conceptualize nonverbal decoding ability as made up of several relatively unrelated subskills.
Article
Asymmetries of the smiling facial movement were more frequent in deliberate imitations than in spontaneous emotional expressions. When asymmetries did occur, they were usually stronger on the left side of the face if the smile was deliberate. Asymmetrical emotional expressions, however, were about equally divided between those stronger on the left side of the face and those stronger on the right. Similar findings were obtained for the actions involved in negative emotions, but a small database made these results tentative.
Article
In this article, we report the development of a new test designed to measure individual differences in emotion recognition ability (ERA), five studies examining the reliability and validity of the scores produced using this test, and the first evidence for a correlation between ERA measured by a standardized test and personality. Utilizing Matsumoto and Ekman's (1988) Japanese and Caucasian Facial Expressions of Emotion (JACFEE) and Neutral Faces (JACNeuF), we call this measure the Japanese and Caucasian Brief Affect Recognition Test (JACBART). The JACBART improves on previous measures of ERA by (1) using expressions that have substantial validity and reliability data associated with them, (2) including posers of two visibly different races, (3) balanced across seven universal emotions, (4) with equal distribution of poser race and sex across emotions, and (5) in a format that eliminates afterimages associated with fast exposures. Scores derived using the JACBART are reliable, and three studies demonstrated a correlation between ERA and the personality constructs of Openness and Conscientiousness, while one study reported a correlation with Extraversion and Neuroticism.
Article
This paper describes the validation of the Interpersonal Perception Task (IPT), a new method for studying the process of social perception. The IPT is a videotape consisting of 30 scenes. Each scene is paired with a multiple-choice question about the interaction depicted in the scene. All scenes contain full-channel sequences of unscripted behavior and employ an objective criterion of accurate judgment. Five common types of social interaction are represented: status, intimacy, kinship, competition, and deception. In the first study the IPT was administered to 438 subjects. Results indicated that subjects performed better than chance for 28 of the 30 scenes and that females performed better than males. A second study investigated the possibility that the people who appear in the IPT display idiosyncratic or unrepresentative behaviors. Three coders performed a scene-by-scene content analysis of the IPT, noting the presence or absence of behaviors which previous researchers have found to be associated with the five areas represented in the IPT. In all but one scene, coders found enough behavioral information to enable correct interpretation. A third study employed a peer nomination procedure to explore the construct validity of the IPT. Subjects obtaining higher scores on the IPT were perceived by their friends as more socially skilled. Finally, in an investigation of the convergent and discriminant validity of the IPT, we found no relationship with a visual acuity task or the Machiavellian scale, a significant positive correlation with the Self-Monitoring Scale, a significant positive correlation with the Social Interpretations Task (SIT), and an even stronger positive correlation with those SIT items which measure the same areas as the IPT. Uses of the IPT to investigate the process and accuracy of interpersonal perception are discussed.
Article
The ability to recognize accurately and respond appropriately to facial expressions of emotion is essential for interpersonal interaction. Individuals with mental retardation typically are deficient in these skills. The ability of 7 adults, 1 with severe and 6 with moderate mental retardation, to recognize facial expressions of emotion correctly was assessed. Then, they were taught this skill using a combination of a discrimination training procedure for differentiating facial movements, directed rehearsal, and Ekman and Friesen's "flashing photograph" technique. Their average increase in accuracy over baseline was at least 30% during the course of the training and over 50% during the last 5 days of the training phase. Further, these individuals were able to generalize their skills from posed photographs to videotaped role plays and were able to maintain their enhanced skills during the 8 to 9 months following the termination of training. This is the first study to show that individuals with mental retardation can be taught skills that enhance their ability to recognize facial expressions of emotion.
Article
A sample of 511 children and adults with mental retardation or borderline intelligence (1 SD below the mean IQ) and children of average intelligence were tested on their ability to recognize the six basic facial expressions of emotion as they are exemplified in Ekman and Friesen's (1975) normed photographs. Each subject was shown four sets of six photographs, one of each emotion. Subjects were read 24 short stories; after each one they were asked to point to the photograph that depicted the emotion described. Children and adults with mental retardation or borderline intelligence were less proficient at identifying facial expressions of emotion than were children of average intelligence. Among individuals with mental retardation or borderline intelligence, recognition accuracy for facial emotion increased with IQ. Among individuals with average intelligence, recognition accuracy increased with age.